Computers Are Hard: AI’s Language Obstacle

Artificial intelligence is widely touted as the next big thing, but there’s a giant obstacle to overcome first. The nuances of human language and speech are not easy to master.

Technology evolves every day, and the rate of change today is far greater than it was even 10 years ago. Video quality improvements, increased automation, robotics, machine learning (ML) and artificial intelligence (AI) are all on the list of technologies advancing daily. As we dive further into automation, we are learning how ML and AI can help us automate even more tasks. Businesses worldwide are beginning to implement AI in some fashion to assist with operations, which is a great thing, but AI still has limitations.

One thing we often fail, as humans, to appreciate is the nuance of our languages. Not just English, but every language. The same word can have different meanings depending on context. Machines do not understand context, and they do not understand the emotion in a tone of voice. At least, not yet. That is one reason AI is limited. Another limitation is that AI is built on ML models trained by humans, so if the training data is skewed in any way, any model trained on it will be skewed as well.

Let’s take a look at a real-life example of AI getting it wrong. Last year, Antonio Radić, who hosts a chess channel on YouTube with over one million subscribers, had a video cut off during a live-stream. He was interviewing a grandmaster when the broadcast abruptly cut out, with viewers being told that the video was removed for “harmful and dangerous” content. The channel remained offline for 24 hours.

There still are no answers as to why the video was interrupted and taken down mid-stream; YouTube has only said that the removal was a mistake. The mistake is likely due, in part, to artificial intelligence programs designed to detect misinformation, abuse and hate speech. According to Wired, Ashique KhudaBukhsh, a project scientist specializing in AI at Carnegie Mellon University and a serious chess player himself, wondered whether YouTube’s algorithm may have been confused by discussions involving black and white pieces, attacks, and defenses. So he and a colleague ran an experiment, which concluded that without a human in the loop, off-the-shelf classifiers’ predictions on chess discussions can be misleading.

Chess pieces are typically black and white, but without the context of them being game pieces, AI cannot determine whether hate speech is involved. AI can’t just look for specific words and phrases; the algorithm has to infer the meaning of a string of words. Asking a machine to infer something only works when appropriate parameters are set.
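To see why, here is a minimal sketch of the problem. It is not YouTube’s actual system; the keyword list and the chess vocabulary are hypothetical, chosen only to show how a context-free keyword check misfires on chess commentary while even a crude notion of context avoids the false flag.

```python
# Hypothetical blocklist: terms that look alarming out of context
# but are ordinary chess vocabulary.
FLAGGED_TERMS = {"black", "white", "attack", "attacks", "threat", "capture"}

# Hypothetical context clues: words that signal a chess discussion.
CHESS_CONTEXT = {"bishop", "knight", "rook", "pawn", "checkmate", "opening"}

def words_of(text: str) -> set:
    """Lowercase the text and strip simple punctuation."""
    return {w.strip(".,!?").lower() for w in text.split()}

def naive_flag(text: str) -> bool:
    """Flag the text if it contains any blocklisted term, ignoring context."""
    return bool(words_of(text) & FLAGGED_TERMS)

def context_aware_flag(text: str) -> bool:
    """Suppress the flag when chess vocabulary appears alongside the terms."""
    if words_of(text) & CHESS_CONTEXT:
        return False
    return naive_flag(text)

commentary = "White attacks the black bishop, threatening checkmate."
print(naive_flag(commentary))          # True: the keyword check misfires
print(context_aware_flag(commentary))  # False: chess context suppresses it
```

Real classifiers use learned statistical models rather than keyword sets, but the failure mode is the same: without a signal that the words refer to game pieces, “black” and “attacks” look like something they are not.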

“Fundamentally, language is still a very subtle thing,” says Tom Mitchell, a CMU professor who has previously worked with KhudaBukhsh. “These kinds of trained classifiers are not soon going to be 100 percent accurate.”

Language is going to be a persistent problem for AI, likely for years to come. If you think about it, though, humans take YEARS to learn new languages. And we understand context! A machine has to be programmed to recognize context, which means programmers have to give it as many options as possible to choose from.

Another limitation of AI lies in the training data. If companies use the likes of Twitter, Facebook and other social media platforms for training data, AI will never be unbiased. Humans are inherently biased; it’s just how we are made. We do our best to overcome these biases and accept differing opinions, but we are not perfect. Which means AI is not perfect.

The cool thing about AI, though, is that it doesn’t HAVE to be perfect. Humans are still needed to make knowledge-based decisions and form opinions based on life experiences. Can AI learn from those things? Yes, and AI will continue to learn, improve and become more reliable the longer we use it. But it is incredibly important that businesses know the limitations of the tools and technology they are using. AI is incredibly complicated. To effectively and successfully implement ML and AI in business, it is vital that an expert in the field is consulted, if not hired, to help.

AI and ML are definitely the next big thing in technology; they are going to pave the way for the next round of innovations that propel our planet forward. But they must be handled appropriately, and humans must remember that even AI has to be verified once in a while!

About the Author

Pieter VanIperen, Managing Partner of PWV Consultants, leads a boutique group of industry leaders and influencers from the digital tech, security and design industries that acts as trusted technical partners for many Fortune 500 companies, high-visibility startups, universities, defense agencies, and NGOs. He is a 20-year software engineering veteran who founded or co-founded several companies. He acts as a trusted advisor and mentor to numerous early-stage startups, and has held the titles of software and software security executive, consultant and professor. His expert consulting and advisory work spans several industries in finance, media, medical tech, and defense contracting. He also authored the highly influential precursor HAZL (jADE) programming language.
