The Truth About Artificial Intelligence

Continued advancements in artificial intelligence make it more widely accessible to businesses. But the truth is, we really don’t understand AI at all.

Artificial intelligence is not only highly touted as the next big thing, but businesses around the world are buying in and discovering how to make it work for them. Machine learning advancements have changed the way we look at AI, and with service providers offering datasets directly to businesses, the time it takes to implement AI is far shorter than it once was. Recently, though, we have collectively learned that AI is often biased, that it can be fooled with written words and that the FTC will get involved if your company uses biased algorithms (whether you know they're biased or not). The truth about AI, though, is that we really don’t understand it at all.

You see, artificial intelligence is neither artificial nor intelligent. Not in the way we define what is artificial or what is intelligent, anyway. Wired magazine recently sat down with Kate Crawford, a researcher at Microsoft and professor at the University of Southern California. Her book, Atlas of AI, looks at the underpinnings of some versions of AI. 

In the interview, Crawford said, “AI is made from vast amounts of natural resources, fuel, and human labor. And it’s not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.”

She goes on to discuss the lack of regulations around artificial intelligence and how hastily made algorithms and poorly trained machine learning models negatively impact how we use AI today. Combine that lack of regulation with the FTC saying it will take action against companies using biased AI, and businesses find themselves in a lot of gray area.

Artificial intelligence is fast becoming the tool of the lazy. Computers shouldn’t be making decisions based on poorly constructed datasets, and building AI is still an intensive and expensive process. It should be used in applications that are ethically reviewed and tested, with training sets collected for its designed purpose. And don’t forget about mob mentality: the bigger the mob, the bigger the impact of poor decision-making. Crowd or collective intelligence is what can make the internet a doorway to the world, but there are limits to everything. AI that thinks like a mob is not necessarily a good thing.

Let’s be clear, the use of artificial intelligence isn’t going anywhere. In fact, its use is only going to increase as the technology becomes more widely available. What we mean is that AI algorithms and datasets should be extensively and intensively trained prior to implementation, and that training should be specific to what the algorithm is designed to identify. For example, if an AI algorithm is being used to screen potential new hires against specific qualifications, it should be trained on resumes and information related to that specific job. Resumes often don’t include things like race, age or gender, which reduces potential bias. Humans can make inferences and educated guesses about a person based on the language used in a resume, but a computer that is untrained on cultural differences will not be able to discern that information.
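As a minimal illustration of the point above, consider a pre-processing step that drops bias-prone attributes before candidate records ever reach a screening model. This is a hypothetical sketch, not any real hiring pipeline; the field names and the `scrub_candidate` helper are assumptions for illustration. Note that, as the paragraph suggests, removing explicit fields reduces bias but cannot eliminate it, since proxies such as language or school names may remain.

```python
# Hypothetical sketch: removing bias-prone fields from candidate
# records before training a resume-screening model.
# Field names are illustrative, not from any real system.

BIAS_PRONE_FIELDS = {"race", "age", "gender", "name", "photo_url"}

def scrub_candidate(record: dict) -> dict:
    """Return a copy of the record with bias-prone fields removed,
    keeping only job-relevant information for training."""
    return {k: v for k, v in record.items() if k not in BIAS_PRONE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "age": 34,
    "gender": "F",
    "skills": ["python", "sql"],
    "years_experience": 8,
}

print(scrub_candidate(candidate))
# {'skills': ['python', 'sql'], 'years_experience': 8}
```

Even with a scrubbing step like this, the model still learns from whatever correlations remain in the job-relevant fields, which is why ethical review of the training set itself still matters.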

That example is an oversimplification of how AI and ML work. ML is often trained on information found on the internet, which can be anything from scientific journal publications to 280-character Twitter posts. Where the information comes from matters. Scientific journals are factual, grounded in studies and statistics. Social media is inherently biased, with viewpoints on every aspect of every spectrum. While we want AI to think like a human and make decisions like a human, it never will. We should never expect a machine to be capable of understanding in the way that humans understand.

Artificial intelligence is an incredibly useful tool for almost every business across a variety of industries. But it is still in its infancy; we are nowhere near understanding its true capabilities, or how our own actions and the information we feed it shape it. These are tools, tools which are, in some cases, causing harm, and they are completely unregulated. Before we go too far in utilizing this technology, regulation, or at least ethical review, needs to happen. Once it does, businesses will have a much better idea of what is and is not allowed, and they’ll be able to better plan their AI future. Until then, business owners should use caution with AI and always consult an expert when in doubt.

About the Author

Pieter VanIperen, Managing Partner of PWV Consultants, leads a boutique group of industry leaders and influencers from the digital tech, security and design industries that acts as a trusted technical partner for many Fortune 500 companies, high-visibility startups, universities, defense agencies, and NGOs. He is a 20-year software engineering veteran who founded or co-founded several companies. He acts as a trusted advisor and mentor to numerous early-stage startups, and has held the titles of software and software security executive, consultant and professor. His expert consulting and advisory work spans several industries, including finance, media, medical tech, and defense contracting. He also authored the highly influential precursor HAZL (jADE) programming language.
