Bias in artificial intelligence is a known problem that researchers have been working on for some time. The FTC is giving businesses an incentive to use unbiased AI, warning that legal action could follow if their algorithms prove biased.
Computers, smartphones, tablets, and IoT devices are commonplace today. Many of us, though, don't think about the complexity inside those devices or how they function. Developers understand that complexity all too well, and they also understand that sometimes they don't fully understand it at all. In most industries, the outcome of a fix can be predicted with relative accuracy. Not in technology. When you fix a coding problem, you have to test it before it's deployed because you have no idea what could break; it's one of the most common causes of cloud outages. As more businesses turn to artificial intelligence for data analysis and business efficiency, the FTC has a new warning: keep the bias out of your AI, or action could be taken.
Artificial intelligence has many uses, and with advancements in machine learning, businesses are learning how to make the technology work for them. One of the things we often talk about is the need for an expert's help on complex projects that require a deep level of understanding. It's been widely reported that there is bias in AI, largely due to the data the underlying machine learning models were trained on, often scraped from around the web. It's a problem that researchers and developers around the world have been working to solve, and the FTC is adding an incentive for companies that provide AI and ML training data:
“Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results,” writes Elisa Jillson, an FTC attorney, in a blog post. “Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to over promise what your algorithm can deliver. For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action.”
We are in the midst of a massive social justice movement, and not just in the U.S.; many countries are acknowledging that racial disparity is a problem. FTC Chair Rebecca Slaughter recently said that algorithm-based bias is “an economic justice issue.” Because of this movement, many businesses are looking to AI to help eradicate bias in the hiring process, as well as in other areas of data collection and analysis. But if the algorithms in use are themselves biased, those biases can be amplified rather than eliminated.
The FTC can prosecute companies under the Equal Credit Opportunity Act or the Fair Credit Reporting Act, both of which prohibit discrimination based on race, among other protected characteristics. It can also prosecute under Section 5 of the FTC Act, which prohibits unfair or deceptive practices.
Here’s the thing businesses need to remember, and we’ll use an analogy to help. Big tech and big business are akin to Shaq: dominating, aggressive, and quick to market. They can also absorb the blowback from the FTC should their product prove biased. Small businesses, startups, and non-Fortune-500 companies that want to break into the AI sector should be prepared to proceed more like John Stockton. They can’t just push their product out aggressively, acting like Shaq without ensuring quality, because they are far less likely to survive an FTC action. Smaller businesses that want to be disruptive in AI and ML need to shift their focus from first-to-market to best-quality-in-market. A first-to-market product will often be biased, so for smaller companies to stay competitive, quality must be a top consideration.
On top of this, businesses that don’t create their own AI or ML and purchase it instead also need to do their due diligence. You can build an amazing product and buy the AI/ML datasets to power it and make it efficient, but if that AI turns out to be biased, it’s all down the drain. Once the purchase is made, companies need to pay close attention to the outcomes and results those tools produce; if any discrepancies or bias become apparent, pause the tools until the problem is understood. Don’t let something preventable discredit or shut down your business.
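Monitoring outcomes doesn't have to be complicated. As a minimal sketch of what auditing a purchased tool's results might look like, the snippet below applies the "four-fifths rule," an EEOC guideline that flags possible disparate impact when one group's selection rate falls below 80% of the highest group's rate. The group labels and numbers here are hypothetical, and a real audit would involve far more than this single check.

```python
# Hedged sketch: a basic four-fifths (80%) disparate-impact check on
# hiring-style decisions. All group names and data are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs -> rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Returns True per group if its rate is >= 80% of the top group's rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: (r / top) >= threshold for g, r in rates.items()}

# Hypothetical outcomes from a purchased screening tool:
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40   # 60% selected
    + [("group_b", True)] * 30 + [("group_b", False)] * 70  # 30% selected
)
print(four_fifths_check(decisions))
# group_b's ratio is 0.30 / 0.60 = 0.5, below the 0.8 threshold, so it fails.
```

A failed check like this wouldn't prove illegal discrimination on its own, but it is exactly the kind of discrepancy that should prompt a company to pause the tool and ask its vendor hard questions.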
Computers are hard. Consult or hire an expert who can serve as a point of contact should any problems or questions arise.