Computers Are Hard: Bias in AI Could Bring FTC Action

Computers, smartphones, tablets, and IoT devices are commonplace today, but few of us think about the complexity inside those devices and how they actually function. Developers understand that complexity all too well, and sometimes they understand that they don't fully understand it at all. In most industries, when you fix a problem, the outcome can be predicted with reasonable accuracy. Not in technology. When you fix a coding problem, you have to test it before it's deployed, because you have no idea what else could break; untested changes are among the most common causes of cloud outages. As more businesses turn to artificial intelligence for data analysis and business efficiency, the FTC has a new warning: keep the bias out of your AI, or enforcement action could follow.

Artificial intelligence has many uses, and with advancements in machine learning, businesses are learning how to make the technology work for them. One of the things we often talk about is the need to have an expert help with complex projects that require a deep level of understanding. It's been widely reported that there is bias in AI, largely because of the data the underlying machine-learning models are trained on, much of it scraped from around the web. It's a problem that researchers and developers around the world have been working to solve, and the FTC is adding incentive for companies that build, sell, or train AI and ML systems:
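What does checking for that kind of data bias look like in practice? Here's a minimal sketch in Python, assuming a hypothetical tabular training set with illustrative "gender" and "race" columns; the file name and the 5% threshold are placeholders, not a legal standard:

```python
import pandas as pd

# Hypothetical training data for a hiring model; the file and column
# names are illustrative placeholders, not from any specific product.
df = pd.read_csv("hiring_training_data.csv")

# Compare each demographic group's share of the training data. A group
# that is barely represented gives the model little signal, and the
# model's errors tend to concentrate on that group.
for column in ("gender", "race"):
    shares = df[column].value_counts(normalize=True)
    print(f"\n{column} representation in training data:")
    print(shares.to_string())
    if shares.min() < 0.05:  # illustrative threshold, not a legal standard
        print(f"WARNING: at least one {column} group is under 5% of the data")
```

A check like this won't catch subtler problems, such as proxy variables that correlate with protected characteristics, but it's the kind of basic diligence the FTC's guidance implies.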

“Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results,” writes Elisa Jillson, an FTC attorney, in a blog post. “Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action.”

We are in the midst of a massive social justice movement, and not just in the U.S.; many countries are acknowledging that racial disparity is a problem. Even Acting FTC Chair Rebecca Slaughter recently said that algorithm-based bias is “an economic justice issue.” Because of this movement, many businesses are looking to AI to help eradicate bias in hiring as well as in other areas of data collection and analysis. But if the algorithms in use are themselves biased, those biases can be amplified rather than removed.
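How do you tell whether a hiring model is amplifying bias? One common smoke test is the “four-fifths rule” that U.S. regulators use as a rough screen for adverse impact: if any group's selection rate falls below 80% of the highest group's rate, the outcome deserves a closer look. Here's a short Python sketch with made-up data and illustrative column names:

```python
import pandas as pd

# Hypothetical model outputs: one row per applicant, with a group label
# and whether the model recommended hiring. Data and column names are
# illustrative placeholders.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group.
rates = results.groupby("group")["selected"].mean()

# Four-fifths rule: flag any group whose selection rate is below 80%
# of the best-off group's rate. A rough screen, not a legal verdict.
ratio = rates / rates.max()
flagged = ratio[ratio < 0.8]

print(rates.to_string(), "\n")
if not flagged.empty:
    print("Possible adverse impact for group(s):", ", ".join(flagged.index))
```

Failing this screen doesn't prove discrimination, and passing it doesn't prove fairness, but it's a cheap early-warning signal that something in the pipeline needs scrutiny.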

The FTC can bring enforcement actions under the Equal Credit Opportunity Act, which prohibits credit discrimination on the basis of race and other protected characteristics, and the Fair Credit Reporting Act, which governs the use of consumer reports in decisions about credit, employment, housing, and insurance. It can also act under Section 5 of the FTC Act, which prohibits unfair or deceptive practices.

Here's the thing businesses need to remember, and a basketball analogy helps. Big tech companies are akin to Shaq: dominant, aggressive, and quick to market. They can also absorb the blowback from the FTC should their product prove biased. Small businesses, startups, and non-Fortune-500 companies that want to break into the AI sector should be prepared to play more like John Stockton. They can't just push a product out aggressively, acting like Shaq without ensuring quality, because they are far less likely to survive an FTC action. Smaller businesses that want to be disruptive in AI and ML need to shift their focus from first-to-market to best-quality-in-market. A product rushed to market often ships with unexamined bias, so for smaller companies to stay competitive, quality must be a top consideration.

On top of this, businesses that don't build their own AI or ML and purchase it from a vendor also need to do their due diligence. You can have built an amazing product, powered by AI/ML and the datasets behind it, but if that AI turns out to be biased, it's all down the drain. Once the purchase is made, companies need to pay close attention to the outcomes and results those tools produce, and if any discrepancies or bias become apparent, pause the tools until the problem is understood. Don't let something preventable discredit or shut down your business.
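What might that ongoing attention look like? Below is a minimal monitoring sketch in Python; the class, the 80% disparity threshold, and the 30-observation minimum are all hypothetical placeholders, not a vendor API or a compliance standard:

```python
from collections import defaultdict

DISPARITY_THRESHOLD = 0.8  # illustrative, echoing the four-fifths rule

class OutcomeMonitor:
    """Tracks positive-outcome rates per group and trips a pause flag
    when one group's rate drifts too far below another's."""

    def __init__(self):
        self.totals = defaultdict(int)
        self.positives = defaultdict(int)
        self.paused = False

    def record(self, group: str, positive: bool) -> None:
        """Log one decision from the purchased tool, then re-check."""
        self.totals[group] += 1
        self.positives[group] += int(positive)
        self._check()

    def _check(self) -> None:
        # Only compare groups with a minimal sample size (placeholder value).
        rates = {
            g: self.positives[g] / self.totals[g]
            for g in self.totals
            if self.totals[g] >= 30
        }
        if len(rates) < 2:
            return
        best = max(rates.values())
        if best > 0 and min(rates.values()) / best < DISPARITY_THRESHOLD:
            # Stop relying on the tool; start asking the vendor questions.
            self.paused = True
```

Wiring a check like this into the pipeline that consumes the vendor's decisions means the pause happens automatically, rather than after a complaint or a regulator's letter.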

Computers are hard. Consult or hire an expert who can serve as a point of contact should any problems or questions arise.

About the Author

PWV Consultants is a boutique group of industry leaders and influencers from the digital tech, security, and design industries that acts as a trusted technical partner for many Fortune 500 companies, high-visibility startups, universities, defense agencies, and NGOs. It was founded by 20-year software engineering veterans who have founded or co-founded several companies. PWV experts act as trusted advisors and mentors to numerous early-stage startups, and have held the titles of software and software security executive, consultant, and professor. PWV's expert consulting and advisory work spans several high-impact industries, including finance, media, medical tech, and defense contracting. PWV's founding experts also authored the highly influential precursor HAZL (jADE) programming language.
