As more businesses adopt AI, it is important to understand the risks that come with it.
Artificial intelligence is a fast-rising technology that many businesses plan to take advantage of in 2022, but as with all new technology, there are inherent risks to using AI. It is well known that AI can be biased, which is not the fault of the technology but of the training data used to build it. Training data is often pulled from the internet: public data from all areas of the globe. Because humans are biased, a model trained on human-generated data will, in turn, become biased. It wasn’t biased to start, but it ended up that way because it was inadvertently taught to “see” things differently. Now, a recent study has shown that AI can determine a person’s race from a radiology image with far greater accuracy than the humans who created it.
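To make that mechanism concrete, here is a minimal sketch (entirely synthetic data, not from the study or any real system): a model trained on skewed historical decisions learns that skew as a weight, even though no line of code tells it to discriminate.

```python
# Minimal illustration with synthetic data: a model trained on biased historical
# decisions reproduces that bias, without ever being told to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)          # the feature that *should* drive decisions
group = rng.integers(0, 2, size=n)  # a sensitive attribute (0 or 1)

# Historical labels: mostly skill-based, but group 1 was unfairly penalized.
hired = ((skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns a negative weight on `group`, encoding the historical bias.
print("learned weights [skill, group]:", model.coef_[0])
```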
An international team of doctors took a deeper look into the disparate findings AI has been reported to return on medical imaging. There is no known marker of race that a human can see on a radiology image, so the team set out to figure out why AI can predict race where humans cannot. They hypothesized that if the model could identify a patient’s race, it would suggest the model had learned to recognize information it was not directly trained to recognize.
“To investigate the cause of previously established performance disparities by patient race, we studied several hypotheses. We performed three main groups of experiments: A) performance quantification of the deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities, B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus as predictors of race, and C) investigation into the underlying mechanism by which AI models can recognise race.”
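As a rough illustration of experiment A, the sketch below fine-tunes a standard CNN to predict a patient’s self-reported race from images. Everything here is a placeholder: the random tensors stand in for real radiology data, and the model choice and hyperparameters are assumptions, not the authors’ actual pipeline.

```python
# A minimal sketch, under stated assumptions, of training a deep learning model
# to predict race from medical images. Placeholder tensors stand in for a real
# labeled radiology dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

NUM_CLASSES = 3  # the study's main labels: Asian, Black, White

# Placeholder data: random "X-rays" replicated to 3 channels. Swap in a real
# image dataset with race labels to run the actual experiment.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # 3-way race classifier

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```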
Through their investigation, they found results similar to previous studies: ML algorithms can predict, with high accuracy, whether a patient is Asian, Black or White. They then tested how this could be possible by using different forms of imaging (X-ray, CT, mammogram, etc.) and postulated that the AI could be reading differences in bone density. However, even when the images were cropped and pixelated to the point that a human could not tell what they showed, the algorithm still predicted race with high accuracy.
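That degradation test can be sketched the same way: pixelate the images by downsampling and upsampling, then re-check accuracy at each resolution. This continues from the placeholder sketch above, so `model`, `images`, and `labels` are the same assumed stand-ins, not the study’s data.

```python
# Sketch of the degradation test: pixelate images, then re-evaluate the trained
# classifier. In the study, accuracy stayed high even at resolutions where a
# human can no longer tell what the image shows.
import torch
import torch.nn.functional as F

def pixelate(batch: torch.Tensor, target: int) -> torch.Tensor:
    """Downsample to target x target, then upsample back to the original size."""
    small = F.interpolate(batch, size=(target, target), mode="area")
    return F.interpolate(small, size=batch.shape[-2:], mode="nearest")

model.eval()
with torch.no_grad():
    for resolution in (224, 64, 16, 8):  # progressively harsher pixelation
        preds = model(pixelate(images, resolution)).argmax(dim=1)
        acc = (preds == labels).float().mean().item()
        print(f"{resolution:>3}x{resolution:<3} accuracy: {acc:.2f}")
```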
With the FDA approving new healthcare algorithms on a regular basis, the researchers became concerned that this would exacerbate the racial disparities already seen in the healthcare system. The bigger concern, though, is that they were unable to determine how the algorithm correctly predicted a person’s race when it had not been trained to do so, and when a human could not do so without further information.
This is the risk of artificial intelligence in any field: it’s unpredictable. We already know that bias exists in AI because humans are biased and the information used to train it was created by us. But think about the implications beyond this one study. What risk are you taking with AI when you use it for your business? Is it learning information it wasn’t trained to learn? What is it doing with that information, and how is it impacting your business analysis?
Then there’s the million-dollar question: how do you know whether your artificial intelligence is learning things it wasn’t trained to learn? Your business may not be in healthcare, but every business has an HR department. Every business has employees and customers. What if your AI algorithm suddenly started attaching race to those people, even though it wasn’t told to perform that task? What if it learned other sensitive information about your staff and clients that it isn’t supposed to learn? Remember, the FTC has already said that the AI a business uses is the business’s responsibility, not the algorithm writer’s.
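There is at least one practical way to check, sketched below with synthetic placeholders: a “probing” classifier. Extract your model’s internal embeddings, then test whether a simple linear model can predict the sensitive attribute from them. If it can, that information is encoded in your model whether you asked for it or not.

```python
# Probing sketch with synthetic placeholders: can a linear model recover a
# sensitive attribute from another model's internal embeddings?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholders: embeddings from your deployed model (e.g. its penultimate
# layer) and the sensitive attribute you are auditing for.
embeddings = rng.normal(size=(2000, 128))
attribute = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, attribute, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ~0.5 here means the attribute is not linearly recoverable from random data;
# on a real model, accuracy well above chance is a red flag worth investigating.
print("probe accuracy:", probe.score(X_test, y_test))
```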
Let’s just be honest with ourselves: if we can’t determine how something we created is doing something we didn’t tell it to do, then how can we fix it? That’s a problem for research teams around the globe, but in the meantime, be incredibly careful with any AI your business uses. It’s not just about racial bias; this instance is just one example. If we can’t determine how that AI is getting race correct without being told to, then how can we be sure the AI used in any business isn’t doing the same? It doesn’t matter what the algorithm is doing: if it’s doing something it wasn’t told to do, it creates risk.
In 2022, many businesses will adopt artificial intelligence in some form. Don’t be the business that goes under because of it! Hire an expert: bring in a third party who knows AI inside and out and can tell you how useful an algorithm might be for your company. Make sure it’s the right decision for your business and that the algorithm you are looking for not only exists but actually does what it’s supposed to do. You can have all the trust and faith in your own employees to do their due diligence on AI, but if that’s not their main job function, it’s always smart to consult someone outside your firm.