Microsoft partners with several non-profits to enhance accessibility functions in technology. They hope to release updates by late 2021, and plan to share their data.
It’s no secret that technology has made serious advancements over the last several years, and the process is ongoing, with new innovations and changes happening every day. With the advancement of technology came the need for accessibility functions for the differently abled. People with vision impairments can have their device dictate to them. People who struggle to type can use talk-to-text functions. Most devices today also offer screen magnification and other adaptable features. Still, there’s always room for improvement, and Microsoft is leading the way.
Currently, the way accessibility functions are designed is imperfect. Of course, nothing is perfect, but the data used to train the AI behind these features was not collected from the people who actually use them. Instead, that data was drawn from what we previously considered to be normal. The problem? Our definition of normal needs to be re-evaluated. Enter Microsoft’s collaboration with Team Gleason, a non-profit named for former football player Steve Gleason, who was diagnosed with amyotrophic lateral sclerosis (ALS).
One of the primary goals of this collaboration is to improve facial recognition capabilities. Most facial recognition doesn’t account for people using a headstrap, a ventilator, oxygen, or other assistive devices that might obstruct part of someone’s face.
“Computer vision and machine learning don’t represent the use cases and looks of people with ALS and other conditions,” said Team Gleason’s Blair Casey. “Everybody’s situation is different and the way they use technology is different. People find the most creative ways to be efficient and comfortable.”
The two organizations are partnering on Project Insight, which will collect data and facial imagery from volunteer users who have ALS. The goal is to integrate the data with Microsoft’s existing services, but also to release it for other companies to use with their own algorithms. The potential release date is late 2021, largely because the team has to build the dataset from scratch.
“Research leads to insights, insights lead to models that engineers bring into products. But we have to have data to make it accurate enough to be in a product in the first place,” said Mary Bellard of Microsoft’s AI for Accessibility effort. “The data will be shared — for sure this is not about making any one product better, it’s about accelerating research around these complex opportunities. And that’s work we don’t want to do alone.”
Facial recognition isn’t the only aspect of accessibility being addressed, though. Microsoft is also collaborating with other non-profits to improve features like automatic image descriptions. Currently, those descriptions are generated from the point of view of someone who is standing, yet plenty of people use wheelchairs or other assistive devices that place their viewpoint lower than a standing person’s. Other accessibility features are being examined as well.
While it may seem incredible that this wasn’t considered at the outset of accessibility development, it’s definitely a huge step forward. Getting it right for the people who need it most is really important. We are slowly seeing the lack of diversity in computing being addressed in language processing, localization, accessibility, and machine model training. The fact that Microsoft is going to openly share this data with others is outstanding. It should be a pattern for others to follow, as having access to the internet and computing is the equivalent of having books and pencils 100 years ago.
If you have a loved one who has a disability, work with someone who has a disability, or live with a disability yourself, keep your eyes open. Late in 2021, you should see some incredible changes to the way accessibility in technology works. Hopefully, Microsoft’s work will yield the benefits it is aiming to achieve!