The Downsides of AI and Accessibility

Published August 28, 2024

Artificial intelligence (AI) is becoming increasingly widespread, and applications like ChatGPT are reshaping entire industries. Customer service, marketing, and even creative fields now use AI for everyday tasks, and there has been a lot of discussion about how AI can benefit people with disabilities.

But nothing is perfect, and that includes AI. Let’s look beneath the surface and explore how AI can impede accessibility and what can be done to mitigate its downsides. 

Lack of human oversight

Artificial intelligence is just that — artificial. It does not have the human ability to differentiate things that are and aren’t accessible, which can sometimes cause issues. And depending on how AI is being used in a given case, some of these issues could be major. 

If you are using an AI-based tool to check Web Content Accessibility Guidelines (WCAG) compliance, for example, it could report a pass when an aspect of your website actually fails. AI may also fail to recognize WCAG exceptions, such as Success Criterion 1.4.3 (Contrast Minimum), which exempts logos from contrast requirements. A logo may be immediately recognizable to a human, but AI might not be able to distinguish it from other images.
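To see why the logo exemption trips up automated checkers, here is a minimal sketch of the contrast-ratio math WCAG 2.x defines, applied naively to every element. The color values are illustrative; the point is that a tool applying the 4.5:1 threshold mechanically will flag a low-contrast logo that Success Criterion 1.4.3 actually exempts.

```python
# Minimal sketch of the WCAG 2.x contrast-ratio calculation that
# automated checkers rely on. Colors here are illustrative examples.

def relative_luminance(rgb):
    """Relative luminance per the WCAG 2.x definition, from 0-255 sRGB channels."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color over darker."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# A light-gray logo on a white background fails the naive 4.5:1 check...
ratio = contrast_ratio((170, 170, 170), (255, 255, 255))
print(round(ratio, 2))  # well below 4.5
# ...but a human reviewer would know SC 1.4.3 exempts logotypes entirely.
```

A checker that only runs this math on every text element will produce false failures on logos; only a reviewer (or a model that can reliably identify logos, which current tools cannot) can apply the exception correctly.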

Another example is how AI handles alternative text. Checking whether alt text exists is easy enough, but determining whether its content is actually useful is not. If an image of a football carries alt text describing it as a basketball, AI may not catch the mistake. One blog put AI-generated alt text to the test, and the results weren't promising.

This can pose a serious problem if you rely too heavily on AI and don’t double-check with human eyes.

Accuracy

Though there have been stories of remarkably intelligent conversations with chatbots, AI still cannot recognize content as accurately as a human. A model's output depends on the data it was trained on, and even the most effective tools can't deliver perfect accuracy when it comes to things like captions and translations. AI-generated captions routinely contain misheard words; they are a good starting point, but using them without a human accuracy check isn't acceptable for accessibility. YouTube's auto-captions, for instance, are often incorrect or incomplete on their own and require manual correction to be fully accessible. It is a well-known problem that doesn't seem to have a solution yet.

Ethics

AI is trained on human-generated content, and that is a source of great controversy, even leading to lawsuits alleging that AI-generated art steals from artists and that AI companies misuse user data. But those aren't the only ethical questions surrounding the use of AI.

Unconscious biases toward people with disabilities are often perpetuated through AI. Since AI is built by humans, and humans have biases, those biases get built into the software, and they can impede accessibility if you don't use AI with awareness and intention. Amazon's experimental recruiting tool, for example, was trained on data from a male-dominated workforce and learned to penalize resumes that mentioned the word "women's." The same thing happens with accessibility: the data used to train AI comes from an ableist society, so those biases creep into the software.

There is also growing concern that AI could replace workers, and it is already automating many jobs in fields that employ people with disabilities. Some 15.3% of surveyed employed people with disabilities work in production, transportation, and material moving jobs, which puts a lot of people at risk of losing work to AI and automation. Call centers are also a popular industry among workers with disabilities, largely because of the recent trend toward remote work: post-pandemic, 76% of call centers have at least 80% of their staff working remotely. Yet this industry, too, is at risk of being replaced by AI and automation, thanks to lower costs and the lifelike responses AI can give.

Replacing jobs is just one facet of AI that makes it questionable as a technology. As AI develops and grows, ethics cannot be ignored. 

Mitigation

If you want to use AI as a step in your accessibility plan, you should work to mitigate these downsides. Counter AI's inherent bias by never letting it work unsupervised: double-check its output so that biases can be caught and corrected. When sourcing data to train AI, seek inclusive data to minimize harmful bias as much as possible.

When using data, ensure that it is properly sourced and not stolen. Any sensitive data, such as names, birthdays, or Social Security numbers, should be redacted before being fed to AI to avoid the possibility of a data leak, and any leaks that do occur should be handled immediately.

For employment, avoid replacing workers with AI entirely. AI has its issues, and people can still do superior work. It is more ethical to use AI as a tool to help support human workers rather than cutting jobs for the allure of technology.

AI can be a powerful tool for accessibility, but can just as easily harm it if mitigating steps aren’t taken. 

Conclusion

Many people are incorporating AI into their workflows, including accessibility-related work and other tasks that may impact accessibility. It can be a useful tool, but it does not come without its downsides. Accuracy, ethics, and a lack of human oversight can all hinder accessibility. However, you can mitigate these downsides and still use AI as an effective tool towards the goal of accessibility.
