AI and Accessibility: Building an Inclusive Future
Imagine a world where artificial intelligence is designed with inclusivity at its core. Approximately 16% of the global population, at least 1.3 billion people, experiences a significant disability. Yet the latest WebAIM Million report, which uses automated testing tools to analyze the homepages of the one million most-visited websites, found that nearly 96% of them had detectable errors that violate the Web Content Accessibility Guidelines (WCAG).
Creating accessible digital products is a shared responsibility that spans leadership, product teams, designers, developers, content creators, and beyond. As we integrate artificial intelligence (AI) into our products and experiences, it’s vital to ensure that this integration is mindful of disability and accessibility.
Key Terms to Know
Before diving into how AI and accessibility intersect, here are a few key terms to keep in mind:
- WCAG (Web Content Accessibility Guidelines): A set of guidelines developed by the W3C to ensure digital products are accessible to people with disabilities. Most industries aim for Level AA compliance.
- Inclusive Design: A design approach that considers the needs of all users, including those with disabilities, to create products that are usable by everyone.
- AI Training Data: The datasets used to train AI models. Inclusive and diverse datasets are essential to ensure AI systems do not perpetuate biases.
- Assistive Technology: Tools and devices used by people with disabilities to interact with digital products, such as screen readers, voice recognition software, and adaptive keyboards.
- Shifting Left: A practice of addressing accessibility early in the design and development process to avoid costly fixes later and ensure better outcomes.
Including Disability and Accessibility in AI Training Data
As the saying goes, “garbage in, garbage out.” The quality of AI is only as good as the data it’s trained on. To create inclusive AI systems, it’s critical to use datasets that are diverse, representative, and inclusive of people with disabilities.
If AI systems are trained on datasets that exclude or misrepresent people with disabilities, we risk perpetuating biases that alienate and exclude. Accessible data should:
- Include accurate and representative depictions of individuals with disabilities. For instance, image generators often struggle to correctly represent disabled people or those using assistive technologies, leading to misrepresentation or exclusion.
- Feature data generated by a diverse group of users, including people with disabilities. For example, voice recognition systems must include voice samples from people with non-standard speech patterns, such as those with dysarthria, stuttering, or slurred speech.
- Train models on accessible code. Many AI systems, such as ChatGPT, are trained on publicly available code. Unfortunately, much of this code fails to meet accessibility standards, meaning AI often lacks the knowledge to provide valid recommendations for accessible components or websites.
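The last point can be made concrete with even a trivial static check. The sketch below is a minimal illustration, not a substitute for full-featured tools like Axe: it flags two common WCAG failures that are pervasive in the public code AI models learn from, images without alternative text and links that are not keyboard focusable. The tag names and class name here are our own; real auditing tools cover hundreds of rules.

```python
from html.parser import HTMLParser

class A11yLint(HTMLParser):
    """Toy linter for two common WCAG failures in an HTML snippet:
    images with no alt text, and anchors with no href (which are
    skipped by keyboard navigation). A sketch only, not a real audit."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "a" and not attrs.get("href"):
            self.issues.append("anchor without href (not keyboard focusable)")

snippet = '<div><img src="chart.png"><a onclick="go()">Next</a></div>'
lint = A11yLint()
lint.feed(snippet)
print(lint.issues)
```

Both elements in the snippet are flagged. If checks like these were applied to code before it entered a training corpus, models would be less likely to reproduce these patterns in their suggestions.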
How Can AI Datasets Be Improved?
There are promising projects working to address the lack of disability and accessibility data in AI. Here are two notable examples:
1. Speech Recognition Improvements
Google’s Project Euphonia and the Speech Accessibility Project at the University of Illinois are initiatives focused on improving voice recognition technology for people with diverse speech patterns and disabilities. Individuals with speech or voice disabilities can contribute voice samples to help train AI systems to better recognize non-standard speech patterns, such as stuttering or slurred speech.

As voice control becomes more common in technology, used in devices like TV remotes, smartphones, and computers, it’s increasingly important to ensure that these systems accommodate individuals with speech disabilities and non-native speech patterns.
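Projects like these are typically evaluated with word error rate (WER), and the key practice is disaggregation: computing WER separately for speakers with non-standard speech rather than only reporting a single average that can hide large gaps. The sketch below computes standard WER via word-level edit distance; the example phrases are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed as Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of four: WER = 0.25
print(word_error_rate("turn the lights on", "turn lights on"))
```

Reporting this metric per speaker group, for example for speakers with dysarthria versus typical speech, makes it visible when a recognizer serves one population far worse than another.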
2. Image Recognition and AI Assistance for Blindness and Low Vision
Applications like Be My Eyes use AI and live volunteers to assist blind and low-vision users through live video connections. The AI in such tools is trained on diverse image datasets and continues to improve through user input. Other notable projects include Microsoft’s Seeing AI and Google’s Lookout, which use AI to describe visual environments to users who are blind or have low vision.
What Can We Do?
When designing products, accessibility must be considered early in the process. This is often referred to as “shifting left”: addressing accessibility during the initial stages of design and development to avoid costly fixes later.

While we may not all be in the position of developing Large Language Models (LLMs), we can ensure the tools we use are trained on diverse, representative, and accessible datasets. It’s also important to enforce high standards for data integrity and accessibility within our own products.

Incorrect or incomplete AI-generated responses can lead to:
- Reputational damage.
- Additional effort for users who need to verify the accuracy of responses.
- Exclusion of certain user groups.
To ensure AI-enhanced products are accessible, ask the following questions:
- Does the data include valid, ethical, and reliable insights about people with disabilities?
- Is the data inclusive of accurate accessibility insights?
- Are people with disabilities actively involved in our research and usability testing?
- Are we using accessibility automation tools to enhance our product’s code (e.g., Axe DevTools, Intelligent Guided Testing, etc.)?
- Are we ensuring that everyone can access our product and that no group is excluded from using or purchasing it?
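Some of the checks that automation tools perform come straight from WCAG’s published formulas, so teams can also run them directly. The sketch below implements the WCAG 2.1 contrast-ratio calculation from relative luminance; Level AA requires at least 4.5:1 for normal-size text and 3:1 for large text.

```python
def _linearize(c: int) -> float:
    """Linearize an 8-bit sRGB channel per the WCAG 2.1 relative-luminance definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # black on white: 21.0
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # #777 gray on white
```

Notably, #777 gray on a white background comes out just under 4.5:1, so it narrowly fails AA for body text, exactly the kind of near-miss that is easy to ship without an automated check.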
Looking Ahead
Accessibility is everyone’s responsibility, and as AI continues to shape the future of technology, we must ensure that the tools we build and use are inclusive, ethical, and representative of all users.