As generative AI technologies rapidly evolve and become integrated into everyday digital experiences, their impact on accessibility demands critical attention, yet it is often overlooked. In this in-depth Q&A, Léonie Watson, Co-Founder of TetraLogical and renowned accessibility advocate, delves into the practical ways AI is transforming the digital landscape for more than a billion people worldwide living with disabilities.
More than 1.3 billion people live with some form of disability, according to the World Health Organisation. In the UK, that figure is estimated at 16 million, or around a quarter of the population. Despite advances in inclusive design, many digital services remain difficult to access, especially for people who rely on screen readers, need simplified content, or process information differently. AI isn’t a simple silver bullet, but it is helping to bridge these gaps. And, as with any tool, the way it is applied will determine whether it delivers real progress or simply reinforces existing barriers, as Léonie Watson, Co-Founder at TetraLogical explains.
AI-powered tools are not only simplifying access to information by condensing complex documents, rephrasing content into plain language, and enabling interactive clarification but also providing groundbreaking support for blind and low-vision users. From interpreting notoriously inaccessible PDFs to generating on-demand image descriptions and enabling real-time visual awareness through smart glasses and mobile apps, AI is bridging long-standing gaps in digital accessibility.
Watson also points to the need for inclusive development processes that actively involve disabled communities. Without their input, AI tools risk perpetuating exclusion rather than eliminating it. She stresses that true inclusion requires planning, genuine listening, and the challenging of assumptions, reminding us that technology can open doors, but only if everyone is invited inside.
How is generative AI reshaping accessibility?
Watson: The most promising thing AI is doing for accessibility right now is giving disabled people opportunities. Tasks that were difficult or impossible before are now within reach. AI, in one form or another, has been doing this for decades. Applications capable of recognising speech or text have been in use since the 90s for example. With the more recent arrival of generative AI, the opportunities have grown considerably. For example, instead of getting a literal description of an image, a blind person can now ask for specific details like "What colour shirt is the person on the left wearing?"; or a neurodivergent person can use AI to summarise large amounts of data they might otherwise struggle to process, then ask for clarifications or further details.
The ability to converse (by text or voice) with AI can't be underestimated, because it mimics the way a disabled person might ask another person for assistance. It begins with a question like "Can you tell me what the cooking instructions are?", then morphs into a conversation that clarifies that the given oven temperature is for a fan-assisted oven, and reveals that the product should not be frozen again once defrosted.
AI is starting to address real barriers faced by over a billion people worldwide who live with disabilities. Despite progress in inclusive design, many digital services remain inaccessible, whether for screen reader users, people needing simplified content, or those processing information differently. AI isn't a silver bullet, but if used thoughtfully, it can help bridge gaps rather than reinforce them.
Many neurodivergent individuals struggle with lengthy documents, reports, academic writing, and administrative text due to factors like attention, language, and executive function. Generative AI can transform these into tailored summaries, rephrase complex wording into plain language, and even interactively clarify unclear areas. This enables users to control the flow of information the way they need it—crucial in today's overstimulating environment.
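The summarise-then-clarify workflow described above maps onto a simple prompting pattern. The sketch below is a minimal illustration, not any particular product's implementation: the helper name and parameters are hypothetical, and no specific model or API is assumed, since the same composed prompt could be sent to any conversational LLM.

```python
def build_plain_language_prompt(text: str,
                                reading_age: int = 12,
                                bullet_summary: bool = True) -> str:
    """Compose a request asking a chat model to rewrite `text` in plain language.

    Illustrative sketch only: parameter names and instruction wording are
    assumptions, not taken from any real accessibility tool.
    """
    instructions = [
        f"Rewrite the following text in plain language for a reading age of about {reading_age}.",
        "Use short sentences and everyday words.",
        "Keep every factual detail; do not add information that is not in the text.",
    ]
    if bullet_summary:
        # A short summary up front lets the reader control how deep to go.
        instructions.append("Start with a three-bullet summary of the key points.")
    # The separator keeps the instructions visually distinct from the source text.
    return "\n".join(instructions) + "\n\n---\n" + text

# Example: preparing a dense administrative sentence for simplification.
prompt = build_plain_language_prompt(
    "The tenancy agreement may be terminated by either party giving "
    "two months' written notice."
)
print(prompt)
```

Follow-up questions ("What does 'written notice' mean here?") can then be sent as further turns in the same conversation, which is what gives the user control over the flow of information.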
You’ve described AI as a helpful assistant, not a final authority. What practical steps can users take to use AI tools critically and safely?
Watson: Until the quality of what AI generates improves to the point where hallucinations are rare and the accuracy of responses can be relied upon, it's vital that people understand that a judgement call is needed, and that they have the skills and knowledge to make that call.
Someone using AI to get descriptions of old family photos might not be too worried about a hallucinatory response, but the same person should feel very differently when asking AI about the recommended dosage for their medication. Most people already possess some of these skills, primarily from deciding whether a resource found online can be trusted. With AI, though, the default assumption should be that the response can't be trusted and that verification is necessary, especially for decisions that affect people's safety or well-being.
For blind or low-vision users, PDFs and image-heavy content can be especially inaccessible. How can AI help?
Watson: Indeed, PDFs remain a pervasive challenge: they are often scanned images or complex layouts that screen readers can't parse. AI can now interpret those formats, summarising the content and presenting it accessibly. Moreover, AI enables ad hoc description of images, everything from menus to infographics and social media visuals, areas often neglected in accessibility design.
AI now enables real-time image and scene descriptions. What do you think needs to happen for these tools to become mainstream?
Watson: Arguably, they're already mainstream. Although image and scene descriptions have historically been confined to apps and tools aimed at blind and low vision people, that's no longer the case. ChatGPT can describe what the camera is pointing at in real-time, as can iOS 18 (on an iPhone capable of running Apple Intelligence), and the Ray-Ban Meta and Oakley Meta glasses make the same functionality available in wearable form, to name a few.
In some cases, the use of real-time visual descriptions is still aimed at specific audiences. For example, one tool uses real-time descriptions to convert facial expressions and other non-verbal cues into haptic feedback, assisting neurodivergent people who struggle to recognise social cues like smiling or frowning. Given that features that make life more accessible for people with disabilities almost always make life more usable for everyone, it isn't much of a stretch to imagine ways real-time descriptions could be useful to more people, and that will almost certainly translate into features available to everyone.
Are there advances in real-time accessibility aided by AI?
Watson: AI-powered smart glasses or mobile apps are emerging that analyse live video feeds to describe surroundings in real-time. This can mean reading signs, identifying objects, or understanding context in social spaces. We’re still addressing constraints such as battery life, privacy, and consistency, but as these improve, the potential for independence becomes transformative.
What’s the most urgent action designers and policymakers should take to address this?
Watson: Quite simply, the designers of AI tools need to make sure their web apps and mobile apps are accessible. This is accessibility 101, yet most popular AI platforms could do better. Given the billions being invested in AI development, there is really no excuse for not allocating the necessary resources to accessibility.
Policymakers and regulators need to understand a fundamental truth about why so many disabled people are turning to AI: even when the quality is poor, it is frequently better than the alternatives. People with disabilities are aware that AI can produce inaccurate results, but in the absence of better options, such as image descriptions on websites or someone to ask, AI remains a valuable and appealing option.
Regulators also need to prevent disabled people from having to choose between the benefits of AI and the knowledge that these large language models (LLMs) are trained on stolen data and consume excessive amounts of natural resources. Choosing between personal convenience and protecting the planet is one thing when it comes to recycling plastics, but it's quite another when choosing to save the planet means giving up all the ways AI can enable people with disabilities to act more independently in the world.
As individuals, there's little we can do to prevent disabled people being put in this position, so regulators need to hold AI companies to account on their behalf. This can only be done at a national or international level, and if regulators don't step up to solve these problems, disabled people will continue to bear the brunt.
AI should never replace inclusive design, nor should its outputs be taken at face value. Errors are real; AI-generated image descriptions might confidently hallucinate wrong details. Summaries may omit facts or invent content. Users must therefore approach AI critically, treating it as a helpful assistant but not the final authority. For designers, the message is clear: AI must support, not sidestep, core accessibility principles.
How can we ensure that AI development truly includes the voices of users with disabilities?
Watson: That inclusion must be intentional. Recent UK research shows over a third of disabled individuals fear being left behind as AI gains traction in healthcare and public services. If they aren’t part of the design journey, tools will fail to meet their needs. Put simply, inclusion doesn't happen by accident. It takes planning, listening, and a willingness to challenge assumptions. AI can help open doors, but it’s up to us to make sure everyone is invited in.
Léonie Watson is the Director at TetraLogical and Chair of the W3C Board of Directors.