
The Evolution of Alt-Text in the Age of AI

The Easypress Team

February 4, 2026

7 min read

Exploring the past, present and future of alt-text and how it continues to evolve in the era of AI.

The Brief History of Alt-Text

The alt attribute has been part of HTML since the language's beginnings in 1993, but it wasn't made a required part of image tags until HTML 4.01 in 1999.

Initially, alt-text served as a simple description for images when internet speeds were slow, and images took a long time to load. This allowed users to understand the content without waiting for all visual elements to appear. Over time, its importance grew as a tool for web accessibility.
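For context, the attribute itself is straightforward: it sits directly on the img element, and its value is what assistive technology reads aloud (or what browsers once displayed while images loaded). A minimal illustrative example, with a made-up filename:

```html
<!-- The alt attribute supplies a text alternative for the image. -->
<img src="harbour-sunset.jpg"
     alt="A fishing boat silhouetted against an orange sunset over the harbour">

<!-- Purely decorative images should carry an empty alt attribute
     so screen readers skip them rather than announcing the filename. -->
<img src="divider-flourish.png" alt="">
```

The empty-alt convention for decorative images is part of what makes writing good alt-text a judgement call rather than a fill-in-the-blank exercise.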

Alt-text wasn't widely treated as a necessary component of the web until 2006, when the National Federation of the Blind (NFB) filed a lawsuit testing whether the protections of the Americans with Disabilities Act (ADA) of 1990 extended to websites.

As the internet continued to evolve, so did the functionality and significance of alt-text. With advancing standards, such as the European Accessibility Act that came into force in 2025, alt-text has become a critical part of production workflows, rather than just an afterthought.  

However, the increased resource and budget required to incorporate alt-text has led publishers to seek convenient, inexpensive ways of producing descriptions for their content.

Introducing... the boom of AI-generated alt-text and extended descriptions!  

The Surface Appeal of AI-Generated Alt-Text

Historically, manually writing image descriptions has been labour-intensive, inconsistent, and largely dependent on the skill of the individual writing the alt-text.

AI-generated descriptions appear to offer a simple solution. With minimal effort, platforms can now ensure that at least some description accompanies an image, especially when an author provides none.

But the core issue is not whether AI can describe images (which it can, even if with limited success) but whether the type of description it provides is meaningful, trustworthy, and responsive to the actual needs of readers. And in many cases, the answer is ‘no’.

The Problem of Scale with AI

One of the driving forces behind the pressure to use AI is scale. Modern publishers are no longer being asked to provide alt-text only for new, image-heavy titles, but across vast backlists and multiple formats, under ever-stricter accessibility standards.

We’re no longer talking about dozens or even hundreds of images, but thousands. At this volume, manually writing and reviewing high-quality alt-text is not just a question of care but of time, cost and logistics. AI looks attractive in this context because it promises to create almost instant, universal coverage: a description attached to every image, everywhere. Of course, the danger is that coverage is mistaken for true accessibility.  

AI’s Miscommunication Habit

Although AI-produced alt-text is sometimes adequate for describing a single image, an underlying issue often arises: the automated description is simply wrong.

Generative models can be prone to “hallucination,” confidently asserting details that aren’t present in the image at all.  

Low-quality photos and minor distortions can lead the model to invent visual features, describe fictional people, or misinterpret settings. For readers who cannot visually check the image, this isn't a minor error: it's a breakdown in trust, and it is extremely misleading.

What AI Still Doesn’t Understand About Alt-Text

Although AI has become more common and widely used in recent years, it still has a long way to go in the eyes of publishers.

The fundamental flaw with AI is that it is currently unable to understand that image descriptions are not just about what is visible. Software fails to account for the fact that readers need to understand an image in relation to the surrounding content and narrative.

The best human alt-text writers know how to make decisions: when to describe, when to summarise, and when to omit from descriptions. Most of all, they know how to prioritise the reader and their needs.

Does AI Really Produce Inclusive Descriptions?

In terms of inclusion, there is a common misconception: that content which 'appears' to be accessible and content which 'is' accessible are the same thing. Algorithmic approaches such as AI often create only the appearance of accessibility, without the investment in production workflows or human intervention that real inclusion requires.

Embracing the Evolution of AI-Generated Alt-Text

AI-based tools for descriptions should be supportive, not substitutive. They should be used alongside human expertise, working collaboratively within the production process.

AI should be used to prompt suggestions, help designers and editors think through possibilities, or highlight images missing descriptions.  

‘Fast and cheap’ are not the same as ‘accessible and inclusive’. Publishers must resist the convenience of using AI unthinkingly, ensuring they are producing the best image descriptions possible and not just ticking boxes.

If the alt-text of an image is AI-generated, it should say so. Readers should have the ability to request more information, flag errors, or contribute their own insights on the effectiveness of the descriptions provided.
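There is currently no standard HTML mechanism for declaring that alt-text is machine-generated, but one hypothetical way a platform could record this provenance is with custom data-* attributes, which its own tooling or feedback widgets could then surface to readers. The attribute names below are illustrative, not an established convention:

```html
<!-- Hypothetical provenance markup: data-alt-source and
     data-alt-review-status are invented attribute names that a
     platform's own scripts would need to read and expose. -->
<img src="figure-3.png"
     alt="A bar chart comparing print and digital sales by region"
     data-alt-source="ai-generated"
     data-alt-review-status="unreviewed">
```

Because data-* attributes are invisible to assistive technology by default, any such scheme only delivers transparency if the publishing platform actively presents that information and offers a channel for flagging errors.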

In Summary

From the introduction of the alt-text attribute in HTML to today’s focus on inclusive image descriptions, the evolution of alt-text has been extraordinary. Although a future where AI-generated alt-text replaces human writers currently seems unlikely, the hope is that a productive partnership will emerge to maximise the quality of alt-text publishers provide.  
