With its newest update, Microsoft Edge seeks to solve one of the most easily remediated, yet most often overlooked, accessibility issues on the web: images missing alternative text (alt text).
Microsoft internal research indicates that roughly half the images on the internet lack alternative text. Alt text is a written description of what a digital image contains; it can be processed and read aloud by screen readers, the assistive technology many people with vision impairments use to access the internet.
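For context, alt text lives in the `alt` attribute of an image tag. A minimal example (file name and description invented for illustration): a screen reader announces the first image's description, while the second gives it nothing to work with.

```html
<!-- Accessible: a screen reader announces the description -->
<img src="red-sneaker.jpg" alt="Red canvas sneaker with white laces">

<!-- Inaccessible: no alt attribute, so a screen reader falls back
     to something like "unlabeled graphic" or the file name -->
<img src="red-sneaker.jpg">
```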
Images that lack alt text are invisible to screen readers and thus to the visually impaired people who use them. The gap exists not out of malice but because many digital content creators are simply unaware that alt text is necessary to make their work accessible to all users. Enter Microsoft Edge with an update that aims to close that gap: the new Edge tech will auto-generate alt text for the roughly 50% of images on the internet that lack it.
For those unaware or who swear by Chrome or Safari, Microsoft Edge is the default browser for all Windows 10 devices. It’s intended to be highly compatible with the modern internet.
Without this technology, when a screen reader encounters an image in Edge that lacks alt text, it reads out “unlabeled graphic.” That leaves a visually impaired user with no information about the image, limiting their ability to interact with the web page and possibly barring them from the page’s intended functions.
But the new Edge tech can give visually impaired users information about images they encounter to inform purchasing and other typical internet activities.
How does it work?
When Edge encounters unlabeled images, it sends them to its Azure Cognitive Services Computer Vision API for processing. As Travis Leithead, a program manager on Microsoft’s Edge platform team, explains:
“When a screen reader finds an image without a label, that image can be automatically processed by machine learning algorithms to describe the image in words and capture any text it contains. The algorithms are not perfect, and the quality of the descriptions will vary, but for users of screen readers, having some description for an image is often better than no context at all.”
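Microsoft hasn’t published the exact pipeline Edge uses internally, but the publicly documented Computer Vision “describe” operation gives a feel for what such a call looks like. The sketch below only assembles the request; the endpoint and key are placeholders you would get from your own Azure resource, and the helper function is illustrative, not Edge’s code.

```python
import json
from urllib.parse import urlencode

# Placeholder values for illustration; real values come from an
# Azure Cognitive Services resource you provision yourself.
ENDPOINT = "https://example-resource.cognitiveservices.azure.com"
API_KEY = "<your-subscription-key>"

def build_describe_request(image_url, language="en", max_candidates=1):
    """Assemble the URL, headers, and JSON body for a call to the
    Computer Vision "describe" operation, which returns caption
    candidates for an image. Nothing is sent over the network here."""
    params = urlencode({"maxCandidates": max_candidates, "language": language})
    return {
        "url": f"{ENDPOINT}/vision/v3.2/describe?{params}",
        "headers": {
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"url": image_url}),
    }

# Request a Spanish-language description for a (hypothetical) product photo.
request = build_describe_request(
    "https://example.com/product-photo.jpg", language="es"
)
print(request["url"])
# Sending it would be a POST of request["body"] with request["headers"],
# e.g. via the requests library.
```

The `language` parameter maps to the handful of languages the article mentions the Vision API supporting for generated descriptions.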
For added accessibility, the Vision API can create alt text in English, Spanish, Japanese, Portuguese, or Simplified Chinese. These descriptions can then be processed by screen readers and read aloud to users according to their language preferences.
Most common web image formats such as JPEG, PNG, GIF, and WEBP are supported.
Limitations and caveats
The Edge tech won’t attempt to process images smaller than 50 × 50 pixels, large image files, images marked as decorative, or images that the Vision API categorizes as pornographic, gory, or sexually suggestive.
While the new tech is rolling out immediately for Windows, macOS, and Linux, it won’t be available just yet on Android or iOS. So, unfortunately, access via smartphones will have to wait.
And the minds behind the Edge tech fully understand that it’s a work in progress, with many improvements still to come. Some have already landed, including one that flags images whose existing alt text labels are unhelpful to screen reader users; in such cases, the tech assigns a more descriptive label.
Setting an accessibility standard?
As the Microsoft team works all the bugs out of the new Edge accessibility tech, auto-generated alt-text could become the new industry standard for internet browsers. Google rolled out similar tech for its web browser Chrome back in 2019.
And auto-generation is nothing new for major social media platforms: Twitter and Meta’s platforms are already refining auto-generated captions for video content and auto-generated alt text for images.
To enable the new feature on a computer that uses Edge as its browser, head to Edge://settings/accessibility and look for “Get image descriptions from Microsoft for screen readers.” When the setting is enabled, a prompt appears with a summary of the feature and a link to additional privacy information. After you agree to continue, Narrator and other popular screen readers can read out Edge’s auto-generated alt text.