Automated Accessibility Testing Is Not a Shortcut

Published August 24, 2020

Your project team has never done accessibility before. A quick Google search reveals an automated testing tool. Great! Time saver! A site-wide test reveals over 900 issues. The situation seems really dire. The team accepts their unwieldy to-do list. They spend lots of time figuring out how to triage the issues and tackle them. It’s now all hands on deck. Everyone scrambles to push these changes live ASAP so they can move on and build new things.

You breathe a sigh of relief. Your website is now accessible. You’re done, or so you think.

A moment later, you receive a note from a frustrated customer with a visual impairment who still can’t access any of the content on your website.

What went wrong?

It’s easy to assume a simple automated accessibility testing tool will catch everything. However, it was never intended to be the sole measure of a site’s accessibility.

Digital accessibility testing is more than automated accessibility testing alone

Digital accessibility testing is intended to measure how well your website, service, or app works for people with varying degrees of ability. Getting an accurate gauge requires a balance of both automated and manual testing methods.

Automated tools were purpose-built to accelerate QA testing only. They work by comparing the code of a webpage against some of the success criteria in WCAG. This leaves several gaps, including errors that can only be detected with manual testing and a lack of error analysis.

These tools tend to output long lists of issues. While they catch every instance of the errors they are programmed to detect, they won’t identify the root cause. For example, if there are many color contrast errors, they won’t tell you that they all originate from one line in the CSS. That requires additional time and effort from your team to investigate.

Therefore, despite the quantity of issues it raises, automated testing will provide neither an exhaustive nor a refined list.
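The article doesn’t assume any particular tool, but as one illustration, here is a minimal sketch using the open-source axe-core library (loaded on the page being tested). It runs a scan and then does the triage step the tool itself won’t do: grouping the raw violation list by rule so that site-wide patterns stand out.

```typescript
// A minimal sketch, assuming the axe-core library has been loaded on the
// page (for example via its script tag). It groups violations by rule id
// so repeated patterns stand out from the long flat list.
declare const axe: { run: (context?: unknown) => Promise<{ violations: Violation[] }> };

interface Violation {
  id: string;                    // rule id, e.g. "color-contrast"
  impact: string | null;         // "minor" | "moderate" | "serious" | "critical"
  nodes: { target: string[] }[]; // selectors of each offending element
}

async function summarizeViolations(): Promise<void> {
  const { violations } = await axe.run(document);

  // Count how many elements fail each rule.
  const byRule = new Map<string, number>();
  for (const v of violations) {
    byRule.set(v.id, (byRule.get(v.id) ?? 0) + v.nodes.length);
  }

  // A rule with hundreds of hits usually points at one shared template or
  // stylesheet, not hundreds of independent problems.
  for (const [rule, count] of byRule) {
    console.log(`${rule}: ${count} elements`);
  }
}

summarizeViolations();
```

Even with that kind of grouping, the output only covers the criteria the tool knows how to test, which is the subject of the next section.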

Why won't it test all the criteria?

Automated tools rely entirely on code, and heavily on code quality. If the code is clean, they may display no issues despite glaring problems. Because of this, the output can be incomplete, and the errors it misses can be pushed to the live site undetected. Here are just a few examples.

Color contrast not met with text over images

A very common practice, for example, is to have text overlaid on top of hero banners on corporate websites.

There are multiple ways to code this, such as:

  1. Overlaying a text box on top of the image.
  2. Adding the image using CSS as a background property of the text box.
  3. Embedding text as part of the image.

All of these scenarios will pass automated tests, yet may fail to meet the minimum color contrast ratio when tested manually, because the effective background color is not revealed in the code.

Test with color contrast tools: To check this in design software, identify the background image color using a color picker. If the image is busy, try blurring the image slightly. This forces the pixel colors to converge on an average. If you have large blocks of color as your background, select the closest tone to the font color. Manually enter this value along with the font color in a tool like the WebAIM color contrast checker to assess the contrast ratio.
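For readers curious what that checker is computing, here is a small sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas. The sample colors at the end are arbitrary illustrations, not values from any real design.

```typescript
// A small sketch of the WCAG 2.x contrast-ratio calculation used by tools
// like the WebAIM checker. The sample colors below are arbitrary examples.

// Convert an 8-bit sRGB channel to its linearized value.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color (0-255 per channel).
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors, from 1:1 up to 21:1.
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [light, dark] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (light + 0.05) / (dark + 0.05);
}

// Example: white text against a light grey tone sampled from a blurred hero image.
const ratio = contrastRatio([255, 255, 255], [150, 150, 150]);
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA for normal text" : "fails AA for normal text");
```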

Many modern automated testing tools already warn testers to use manual tools to check for this.

Screen readers and alternative text

Alternative text quantity

Automated testing will only reveal whether or not images have alt attributes. It will not advise on whether those images should instead be marked as decorative.

Adding alt text to every image is certainly well-intentioned. However, providing too much distracting information risks sacrificing the quality of the experience for people who use screen readers. It can produce a very repetitive and overwhelmingly verbose experience, especially for those who are new to the technology.

For a simple example, imagine a pencil icon with a label. If the alt text and label both say ‘Edit’, the screen reader can sound like this: "Edit image, Edit. Edit button." This message can obscure the fact that it is simply an edit button.
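To make that concrete, here is a minimal sketch of the edit-button example using plain DOM APIs. The pencil.svg path is a made-up placeholder, and both versions would satisfy a simple “every image has alt text” automated check.

```typescript
// A minimal sketch of the pencil-icon button described above.
// The "pencil.svg" path is a placeholder for illustration.

// Redundant: the icon's alt text repeats the visible label, so a screen
// reader announces "Edit" several times for what is a single control.
const noisyButton = document.createElement("button");
noisyButton.innerHTML = `<img src="pencil.svg" alt="Edit"> Edit`;

// Quieter: an empty alt marks the icon as decorative, so only the button's
// accessible name ("Edit") is announced.
const quietButton = document.createElement("button");
quietButton.innerHTML = `<img src="pencil.svg" alt=""> Edit`;

document.body.append(noisyButton, quietButton);
```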

Alternative text quality

Automated tools will also not be able to check whether the alt text serves as an appropriate or accurate description of the image. If there is an image of an apple and the alt text says "orange," they won’t flag it. Automated testing is not a substitute for manual testing with screen readers.

Test with screen readers: Try this with one of the most common screen readers, such as NVDA. Navigate the whole website and assess how the content is announced.

Font is too small or unreadable

There is no official WCAG guideline specifying a minimum font size, so automated testing will not detect this kind of poor design choice. If parts of your website use an extremely small font size, re-evaluate them. If the text is too small to read, it might as well not be there at all.

Test with user research: These types of issues will often be revealed through careful observation in user research with all users, not just people with disabilities. If users with clear vision need to lean in or reach for the zoom tool, rethink the design decision.
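If you want a quick way to surface candidates for that kind of review, a rough browser-console heuristic like the sketch below can flag suspiciously small computed font sizes. The 12px cutoff is an arbitrary assumption for illustration, not a WCAG requirement, and the script is no substitute for watching real users.

```typescript
// A rough heuristic, not a WCAG rule: flag elements whose computed font size
// falls below an arbitrary threshold so a human can review them.
const MIN_PX = 12; // assumed threshold for illustration; WCAG sets no minimum

const tinyText: { tag: string; sizePx: number }[] = [];

document.querySelectorAll<HTMLElement>("body *").forEach((el) => {
  const sizePx = parseFloat(window.getComputedStyle(el).fontSize);
  const hasOwnText = Array.from(el.childNodes).some(
    (n) => n.nodeType === Node.TEXT_NODE && n.textContent?.trim()
  );
  if (hasOwnText && sizePx < MIN_PX) {
    tinyText.push({ tag: el.tagName.toLowerCase(), sizePx });
  }
});

console.table(tinyText);
```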

As you can see in these examples, automated tools should be paired with other testing methods to form an accurate accessibility assessment.

So where does automated testing shine?

Despite their flaws, automated tests are not all bad. Your website could be a complex mess with hundreds of pages of code under the hood. Consider the lengths a QA tester would have to go to in order to regression test all of that by hand. Automated testing can serve as a canary in the coal mine, drawing attention to specific problem areas or identifying site-wide error patterns.
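That canary role is easiest to get value from when the scan runs automatically on every build. As one hedged example, here is a rough sketch assuming the puppeteer and @axe-core/puppeteer npm packages; the URL is a placeholder.

```typescript
// A rough sketch of wiring an automated scan into a regression-test script,
// assuming the puppeteer and @axe-core/puppeteer packages. Failing the build
// on violations lets the tool act as an early-warning canary.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function checkPage(url: string): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxePuppeteer(page).analyze();
  await browser.close();

  return results.violations.length;
}

checkPage("https://example.com/").then((count) => {
  console.log(`${count} violation(s) found`);
  // A non-zero exit code makes a CI job fail, flagging the regression for review.
  process.exitCode = count > 0 ? 1 : 0;
});
```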

What else can my team do?

Providing access to people with diverse abilities is much more than just meeting WCAG and the legal requirements. Next time you are thinking about an accessibility test strategy for your website, treat automated testing as a complement to, rather than a substitute for, user research and manual testing.

Of course, prevention is the best medicine here. Teams can prevent the majority of these issues by adopting an accessibility-first mindset in everything they do.

