Overcoming manual accessibility testing challenges

Published November 22, 2022

To ensure compliance with the Web Content Accessibility Guidelines (WCAG), website developers and business owners must regularly test site functionality and accessible features. Automated accessibility tests are available in many software products, but their scope is limited. Some manual accessibility testing is necessary to fill the gaps that automation misses.
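To make that scope limitation concrete, here is a minimal, hypothetical sketch of the kind of check automation handles well: flagging images that have no alt attribute at all. Automation can detect the missing attribute, but only a human reviewer can judge whether existing alt text is actually meaningful.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags with no alt attribute -- an easily automatable check.
    Whether present alt text is *meaningful* still requires human review."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # Record the image source (or a placeholder) for the report.
            self.missing_alt.append(attrs.get("src", "<no src>"))

def find_images_missing_alt(html: str) -> list[str]:
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing_alt
```

For example, `find_images_missing_alt('<img src="logo.png"><img src="chart.png" alt="Q3 sales chart">')` flags only `logo.png`; the tool cannot tell whether "Q3 sales chart" adequately describes the second image.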

Yet, because manual testing can be intimidating and time-consuming for developers and business owners who are not WCAG experts, manual testing efforts are often neglected in favor of automation.

What should site developers and business owners focus on when testing their websites to ensure they overcome accessibility roadblocks?

Roadblock: WCAG Itself, Part I

WCAG can feel like an unwieldy document. With multiple versions from which to cull compliance guidelines (1.0, 2.0, and 2.1, with version 3.0 actively under development), it can be difficult for digital accessibility newcomers to know where to begin.

Further complicating digital accessibility initiatives is the fact that WCAG defines three levels of conformance (A, AA, and AAA), each with its own increasingly difficult-to-attain criteria.

A company's funding source can dictate the required level of WCAG conformance. At Level AA, an accessibility tester could reasonably expect to check roughly 50 success criteria on any given page. With numbers like that, and measures so specific and nuanced, a manual site tester needs a degree of familiarity and expertise with WCAG that average developers and entrepreneurs simply do not have.


Thanks to recent technological advancements in accessibility testing software, average developers and small business owners do not need to become WCAG experts overnight. New software offers guided instructions for manual testing, including the specific criteria that require a manual (versus automated) test.

Having a specific checklist of manual tests can prevent developers and entrepreneurs from feeling overwhelmed at the sheer volume of WCAG as they begin the site design process. But one feature of WCAG that a checklist won’t help with is its often-ambiguous nature.

Roadblock: WCAG Itself, Part II

In addition to its vast and unwieldy nature, WCAG and its criteria can often be unclear and require interpretation. Many guidelines demand contextualization that only comes with prior experience in digital accessibility compliance. These situations can be so nuanced and situation-specific that different qualified accessibility experts will often disagree about how to interpret a guideline and how to proceed to ensure accessibility.

For an average developer who is only dipping their toe into the accessibility pool and relies on expert opinions to shape their processes, receiving contradictory advice from multiple experts can be frustrating, confusing, and can lead to significant delays in site production.


Unfortunately, there is no cheat code for vanquishing WCAG's often nebulous criteria; the guidelines' language will always need interpretation until the policies are overhauled and made more specific.

Until then, the only real solution to this issue is to get experts on the same page. The best way to do this is to perform multiple tests and document all processes and results. Nothing eliminates discrepancies like clear, detailed, obsessive documentation. This also serves to establish testing processes that are consistent and reliable.

Roadblock: Screen Readers Also Require Expertise

One of the main things manual accessibility testers examine is how well a site communicates with screen readers, the assistive technology that many people with visual impairments use to access information on the internet. If a site's code follows conformance guidelines, a screen reader can convey to a visually impaired user the same information a sighted user sees on the page.

When it comes to screen reader access, automated testing just doesn’t cut it. Because there are various brands of screen readers with varying functionalities, and because many assistive situations are difficult to replicate in an automated setting, manual testing is necessary to determine how readable a site’s content is for all possible assistive devices.

But precisely because so many types of screen readers are available, manual testing of these devices can feel overwhelming and intimidating. To test properly, a developer or business owner must first be familiar with the WCAG success criteria applicable to the desired level of conformance. They would also need to know how to operate various screen reader brands (JAWS, NVDA, VoiceOver, and TalkBack are popular, to name a few). Only then could testing begin: each criterion must be tried on each brand for each possible platform (desktop computers, responsive web, native mobile, etc.).
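To make the combinatorics concrete, here is a hypothetical sketch of how quickly the test matrix grows (the criterion IDs and tool names below are illustrative placeholders, not a definitive list):

```python
from itertools import product

# Illustrative placeholders -- the actual criteria and tools vary by project.
criteria = ["SC 1.1.1", "SC 1.3.1", "SC 2.4.7", "SC 4.1.2"]
screen_readers = ["JAWS", "NVDA", "VoiceOver", "TalkBack"]
platforms = ["desktop", "responsive web", "native mobile"]

# Every criterion must be checked with every reader on every platform.
test_matrix = list(product(criteria, screen_readers, platforms))
print(len(test_matrix))  # 4 criteria x 4 readers x 3 platforms = 48 runs
```

With just four criteria, that is already 48 manual test runs; scale the same arithmetic to a full Level AA audit of roughly 50 criteria and the matrix balloons to around 600 runs per page.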


It may sound overwhelming, but initiatives can be put in place to streamline the process. To tighten up the screen reader testing process, testing teams could develop customized testing methodologies that are standardized based on chosen rulesets.

In other words, teams could develop standardized testing templates that are uniform across brands and platforms. Such a template streamlines the testing process, reducing the time, energy, and resources spent testing individual screen readers against multiple criteria on multiple platforms.
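As one possible shape for such a template, consider this minimal sketch of a single standardized test record; the field names and example values are hypothetical and would be adapted to a team's chosen ruleset:

```python
from dataclasses import dataclass, field

@dataclass
class ScreenReaderTest:
    """One row of a standardized screen reader testing template.
    Field names are hypothetical -- adapt them to your team's ruleset."""
    criterion: str                 # e.g. "SC 4.1.2 Name, Role, Value"
    screen_reader: str             # e.g. "NVDA 2022.3"
    platform: str                  # e.g. "desktop / Firefox"
    steps: list[str] = field(default_factory=list)
    expected: str = ""
    result: str = "not run"        # "pass" / "fail" / "not run"

# Filling in one record: the same steps and expected outcome can be reused
# across every screen reader and platform combination.
test = ScreenReaderTest(
    criterion="SC 1.1.1 Non-text Content",
    screen_reader="VoiceOver",
    platform="native mobile / iOS",
    steps=["Navigate to the hero image", "Confirm alt text is announced"],
    expected="Screen reader announces descriptive alt text",
)
```

Because every record carries the same fields, results from different testers, brands, and platforms become directly comparable, which is exactly the consistency that eliminates expert disagreement.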

As a bonus, these template shortcuts would also reduce the amount of experience and knowledge of specific screen reader brands needed by testers.


Vendor Directory

Accessibility.com now offers an impartial listing of digital accessibility vendors. Search the new Vendor Directory for products and services by category, subcategory, or company name.