
Go the extra accessibility mile: detect commonly undetected issues


In recent years, there has been a proliferation in automated accessibility testing tools, particularly for the Web. These tools greatly help in early detection of certain accessibility issues. However, automated tools are not a bulletproof accessibility solution as they cannot detect many potential issues. Let's look at how to detect these undetected issues and make our systems accessible to many more people.

Automated accessibility testing

Web accessibility testing is commonly done against the W3C Web Content Accessibility Guidelines (WCAG) and their success criteria. Some of the criteria can be translated into binary rules that tools can test automatically – for instance, sufficient color contrast for text, alternative text for images, and labels for form fields. These tools greatly facilitate the detection of specific accessibility issues early on.

However, automated tools can also give the false impression that a web page is fully accessible when they find no issues. In practice, automated tools can only detect up to about 30% of accessibility issues, at least with currently available technology, so human judgment through manual accessibility assessment is always required.

Manual accessibility testing

Most people interact with the Web using a pointing device, such as a mouse, trackball, or touchscreen. Practitioners involved in software development are no exception, so they usually perform manual tests using a pointing device. The rest, a large minority of people both with and without disabilities, interact with the Web using only a keyboard and/or assistive technologies (AT), such as screen readers, braille displays, and switches. It’s therefore equally important to do manual testing with a keyboard and a screen reader.

Keyboard accessibility guidelines are pretty easy to learn and integrate into the testing routine. Screen reader accessibility requires some initial effort to familiarize yourself with the technology and the way people use it to browse the Web. Luckily, there are excellent guidelines including Sara Soueidan’s guide on setting up a screen reader testing environment and the DWP Accessibility Manual’s descriptive templates on what and how to test with different screen readers.

Commonly undetected accessibility issues


Working as a Senior UX and accessibility specialist, I regularly conduct accessibility tests for systems in development and accessibility audits for systems in production. While mobile apps are also tested, this post is focused on browser-based responsive Web systems. The findings include both issues detected and undetected by automated tools. Common issues detected by automated tools are well documented, for instance in WebAIM Million. So instead, I focus on common issues that are undetected by automated tools. The issues are categorized by types that are mostly interrelated. Unsurprisingly, the issues also relate to keyboard and AT.

Skip navigation link

People using only a keyboard, or AT such as a switch or a mouth stick, navigate mostly linearly between focusable elements. Getting to the main content requires them to go through all navigation links repeatedly on every page. That is a tedious effort, especially for people with motor disabilities. A skip navigation link allows people to skip the navigation section and go directly to the main content.

Unfortunately, this link is commonly missing. To test it, start navigating a page with a keyboard. The skip link should appear first on the page. Activating it should set the focus on the main content – very simple and helpful. Apart from the link, effective navigation with keyboard and AT also requires logical focus order.
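A minimal skip link sketch: the link stays visually hidden until it receives keyboard focus, and points at the main content region (the class name and target `id` here are illustrative):

```html
<style>
  /* Keep the link off-screen until it receives keyboard focus */
  .skip-link { position: absolute; left: -9999px; }
  .skip-link:focus { position: static; left: auto; }
</style>

<body>
  <a class="skip-link" href="#main-content">Skip to main content</a>
  <nav><!-- navigation links --></nav>
  <main id="main-content" tabindex="-1"><!-- main content --></main>
</body>
```

The `tabindex="-1"` on the target makes it programmatically focusable, so activating the link reliably moves focus to the main content.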

Focus order management

By default, the focus order of elements is based on their order in the Document Object Model (DOM), which should match the visual order. With static content that includes mainly links, the focus order is straightforward. However, dynamic content includes actions such as loading more content, opening a modal dialog, and submitting a form. Dynamic actions require focus order management, that is, programmatically setting the focus based on a logical sequence after activating an action. This ensures that the user experience with keyboard and AT is sequential and intuitive.

Some common cases that require focus order management:

  • Dynamically generated content. For instance, after activating a ‘Load more’ button, focus should be set on the newly loaded first focusable element.

  • Dynamically removed content. For instance, when a list item is removed, the focus can be set on the next element in the list (or previous element if it is the last one).

  • Modal dialogs. When a modal is opened, focus should be set on the modal and the content behind the modal should not be accessible by keyboard. When a modal is closed, focus commonly returns to the element that opened the modal (depending on the action taken in the modal).

  • Validation errors. When a form has validation errors, a common accessible pattern is to present an errors list and set the focus on the list.

However, with dynamic actions, the focus is often lost, reset to the top of the page, or left untouched. This results in an unpleasant experience, like having to linearly navigate the page all over again. The easiest way to test focus order is to navigate a page with a keyboard, including activating dynamic actions, and see where the focus lands. If you don’t see the focus at all, you may have a focus indicator issue.
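The first of the cases listed above, a ‘Load more’ button, can be sketched like this (the element IDs and the `loadMoreItems` helper are hypothetical):

```html
<button id="load-more" type="button">Load more</button>
<ul id="results"><!-- loaded items --></ul>

<script>
  document.getElementById('load-more').addEventListener('click', async () => {
    // App-specific: appends new items and returns the first new list item
    const firstNewItem = await loadMoreItems();
    // Move focus to the first focusable element inside the new content
    firstNewItem.querySelector('a, button')?.focus();
  });
</script>
```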

Focus indicator

For people using only a keyboard, a focus indicator is the equivalent of a cursor for mouse users. It shows where you are on a web page. Without a clear visual focus indicator, or a cursor, it’s impossible to navigate and use a website. Browsers have their own default implementation of the focus indicator for native focusable elements, such as buttons, links, and form controls. This implementation is WCAG compliant and mostly clear enough.

With native elements, issues are mostly observed when the default focus indicator is overridden or completely removed. With custom UI components, the focus indicator is often not properly implemented. To test, simply navigate through the page using a keyboard. Anything that can be operated with a mouse should also be keyboard accessible and have a sufficiently clear focus indicator. Make sure you also test the focus indicator for dynamic content as described in the previous section.
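A common cause and a possible fix, as a sketch (the outline color is illustrative; check its contrast against your own background):

```html
<style>
  /* Anti-pattern: removing the default indicator leaves keyboard users lost */
  /* button:focus { outline: none; } */

  /* Instead, replace the default with a clearly visible indicator.
     :focus-visible shows it for keyboard focus without flashing it
     on every mouse click. */
  :focus-visible {
    outline: 3px solid #1a73e8;
    outline-offset: 2px;
  }
</style>
```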

Semantic HTML

People using keyboard-only and screen readers rely on semantic HTML for navigating the Web and understanding the structure of content. Semantic HTML elements are accessible by default. Issues are commonly related to missing or incorrect semantics, for instance: 

  • Buttons that are implemented using a generic element, instead of a button, would only work for people using a pointing device. Actually, this is such a common issue that there is even a T-shirt for that.

  • Incorrect or missing heading, list, and table elements. Without the correct element, people using screen readers cannot understand the structure of the content.

  • Missing markup for landmark regions. These regions are required for efficiently navigating a website with screen readers.

Testing semantic HTML is done using manual keyboard navigation to check that focusable elements work as expected, and using a screen reader to check that structural elements are correctly implemented. A review of the DOM against the UI can also reveal most semantic HTML issues.
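The first and third issues above can be illustrated with a short sketch (the `save()` handler is hypothetical):

```html
<!-- Inaccessible: a generic element only works with a pointing device -->
<div class="btn" onclick="save()">Save</div>

<!-- Accessible by default: focusable, operable with Enter/Space,
     and announced as a button by screen readers -->
<button type="button" onclick="save()">Save</button>

<!-- Landmark regions let screen reader users jump directly between page areas -->
<header>…</header>
<nav>…</nav>
<main>…</main>
<footer>…</footer>
```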

Custom components

Some UI patterns don’t yet have a native semantic element – for example, tabs, popover menus, and accordions. Such patterns are implemented as custom components built from several elements. Unfortunately, custom components often only work with a pointing device. Common accessibility issues include, for instance:

  • Keyboard navigation is missing or incorrectly implemented

  • A focus indicator is missing

  • Accessible Rich Internet Applications (ARIA) roles, states, and properties are missing or incorrect. People using screen readers rely on ARIA semantics to understand the purpose and behavior of custom components, and therefore cannot use improperly implemented components.

Testing custom components requires understanding the components’ expected behavior with a keyboard and screen reader. The W3C ARIA Authoring Practices Guide has great guidelines for most common custom components. Smashing Magazine has also compiled an extensive guide to accessible components. Preferably, use these sources to implement custom components; it will save you tons of testing effort.
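As an example, here is a sketch of a disclosure (‘show/hide’) widget in the spirit of the ARIA Authoring Practices disclosure pattern; the IDs are illustrative:

```html
<button type="button" aria-expanded="false" aria-controls="details">
  Show details
</button>
<div id="details" hidden>Additional details…</div>

<script>
  const btn = document.querySelector('[aria-controls="details"]');
  const panel = document.getElementById('details');
  btn.addEventListener('click', () => {
    const expanded = btn.getAttribute('aria-expanded') === 'true';
    // Keep the visual state and the ARIA state in sync
    btn.setAttribute('aria-expanded', String(!expanded));
    panel.hidden = expanded;
  });
</script>
```

Because a native `button` is used, keyboard operability and the focus indicator come for free; the `aria-expanded` state tells screen reader users whether the panel is open.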

Dynamic updates

Some UI patterns can also generate dynamic content updates, for instance, search results that appear as you type, success and error notifications after an action, and adding an item to the shopping cart. A common issue with these updates is that they are only visually available, not programmatically, and thus not accessible to screen readers. Making them accessible requires implementing the updated content using ARIA live regions and testing it with screen readers. 
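A sketch of a polite live region: screen readers announce text inserted into it without moving focus (the `id` and message are illustrative):

```html
<!-- The region must already exist in the DOM before it is updated,
     or the announcement may not be made reliably -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // Called after e.g. an add-to-cart action completes
  function announce(message) {
    document.getElementById('status').textContent = message;
  }
  announce('Item added to shopping cart');
</script>
```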

Language settings

Language is critical for screen readers, particularly for people who speak multiple languages and use multilingual websites. Screen readers use the language settings on web pages to determine the correct pronunciation. 

Automated tools can currently detect only the existence of a page language setting, not whether the setting is correct or whether parts of the page content use a different language than the page language. Here in Finland, for instance, Finnish and Swedish are official languages and many websites are multilingual. If both languages are used on the same page without correct language markup, screen reader announcements will sound more like the Swedish Chef: funny, but not very helpful.

Language settings are tested with a screen reader by listening through the content, or by reviewing the content while inspecting the markup behind it.
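In markup, the page language is set on the root element and overridden per element where the content switches language (the Finnish and Swedish strings here are illustrative):

```html
<html lang="fi">
  <body>
    <p>Tervetuloa sivuillemme!</p>
    <!-- Mark the Swedish passage so screen readers switch pronunciation -->
    <p lang="sv">Välkommen till vår webbplats!</p>
  </body>
</html>
```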


Key takeaway: Go the extra mile

Using automated tools is a cost-effective solution for early detection of many accessibility issues. But it’s evidently not enough. So go the extra accessibility mile and conduct manual tests with a keyboard and screen reader. Once this is a habit, you’ll also learn how to avoid these issues in the first place. Eventually the ‘extra mile’ becomes effortless and your systems become accessible to many more people. 

Do you need help with accessibility? Just drop us a line.