Accessibility is easy to discuss in the abstract and much harder to practice well. It is tempting to turn it into a checklist, a score, or a vague promise that a site is “accessible” because an automated tool said something encouraging. But accessibility is ultimately about people using real devices, real browsers, real assistive technology, and real websites.
That is why survey data can be so useful. It does not replace direct user research, manual testing, or lived experience, but it can challenge bad assumptions. It can show which environments are common, which habits are widespread, and which parts of a website matter more than developers may realize.
The WebAIM Screen Reader User Survey 2024 includes several points that have helped shape how I think about accessibility, especially as a developer building and testing Siteimp. These are the points I keep coming back to.
Useful points from the WebAIM Screen Reader Survey 2024
1. 99.8% of screen reader users browse the web with JavaScript enabled.
2. 86.1% of screen reader users on computers use Windows.
3. 52.3% of screen reader users use Google Chrome as their primary browser.
4. JAWS is the most common primary desktop/laptop screen reader; 40.5% of screen reader users report it as their primary screen reader. NVDA is second at 37.5%.
5. On mobile, screen reader users are most likely to use an Apple device as their primary mobile device: 70.6% report using an iPhone, iPad, iEtc. Despite this, only 9.7% report that VoiceOver is their most commonly used screen reader.
Points #4 and #5 require some context. 49.5% of screen reader users report using mobile and desktop/laptop about the same amount, and 40.2% report using a desktop/laptop most of the time.
6. 58% of mobile screen reader users would choose an app over the web if given the choice.
7. Do you think the solution is better assistive technology? The data from this survey does not support that: 85.9% of respondents think that more accessible websites would have a greater impact on web accessibility than improved assistive technology.
For those of you content engineering types…
8. When screen reader users are trying to find information on a lengthy website, 71.6% navigate via headings. And 57% of screen reader users find heading levels extremely helpful.
Why these points matter
The first thing that jumps out is how practical this data is. These findings are not abstract accessibility trivia. They affect how we test websites and applications.
If most screen reader users on computers are using Windows, then Windows testing matters. If JAWS and NVDA are the most common desktop screen readers, then testing with NVDA is not a strange edge case. It is one of the most direct ways a developer can start understanding how so many people actually experience the web.
The JavaScript point also matters. It does not mean developers should stop caring about progressive enhancement, resilience, or graceful failure. It does mean that the old assumption that screen reader users are mostly browsing with JavaScript disabled is not a useful foundation for modern accessibility work.
The heading data is especially important for content-heavy websites. Headings are not decorative text. They are navigation, structure, and context. If a lengthy page has weak headings, confusing heading levels, or headings chosen only for visual size, it may be harder to use even if the paragraphs themselves are well written.
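To make the heading point concrete, here is a minimal sketch of the kind of check an automated tool might run: collect a page's h1–h6 outline and flag levels that jump more than one step (an h2 followed directly by an h4, for example). The names here (HeadingCollector, heading_level_skips) are illustrative, not taken from any real tool, and a production check would handle far more cases.

```python
# Minimal heading-outline check: collect h1-h6 headings from an HTML
# string and flag any heading whose level skips past the previous one.
from html.parser import HTMLParser

HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

class HeadingCollector(HTMLParser):
    """Collects (level, text) pairs for every h1-h6 element."""
    def __init__(self):
        super().__init__()
        self.headings = []          # list of (level, text)
        self._current_level = None  # set while inside a heading element
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADING_TAGS:
            self._current_level = int(tag[1])
            self._buffer = []

    def handle_data(self, data):
        if self._current_level is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag in HEADING_TAGS and self._current_level is not None:
            self.headings.append(
                (self._current_level, "".join(self._buffer).strip())
            )
            self._current_level = None

def heading_level_skips(html):
    """Return headings whose level jumps more than one step
    past the previous heading's level."""
    collector = HeadingCollector()
    collector.feed(html)
    skips = []
    previous = 0
    for level, text in collector.headings:
        if previous and level > previous + 1:
            skips.append((level, text))
        previous = level
    return skips

page = """
<h1>Accessibility report</h1>
<h2>Summary</h2>
<h4>Details</h4>
"""
print(heading_level_skips(page))  # the h4 skips past h3: [(4, 'Details')]
```

A report like this does not tell you whether the headings are well written, only whether the outline is structurally plausible; judging whether heading text actually describes its section still takes a human.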
Accessibility is not only an assistive technology problem
The most important point may be that respondents thought more accessible websites would have a greater impact than improved assistive technology. That puts responsibility back where it belongs: on the people designing, writing, building, testing, and maintaining websites.
How this shapes testing
For my own work, this data reinforces a simple idea: accessibility testing has to include real workflows, not just automated reports. Automated checks are useful, but they cannot tell you whether a page feels coherent when navigating by headings, whether a workflow makes sense with a screen reader, or whether a keyboard user can move through an interface without getting lost.
It also reinforces why Siteimp’s accessibility work is moving beyond a single score. A score can be useful, but it compresses too much. The individual checks, explanations, likely scope, and practical review steps matter because they help turn accessibility from a vague aspiration into a specific practice.
The goal is not to pretend automated testing can replace real users, manual review, or accessibility expertise. The goal is to collect better evidence, explain it clearly, and help people find the next useful thing to check.
Conclusion
These survey points are useful because they make accessibility feel less theoretical. They remind us that people are using specific screen readers, browsers, operating systems, devices, and navigation strategies. They also remind us that better websites matter.
If users are navigating long pages by headings, headings deserve care. If Windows, Chrome, JAWS, and NVDA are common parts of the screen reader experience, they deserve attention in testing. If people say more accessible websites would help more than better assistive technology, then developers, designers, writers, and site owners have work to do.
That is not a discouraging conclusion. It is an empowering one. Better structure, clearer labels, stronger headings, keyboard-friendly interactions, and more thoughtful testing are all things we can improve. The work is specific, practical, and worth doing.