Testing My Portfolio for Accessibility: What axe-core Found (and What It Didn't)

6 min read

What happened when I added Playwright and axe-core to my portfolio site — the structural issues that surfaced, how I fixed them, and what automated testing still missed.

I've been spending more time on the frontend side of my work lately — improving my React skills and getting more deliberate about how I build UIs. While doing that, I kept noticing how often accessibility is treated as something to bolt on at the end, if at all — WebAIM's 2025 analysis of one million homepages found that 94.8% had at least one detectable accessibility failure. I didn't want to build that habit into my own site, so I decided to take it seriously from the start.

That meant more than running Lighthouse. I added Playwright and axe-core to run automated accessibility checks against every route on every build. I also test with keyboard navigation — not just as a QA step, but as part of how I use my own site. And I started using VoiceOver (the default screen reader on macOS) to check things that automated tools don't catch.
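To make that concrete, here's a minimal sketch of what a per-route check with @axe-core/playwright can look like. The route list and test names are illustrative, not my actual configuration:

```typescript
// Sketch only: one axe-core scan per route, assuming Playwright Test
// and the @axe-core/playwright integration. Routes are placeholders.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

const routes = ['/', '/blog', '/projects'];

for (const route of routes) {
  test(`a11y scan: ${route}`, async ({ page }) => {
    await page.goto(route);
    // Run the axe-core ruleset against the fully rendered page.
    const results = await new AxeBuilder({ page }).analyze();
    // Any violation fails the test, and therefore the build.
    expect(results.violations).toEqual([]);
  });
}
```

Wired into CI, a failing scan blocks the build, which is what keeps the checks honest on every route, every deploy.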

Here's what I found.

The skip link wasn't working

A skip link is an accessibility pattern that lets keyboard and screen reader users jump directly to the main content, bypassing the navigation on every page. It works by linking to an anchor: <a href="#main">Skip to main content</a> needs a corresponding <main id="main"> to land on.

My skip link existed, but it wasn't working reliably because I had placed <main> inside individual page components rather than in the shared layouts. The fix was to move it into the layout wrappers — layout.tsx, BlogLayout, ProjectLayout, and so on — so that every page consistently has the skip link target in the right place. The page-level components that previously used <main> were updated to use <section> instead, which keeps the content semantic without competing with the layout's <main>.

I also added id="main" to the <main> element itself. This is the correct pattern — the skip link target has to be explicitly reachable by ID.
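Putting that together, the shared layout ends up shaped roughly like this. This is a sketch assuming a Next.js-style layout.tsx, with the real component details simplified:

```tsx
// Illustrative: the skip link and its target live together in the
// layout, so every page using this layout gets both, consistently.
import type { ReactNode } from 'react';

export default function Layout({ children }: { children: ReactNode }) {
  return (
    <>
      <a href="#main">Skip to main content</a>
      <nav>{/* site navigation */}</nav>
      {/* Page components render inside <main> and use <section>,
          never their own <main>. */}
      <main id="main">{children}</main>
    </>
  );
}
```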

Some decorative icons weren't hidden from assistive technology

I have a helper function, decorateIcon, that handles icon accessibility in two cases: if an icon is decorative (no label provided), it gets aria-hidden="true" and role="presentation" so screen readers skip it; if it's meaningful (a label is provided), it gets role="img" and an accessible label.

The function works correctly. The problem was that I had simply forgotten to use it on some icons. Those icons were neither hidden nor labelled, so screen readers were encountering them without context. axe-core flagged them, and the fix was straightforward: wrap the missed icons in decorateIcon.
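For illustration, a helper in this style might look like the following. This is a sketch of the pattern, not my exact implementation:

```typescript
// Sketch of a decorateIcon-style helper: it returns a props object to
// spread onto the icon element. The real helper may differ.
type IconA11yProps =
  | { 'aria-hidden': 'true'; role: 'presentation' } // decorative
  | { role: 'img'; 'aria-label': string };          // meaningful

function decorateIcon(label?: string): IconA11yProps {
  if (!label) {
    // No label: the icon is decorative, so hide it from the
    // accessibility tree entirely.
    return { 'aria-hidden': 'true', role: 'presentation' };
  }
  // A label was provided: expose the icon as a named image.
  return { role: 'img', 'aria-label': label };
}
```

In JSX you'd spread the result onto the element, e.g. `<svg {...decorateIcon()}>` for a decorative icon or `<svg {...decorateIcon('GitHub')}>` for a meaningful one.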

This is a good example of why automated testing is useful beyond the initial build. I'd applied the pattern consistently in most places, but "most places" isn't good enough for accessibility. The tests catch the gaps that manual review misses.

Too many divs

A lot of the content on the site was wrapped in <div> elements that could have been more semantic. I went through and replaced them where appropriate — grouping self-contained pieces of content under <article> tags, and using <section> for grouped but non-standalone content. Screen readers use landmark elements to let users navigate the page structure, so getting these right matters beyond just code hygiene.
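As a sketch of the swap (component and label names are illustrative): a self-contained post preview becomes an <article>, and the grouping around it becomes a labelled <section> landmark.

```tsx
// Illustrative: semantic landmarks instead of anonymous <div> wrappers.
type Post = { slug: string; title: string };

export function RecentPosts({ posts }: { posts: Post[] }) {
  return (
    // A labelled <section> appears as a navigable landmark.
    <section aria-label="Recent posts">
      {posts.map((post) => (
        // Each preview is self-contained, so <article> fits.
        <article key={post.slug}>
          <h3>{post.title}</h3>
        </article>
      ))}
    </section>
  );
}
```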

Interactive icons needed explicit labels

The mobile menu toggle and the site logo both had labelling issues: Firefox's accessibility tools flagged that neither element exposed an accessible name, so there was no way to tell what they were for. Adding IDs and explicit labels for screen readers resolved this. It sounds like a small thing, but for someone navigating by screen reader, an unlabelled interactive element is a dead end.
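For the menu toggle, the fix amounts to giving the icon-only button an explicit accessible name and hiding the icon itself. A sketch, with names assumed rather than taken from my code:

```tsx
// Illustrative: aria-label names the control, aria-expanded reports its
// state, and the icon itself is hidden from the accessibility tree.
import { useState } from 'react';

export function MenuToggle() {
  const [open, setOpen] = useState(false);
  return (
    <button
      aria-label={open ? 'Close navigation menu' : 'Open navigation menu'}
      aria-expanded={open}
      onClick={() => setOpen(!open)}
    >
      <svg aria-hidden="true" viewBox="0 0 24 24">{/* icon paths */}</svg>
    </button>
  );
}
```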

What automated testing didn't catch

The VoiceOver testing turned up something none of the automated tools flagged: the emojis I'd added to my README documentation. I'd included them thinking they might help neurodivergent readers scan the content more easily. What I hadn't considered is that VoiceOver reads emojis aloud — every single one, with its full description. A README with emoji-decorated headings becomes a wall of "star emoji, heading text, star emoji" when read by a screen reader.

I removed them. The intent was good; the outcome wasn't. It's a useful reminder that accessibility decisions made without testing can go the wrong way even when you're trying to be thoughtful.

A note on tools

Automated tools like axe-core are genuinely useful — they catch structural and attribute-level issues consistently and at scale. But they're widely cited as finding around 30-40% of accessibility issues. The rest requires manual testing: keyboard navigation, screen reader walkthroughs, and thinking about the actual experience of using the site without a mouse or with vision assistance.

If you're looking for a list of tools to get started, I keep a reference list of accessibility tools and resources on this site.

— Karl

Enjoyed this post?

I'm available for freelance work and consultancy in EdTech and full-stack development.