


Designing a cross-platform system that allows low vision users to read an image’s Alt Text without activating a screen reader.
Platform: Desktop / iOS
Role: UX Researcher and UI Designer
Project Type: Social Media Feature Add-On, Vibe-Coded Google Chrome Extension
Industry: Accessibility, Social Media, AI
Tools: Figma, Claude, Zoom, Descript, Whimsical


Background:
About Blindness and Low Vision:
“Blind people are not a monolith. Everybody does things a little differently.”
- One user during our usability study
Only 15-18% of legally blind people are totally blind. (Source: Iowa Department for the Blind)
Blindness is a spectrum; people who have low vision have varying degrees of visual acuity. How they see at any given moment is influenced by the relative brightness of the world around them, time of day, room lighting, eye fatigue, and more.
About Accessibility in General:
Accessibility is defined as the design of products, devices, services, vehicles, or environments to be usable by disabled people.
There are a variety of reasons why accessibility is a deeply important consideration in digital design. For starters, it’s simply the right thing to do - it is a way to break a cycle of exclusionary practices towards people with disabilities and provide an experience that they can use.
It’s also a tenet of good design. Design is fundamentally the practice of making the world a more friendly and usable place for all. All too often, sighted designers prioritize the visual world alone - focusing on pixel-perfect experiences but not necessarily creating accessible ones. These two things should go hand-in-hand.
Lastly, there is a strong business case for Accessibility. According to The World Economic Forum, “By 2030 the global assistive technology market is expected to be worth $31.22 billion. 3.5 billion people will be using assistive products by 2050.”
The Problem:
For sighted and low vision users, Alt text is inaccessible without a screen reader.
Alt text might contain useful information that adds context or detail for someone with partial vision, but without a screen reader, they can't access it.
What if there were a way to make Alt Text accessible to everyone?
Inspired by a suggestion from a friend who has low vision, we explored building a tool that could make that possible.
High Fidelity Prototyping:
An A/B Test
In response to user feedback, we immediately changed gears and built a new prototype from scratch.
Like our first prototype, this one is also an A/B test, though we're testing different variables this time. The two tasks users were asked to accomplish, though nearly identical, gave us both a way to compare how alt text was displayed and a springboard for a conversation around AI- versus human-generated content.
A) Alt Text overlayed on top of the image


When the user clicks the "Alt text button" on the left, they see the text displayed over the image (as shown on the right).
B) Alt Text displayed below the Alt Text button, beneath the image


Do you notice the difference in how the Alt Text is displayed?
This group displays the text below the "Alt Text / Dismiss" button.
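As a rough sketch of how option B might work in the vibe-coded Chrome extension, a content script could read each image's alt attribute and toggle a caption element beneath it. The function names, class name, and label wording below are illustrative, not taken from the actual extension:

```javascript
// Build the caption string shown beneath an image (option B).
// `isAI` flags whether the description is AI-generated, so the
// label tells users which kind of text they're reading.
function buildCaption(altText, isAI) {
  const source = isAI ? "AI-generated" : "Human-written";
  return `${source} alt text: ${altText}`;
}

// In a browser content script, the "Alt Text / Dismiss" button
// could toggle the caption like this (hypothetical wiring):
function toggleCaption(img, isAI) {
  const existing = img.nextElementSibling;
  if (existing && existing.classList.contains("alt-caption")) {
    existing.remove(); // "Dismiss" behavior
    return;
  }
  const caption = document.createElement("p");
  caption.className = "alt-caption";
  caption.textContent = buildCaption(img.alt || "No alt text provided", isAI);
  img.insertAdjacentElement("afterend", caption); // below the image, not over it
}
```

Because the caption is a sibling element rather than an overlay, it never obscures the photo, which is exactly the trade-off this A/B test was probing.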
Which one would users prefer? Let's find out.
Usability Testing

Key Findings:
Most participants preferred alt text displayed below the image (Option B) over having it superimposed atop the image (Option A).
Ultimately, our usability test validated the addition of an Alt Text Visual Display button. Naturally, it's not a one-size-fits-all solution; an accessibility feature can provide different options for different people at different times.
Three out of four users believed they would find a use for it from time to time. The fourth user, while having low vision, uses a screen reader almost exclusively when using technology, which would sidestep the utility of this tool.
Positive Feedback
• Alt text button is useful: All participants found the alt text button valuable, especially on platforms like Facebook and Instagram where image content is often unclear for low-vision users. The concept itself was validating for this group and seen as potentially beneficial for a broader audience.
• AI or human-written text: All participants appreciated that alt text was labelled as AI-generated or human-written. It increases user trust and helps users judge the reliability and tone of a description.
Iterative Feedback
• Overlays vs. below-image: Below-image display lets users bounce between the image and description, comparing them easily, whereas overlays can obscure details or require dismissing before viewing the photo again. Most users preferred it for this reason.
Other Takeaways
Assistive tech limitations: Though this test was designed to be visual (for low vision users who wouldn't necessarily turn to a screen reader for this task), we learned that Figma lacks accessibility support for mobile prototypes.
During user interviews, we learned that low vision users use different accessibility tools on different devices, sometimes toggling a screen reader on and off for a specific task.
Because the Figma prototype was not screen reader-compatible on mobile, realistic testing was limited. Future accessibility prototypes should be platform-appropriate, which might mean using a more accessibility-friendly prototyping tool (like Framer, which plays nicely with a screen reader on both mobile and desktop).
Avoid all-caps in displayed text; prioritize natural reading flow and familiar fonts. Currently, Facebook's AI Alt Text tagger (shown verbatim in our second prototype) tries to match whatever case it sees in an image at the expense of readability, which makes reading full sentences awkward.
Inconsistent platform accessibility: When discussing Bluesky's implementation of a similar alt text feature, one user found it ironic that the platform added this low vision accessibility feature while its screen reader accessibility remained lacking and somewhat awkward.
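The all-caps takeaway above could be handled with a small normalization step before the text is displayed. A minimal sketch, assuming the extension preprocesses the string first (the function name is illustrative, not from the project):

```javascript
// Convert fully-uppercase alt text to sentence case for readability,
// while leaving mixed-case text (e.g. proper nouns) untouched.
function normalizeAltCase(text) {
  const hasLetters = /[a-z]/i.test(text);
  if (!hasLetters || text !== text.toUpperCase()) {
    return text; // already mixed case, or nothing to normalize
  }
  const lower = text.toLowerCase();
  // Re-capitalize the first letter of each sentence.
  return lower.replace(
    /(^\s*|[.!?]\s+)([a-z])/g,
    (match, boundary, letter) => boundary + letter.toUpperCase()
  );
}
```

Leaving mixed-case input alone matters: text that is already sentence-cased may contain intentional capitalization (names, acronyms) that a blanket lowercase pass would destroy.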