Adding a single line of code can make some interactive visualizations accessible to screen reader users

Interactive visualizations have changed the way we understand our lives. For example, they can track coronavirus infections in every state.

But these images are often inaccessible to people using screen readers, software programs that scan the contents of a computer screen and make them available through synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including total or partial blindness, learning disabilities, or motion sensitivity.

Researchers from the University of Washington collaborated with screen reader users to design VoxLens, a JavaScript plugin that allows people to interact with visualizations with one extra line of code. VoxLens users can get a high-level summary of the information described in a graph, listen to a graph translated into sound, or use voice commands to ask specific questions about the data, such as the average or minimum value.

The team presented this project May 3 at CHI 2022 in New Orleans.

“When I look at a chart I can pull out all the information I’m interested in. Maybe it’s the general trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student at the Paul G. Allen School of Computer Science & Engineering. “Right now, screen reader users are getting very little or no information about online visualizations, which can sometimes be a matter of life or death in light of the COVID-19 pandemic. The goal of our project is to provide screen reader users with a platform where they can extract as much or as little information as they want.”

Screen readers can convey the text on a screen to users because text is what researchers call "one-dimensional information."

“There’s a beginning and an end to a sentence and everything else comes in between,” said co-senior author Jacob O. Wobbrock, a UW professor in the Information School. “But once you move things into two-dimensional spaces, like visualizations, there’s no clear beginning and end. It’s just not structured in the same way, which means there’s no obvious entry point or sequence for screen readers.”

The team started the project by working with five screen reader users with partial or total blindness to find out how a potential tool might work.

“In terms of accessibility, it’s very important to follow the principle of ‘nothing about us without us,'” Sharif said. “We’re not going to build something and then see how it works. We are going to build it taking into account the feedback from users. We want to build what they need.”

To implement VoxLens, visualization designers only need to add one line of code.

“We didn’t want people to jump from one visualization to another and experience inconsistent information,” Sharif said. “We have turned VoxLens into a public library, which means that you will hear the same kind of summary for all visualizations. Designers can just add that one line of code and we’ll do the rest.”
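For illustration, the integration might look something like the sketch below. The function name, argument order, and option names here are assumptions made for this example, not VoxLens's documented API; `chartElement` and `data` stand in for whatever the designer's visualization code already defines.

```html
<!-- Existing visualization code (e.g., a D3 chart) above ... -->

<!-- Load the VoxLens plugin (path is illustrative) -->
<script src="voxlens.js"></script>
<script>
  // The "one line": attach VoxLens to an existing chart element,
  // passing the underlying data so the plugin can generate summaries,
  // sonification, and answers to voice queries.
  // (Hypothetical signature and option names, for illustration only.)
  voxlens('d3', chartElement, data, { xLabel: 'state', yLabel: 'cases' });
</script>
```

The point of the design, as Sharif describes, is that this single call is the same regardless of which charting library drew the visualization, so screen reader users hear a consistent kind of summary everywhere.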

The researchers evaluated VoxLens by recruiting 22 screen reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.

Compared to participants from a previous study who did not have access to this tool, VoxLens users completed the tasks with 122% more accuracy and 36% shorter interaction time.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour looking for what the maximum is,” Sharif said. “In our study, interaction time refers to how long it takes to extract information, so reducing it is a good thing.”

The team also interviewed six participants about their experiences.

“We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants felt about VoxLens,” Sharif said. “We received very positive feedback. Someone told us they’ve been trying to access visualizations for the past 12 years and this was the first time they could do it easily.”

Currently, VoxLens works only for visualizations created with JavaScript libraries, such as D3, chart.js or Google Sheets. But the team is working to expand to other popular visualization platforms. The researchers also acknowledged that the speech recognition system can be frustrating to use.

“This work is part of a much larger agenda for us — removing design biases,” said co-senior author Katharina Reinecke, a UW associate professor in the Allen School. “When we build technology, we tend to think of people who are similar to us and have the same capabilities as us. For example, D3 has really revolutionized access to online visualizations and improved how people can understand information. But values are ingrained and people are left out. It is really important that we start thinking more about how we can make technology useful for everyone.”

Other co-authors on this paper are Olivia Wang, a UW student in the Allen School, and Alida Muongchan, a UW student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.
