I joined a design and engineering team of 27 students at Carnegie Mellon University to build an inclusive mobile app for accessible ridesharing in autonomous vehicles (AVs). I led the design of the conversation and voice interaction feature. This project was submitted to the U.S. Department of Transportation Inclusive Design Challenge.
Tools
VoiceFlow
Figma
FigJam
VoiceXD (in beta)
Methods
Rapid iteration
User Testing
CUI Modeling
Prototyping
How might we give riders independence and control while using an Autonomous Vehicle (AV)?
Provide a multi-control mobile app for disabled riders to independently control an AV.
Table of Contents
Context
Starting point
Rapid Iteration & Testing
“Fill-in-the-blank”
Final Prototype
Results
Reflection
What is Unigo?
Unigo is an accessible smartphone app for control and communication in autonomous vehicles (AVs).
Unigo was designed to envision better AV ridesharing experiences for disabled riders. But for that vision to be realized, AVs must have interfaces that are accessible to everyone, regardless of ability and communication preferences.
Our research team found that target users have requested voice control features. We also know that many people already use voice controls on their phones in the form of Voice User Interfaces (VUIs). My task was to investigate how to bring a VUI to Unigo: how do we enable our app to support these interactions?
Overly structured conversations don’t work.
With linearly structured conversations, the design failed to let participants complete tasks the way they wanted to.
My initial testing was of a linear conversation design. I created paper prototypes to conduct Wizard of Oz testing, and recruited a couple of my colleagues as test participants. I quickly found that the design limited natural conversation. To confirm this, I iterated on the design: I created a second version in Google Slides and tested it with disability community members as well as other colleagues.
Unfortunately, both iterations failed to support seamless conversation. With this structure, the design couldn’t handle organic language or allow room for flexibility. It was clear that a linear, “this, then that” type of structure was not going to work.
Pivot to a design that supports natural conversations.
Designing a dialog flow centered on user intents, and responding to fill in missing pieces of information before executing changes.
Instead of pre-programming a whole conversation for the user to follow, what if the technology could just ask questions to “fill in” the missing piece of information? That’s the thought behind this model. I wanted to test this right away, so I made a visual model in Figma.
To test this approach, I used printouts of the model to take notes on how conversations unfolded. Recruited participants included disability community members and people in my network who had used voice agents. These sessions helped me determine the most common phrases and what information the Voice Agent (VA) should ask for.
Overall, the “Fill-in-the-blank” model seemed to fit naturally with participants’ voice interactions.
1. All participants said “my, me, mine.” (e.g. “the light above me”)
1 participant said that being asked which seat/light was annoying.
2. Numbers are generally intuitive for participants.
2 participants found it odd at first, but learned it easily.
1 participant strongly preferred numbers.
3. Make assumptions if you’re confident.
Ask only for what’s required, and make reasonable assumptions about what’s optional (see the sketch below).
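Putting these findings together, here is a minimal, hypothetical sketch of a single “fill-in-the-blank” turn in Python. It is not the team’s actual VoiceFlow logic; the names and defaults (e.g. RIDER_SEAT, REQUIRED_SLOTS, next_response) are illustrative assumptions.

```python
# Hypothetical sketch of one "fill-in-the-blank" dialog turn.
# Not the actual Unigo/VoiceFlow logic; names and defaults are illustrative.

RIDER_SEAT = 2  # assume the app already knows which seat the rider is in

# Slots each intent needs before a change can be executed.
REQUIRED_SLOTS = {
    "adjust_light": ["seat", "state"],   # which light, and on/off
    "set_temperature": ["degrees"],      # seat can be assumed (finding 3)
}

def resolve_slots(intent, heard_slots):
    """Fill in slots from the utterance, resolving 'my/me/mine' to the rider's seat."""
    slots = dict(heard_slots)
    # Finding 1: "the light above me" refers to the rider's own seat.
    if slots.get("seat") == "mine":
        slots["seat"] = RIDER_SEAT
    # Finding 3: make a confident assumption instead of asking.
    if intent == "set_temperature" and "seat" not in slots:
        slots["seat"] = RIDER_SEAT
    return slots

def next_response(intent, heard_slots):
    """Ask only for what is still missing; otherwise execute the change."""
    slots = resolve_slots(intent, heard_slots)
    for name in REQUIRED_SLOTS[intent]:
        if name not in slots:
            return f"Sure. What {name} would you like?"   # fill in the blank
    return f"Okay, executing {intent} with {slots}."

# "Turn on the light above me" -> all required slots are present, so it executes.
print(next_response("adjust_light", {"seat": "mine", "state": "on"}))
# "Set the temperature" -> degrees is missing, so the VA asks for it.
print(next_response("set_temperature", {}))
```

The key property is that the conversation can start anywhere: the VA only asks a question when a required piece of information is genuinely missing.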
Fleshing out an interactive VUI prototype.
I created a VoiceFlow interactive prototype and a design system model that incorporated user testing feedback.
Dialog flows with utterances, responses, intents, and error catching were fleshed out in FigJam. This established a design system for the team’s VUI: if anyone wanted to build out more conversations, they could base them on this template.
Select dialog flows were prototyped in VoiceFlow. I focused on controls that represented a format that could be replicated across the rest of the controls: temperature, backrest, headrest, and fan speed. My team then tested this prototype to confirm it worked and to fine-tune details.
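To give a sense of what that replicable format could look like, here is a hypothetical example of one control defined as data, with sample utterances, slots, a confirmation, and an error-catch reprompt. The field names are my own illustration, not the team’s FigJam template or a VoiceFlow export.

```python
# Hypothetical sketch of a reusable VUI control definition; field names are
# illustrative and not the team's actual FigJam template or VoiceFlow export.

temperature_control = {
    "intent": "set_temperature",
    "sample_utterances": [
        "make it warmer",
        "set my temperature to {degrees} degrees",
        "I'm cold",
    ],
    "required_slots": {"degrees": "number"},
    "optional_slots": {"seat": "seat_number"},  # assumed to be the rider's seat
    "confirmation": "Setting your temperature to {degrees} degrees.",
    "error_catch": "Sorry, I didn't catch that. What temperature would you like?",
}

# The same shape can be copied for backrest, headrest, fan speed, and so on.
fan_speed_control = {
    "intent": "set_fan_speed",
    "sample_utterances": ["turn up the fan", "set the fan to {level}"],
    "required_slots": {"level": "number"},
    "optional_slots": {"seat": "seat_number"},
    "confirmation": "Setting your fan to {level}.",
    "error_catch": "Sorry, I didn't catch that. What fan level would you like?",
}
```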
🔗 Test the VoiceFlow prototype here
The Unigo app and documentation were submitted to the Inclusive Design Challenge.
Our team submitted our final app at the end of May. The project was accepted as a semifinalist; final results are still pending.
Citation of the submitted report:
Nikolas Martelaro, Patrick Carrington, Sarah E. Fox, and Jodi Forlizzi. 2022. Designing an Inclusive Mobile App for People with Disabilities to Independently Use Autonomous Vehicles. In 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22), September 17–20, 2022, Seoul, Republic of Korea. ACM, New York, NY, USA, 23 pages.
DOI link (will work in late September): https://doi.org/10.1145/3543174.3546850
A little testing, even if it’s scrappy, can go a long way.
Reflecting on this project, I can easily say that it was one of my favorite projects as a graduate student at CMU. Not only was it a lot of fun, it also challenged my initiative as a designer. As I was designing Voice User Interfaces (VUIs), I realized that conversations aren’t always linear and are often unpredictable. This realization came from initial testing I did with paper prototypes and scrappy protocols, with people who happened to be available near me.
And I am so glad I did that testing. The starting design I used for the test failed epically, but it worked instead as a catalyst for conversation with participants. Together we generated ideas to improve the approach. And of course, the design at the end is not just “my” design; it’s the result of everyone’s ideas and feedback.
Being open to testing with others, and having the humility to incorporate the feedback, has helped push this project to where it landed. I’ll take this lesson with me going forward in my career as an HCI and UX professional.