🎙 Project 02

Conversation design for accessible autonomous vehicles

Role
Voice & Conversation Designer
Team
Carnegie Mellon University, Propel IT
Type
Design Challenge at CMU
Time
January - June 2022

Video of the VoiceFlow demo for temperature controls in the car.

Tools & Methods

Tools

VoiceFlow
Figma
FigJam
VoiceXD (in beta)

Methods

Rapid iteration
User Testing
CUI Modeling
Prototyping

30-second pitch

How might we give riders independence and control while using an Autonomous Vehicle (AV)?

Provide a multi-control mobile app for disabled riders to independently control the AV.

Screen based controls for in-car temperature changes.

Voice User Interface (VUI) for in-car temperature changes.

Designing conversations for accessible Autonomous Vehicles (AVs).

Table of Contents

Context
Starting point
Rapid Iteration & Testing
“Fill-in-the-blank”
Final Prototype
Results
Reflection

Context

What is Unigo?

Unigo is an accessible smartphone app for control and communication in autonomous vehicles (AVs).

Where does Voice User Interface (VUI) fit in?

Unigo was designed to envision better AV ride-sharing experiences for disabled riders. But for that to be realized, AVs must have interfaces accessible to everyone, regardless of ability and communication preferences.

Our research team found that target users had requested voice control features, and we know that many people already use voice controls in the form of Voice User Interfaces (VUIs) on their phones today. My task was to investigate bringing a VUI to Unigo: how do we enable our app to support those voice interactions?

Rapid Iteration & Testing

Overly structured conversations don’t work.

Linearly structured conversations failed to let participants complete tasks the way they wanted to.

4 Rapid testing sessions

1 Disability community testing session

2 iterations

User testing findings

My initial testing was of a linear conversation design. I created paper prototypes to conduct Wizard of Oz testing and recruited a couple of my colleagues as test participants. I quickly found that the design limited natural conversation. To be sure this was the case, I iterated on the design: I created a second version in Google Slides and tested it with disability community members as well as other colleagues.

Unfortunately, both iterations failed to support seamless conversation. The structure couldn’t accommodate organic language or leave room for flexibility. It was clear that a linear, “this, then that” type of structure was not going to work.

Paper prototype setup for initial testing of the Linear Conversation design.

Virtual user tests using Google Slides on a Linear Conversation prototype.

Fill-in-the-blank

Pivot to a design that supports natural conversations.

I designed a dialog flow centered on user intents, with the system asking follow-up questions to fill in missing pieces of information before executing changes.

8 Rapid testing sessions

12 Feedback sessions

1 Disability community testing session

5 total iterations

The “Fill-in-the-blank” model

Instead of pre-programming a whole conversation for the user to follow, what if the technology could just ask questions to “fill in” the missing pieces of information? That’s the thought behind this model. I wanted to test it right away, so I made a visual model in FigJam.

Initial Fill-in-the-blank model in FigJam.
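To make the model concrete, here is a minimal Python sketch of the idea: detect an intent, then ask only for the missing pieces. The intent, slot names, and prompts are hypothetical stand-ins for illustration, not the project’s actual implementation.

```python
# Minimal slot-filling sketch: detect an intent, then ask only for the
# missing pieces of information. Intent, slot, and prompt names are
# hypothetical examples.

REQUIRED_SLOTS = {"set_temperature": ["direction", "seat"]}

PROMPTS = {
    "direction": "Would you like it warmer or cooler?",
    "seat": "Which seat should I adjust?",
}

def next_agent_turn(intent: str, slots: dict) -> str:
    """Return the agent's next line: a follow-up question or a confirmation."""
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in slots:
            return PROMPTS[slot]  # "fill in" the missing blank
    return f"Okay, making seat {slots['seat']} {slots['direction']}."

# "Make it warmer" fills the direction slot but not the seat slot,
# so the agent asks only about the seat.
print(next_agent_turn("set_temperature", {"direction": "warmer"}))
```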

User testing findings

To test this approach, I used printouts of the model to take notes on how conversations unfolded during testing. Recruited participants included disability community members and people in my network who have used voice agents. These sessions helped me determine the most common phrases and what information the Voice Agent (VA) should ask for.

Overall, the “Fill-in-the-blank” model fit naturally with how participants spoke.

1. All participants said “my, me, mine” (e.g. “the light above me”).
1 participant said that being asked which seat/light was annoying.

2. Numbers are generally intuitive for participants.
2 participants found them odd at first, but learned them easily.
1 participant strongly preferred numbers.

3. Make assumptions if you’re confident.
Ask only for what’s required, and make assumptions about what’s optional (see the sketch below).

Fill-in-the-blank prototype testing notes on paper.
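One way to read these findings is as slot-resolution rules: possessive phrases like “the light above me” resolve to the rider’s own seat, numbered seats are accepted directly, and confident defaults replace unnecessary questions. Below is a minimal Python sketch; RIDER_SEAT, the word list, and the fallback behavior are assumptions for illustration, not the tested design.

```python
# Sketch of the three findings as slot-resolution rules (hypothetical names).

RIDER_SEAT = 2  # the current rider's seat, e.g. known from the booking

def resolve_seat(utterance: str, slots: dict) -> int:
    words = utterance.lower().split()
    # Finding 1: "my, me, mine" resolves to the rider's own seat,
    # so the agent doesn't have to ask "which seat?" in the common case.
    if any(w in ("my", "me", "mine") for w in words):
        return RIDER_SEAT
    # Finding 2: numbered seats are intuitive once learned.
    if "seat" in slots:
        return int(slots["seat"])
    # Finding 3: when confident, assume rather than ask.
    return RIDER_SEAT

print(resolve_seat("turn on the light above me", {}))            # -> 2
print(resolve_seat("turn on the light at seat 4", {"seat": 4}))  # -> 4
```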

Final Prototype

Fleshing out an interactive VUI prototype.

I created an interactive VoiceFlow prototype and a design system model that incorporated user testing feedback.

Dialog flows

Dialog flows with utterances, responses, intents, and error catching were fleshed out in FigJam. This established a design system for the team’s VUI: anyone who wants to build out more conversations can base them on this template.

Dialog flow for controlling temperature of the car.
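As a rough illustration, one flow from such a template might be captured as structured data like the sketch below. The field names and sample phrases are assumptions, not the team’s actual schema.

```python
# Hypothetical sketch of one flow in the template: trigger utterances,
# the intent, slots, responses, and an error-catching fallback,
# mirroring the FigJam flows.

temperature_flow = {
    "intent": "set_temperature",
    "utterances": [
        "make it warmer",
        "I'm cold",
        "set the temperature to {degrees}",
    ],
    "slots": {"degrees": {"required": False, "default": "+2"}},
    "responses": {
        "confirm": "Setting the temperature to {degrees} degrees.",
        "clarify": "Would you like it warmer or cooler?",
    },
    "error": "Sorry, I didn't catch that. You can say things like 'make it warmer'.",
}
```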

Prototyping VoiceFlow

Select dialog flows were prototyped in VoiceFlow. I focused on controls that represented a format that could be replicated across the rest: temperature, backrest, headrest, and fan speed. To confirm everything worked, my team tested this prototype and fine-tuned the details.
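Because each control follows the same format, new flows could in principle be stamped out from the shared template. A hypothetical Python sketch, building on the schema idea above:

```python
# Stamp out one flow per control from the shared template format
# (hypothetical helper; the prototyped controls all share one shape).

CONTROLS = ["temperature", "backrest", "headrest", "fan speed"]

def make_flow(control: str) -> dict:
    name = control.replace(" ", "_")
    return {
        "intent": f"set_{name}",
        "utterances": [f"adjust the {control}", f"change the {control}"],
        "error": f"Sorry, I didn't catch that. Try 'adjust the {control}'.",
    }

flows = [make_flow(c) for c in CONTROLS]
print([f["intent"] for f in flows])
# -> ['set_temperature', 'set_backrest', 'set_headrest', 'set_fan_speed']
```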

🔗 Test the VoiceFlow prototype here

Final VoiceFlow prototype of Unigo VUI.

Video of the VoiceFlow demo for temperature controls in the car.

Results

The Unigo app and documentation were submitted to the Inclusive Design Challenge.

Our team submitted our final app at the end of May. The project was accepted as a semifinalist; final results are still pending.

Final submission

Citation of the submitted report:
Nikolas Martelaro, Patrick Carrington, Sarah E. Fox, and Jodi Forlizzi. 2022. Designing an Inclusive Mobile App for People with Disabilities to Independently Use Autonomous Vehicles. In 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22), September 17–20, 2022, Seoul, Republic of Korea. ACM, New York, NY, USA, 23 pages.

DOI link (will work in late September): https://doi.org/10.1145/3543174.3546850

Video submission of Unigo to Inclusive Design Challenge competition.

Next steps

I have had conversations with a designer from VoiceXD (in beta), a conversation design editor. Next steps will be to add more conversation flows in VoiceXD.

Reflection

A little testing, even if it’s scrappy, can go a long way.

In reflecting on this project, I can easily say that this was one of my favorite projects as a graduate student at CMU. Not only was it a lot of fun, but it also challenged my initiative as a designer. As I was designing Voice User Interfaces (VUIs), I realized that conversations aren’t always linear and can be unpredictable. That realization came from initial testing I did with paper and scrappy protocols, with people who happened to be available near me.

And I am so glad I did that testing. The starting design failed spectacularly, but it worked as a catalyst for conversation with participants. Together we generated ideas to improve the approach. Of course, the final design is not just “my” design; it’s the result of everyone’s ideas and feedback.

Being open to testing with others, and having the humility to incorporate the feedback, has helped push this project to where it landed. I’ll take this lesson with me going forward in my career as an HCI and UX professional.
