Lyft OTTO: Design VUI for Autonomous Vehicle

--

Process Document Number 1, Co-authors: Carol Ho, Christopher Costes, Lulin Shan, Matt Geiger

Project Introduction
Within this document, you’ll find the story of our work prior to beginning our demo video. This is a project for Interaction Design Studio at Carnegie Mellon University in Fall 2020, under the instruction of Dina El-Zanfaly and Kyuha Shim. Our design brief was to create a Voice Interface Experience for the automobile industry, set five years into the future (2025). The first sprint of work was primary and secondary research to understand market trends as well as potential users’ needs. From this initial research, we determined that creating trust in autonomous driving was the biggest opportunity. As we’ve progressed into the design phases, we’ve established prototypes of the brand identity, mobile interface, voice animations, and storyboards, all centered around an autonomous driving service that creates trust and connection.

THE RESEARCH

A. Transportation Snapshot

The demand for public modes of transit rises
According to the Pew Research Center, as of 2016 roughly one in ten people used public transit on a daily basis, and the number is growing. Among types of shared transportation, ride-hailing apps have grown especially popular: from 2016 to 2018, usage in the United States rose from 15% to 36%.

Most trips are local and short
According to Deloitte’s report, most car trips are local and short. While trip lengths vary, the numbers suggest that the primary use of cars isn’t longer journeys, meaning our best opportunity likely lies in addressing everyday travel (at the right of the graph).

Car sharing market opportunities grow
In 2017, there were already around 10 million people using car-sharing services, and, according to a study by Frost & Sullivan, their numbers will reach 36 million by 2025, maintaining an annual growth rate of 16.4%. Global Market Insights forecasts the value of the global car-sharing market in 2024 at USD 11 billion.

Source: Frost & Sullivan, Future of Car Sharing Market to 2025

Based on these market trends, we found that with mostly local destinations and car purchases increasingly motivated by giving many people access, community and shared transportation options will likely grow in importance. Moreover, the growth of micro-transit and the large number of people using public transit mean that autonomous driving will need to be multimodal and integrated within many transportation systems. Therefore, our project direction looks into the context of public communication in autonomous driving, while also exploring other opportunities such as community and shared options.

B. Landscape of Autonomous Driving

Autonomous driving normally refers to self-driving vehicles or transport systems that move without the intervention of a human driver.

The levels of autonomous driving range from Level 0 to Level 5, a scale defined by SAE (Society of Automotive Engineers) International in 2014 and widely recognized in the industry. No technology is yet capable of Level 5, full automation, and some experts claim this level will never be achieved. The most automated personal vehicles on the market perform at Level 2, where a human driver still needs to monitor the road and judge when to take over control; Tesla’s Autopilot is one example.

Levels of driving automation summary. Adapted from SAE by Wevolver.

An autonomous driving system takes the place of a human driver. Much like a human performing the driving task, a self-driving car needs to “see” the surrounding road conditions clearly, transmit that information to a “brain” that reasons about the optimal route, and finally execute a decision to control the vehicle’s path. The industry therefore generally regards “perception-decision-execution” as the three most critical autonomous vehicle systems, corresponding to three parts of the human body: eyes, brain, and limbs.
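The perception-decision-execution pipeline above can be pictured as a minimal control loop. This is purely an illustrative sketch, not any real vehicle stack; every class, function, and field name here is our own hypothetical shorthand.

```python
# Hypothetical sketch of the "perception-decision-execution" loop.
# "Eyes" -> perceive(), "brain" -> decide(), "limbs" -> execute().
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_ahead: bool
    distance_m: float

def perceive(sensor_reading: dict) -> Observation:
    """'Eyes': turn raw sensor data into a structured view of the road."""
    return Observation(
        obstacle_ahead=sensor_reading["lidar_hits"] > 0,
        distance_m=sensor_reading["range_m"],
    )

def decide(obs: Observation) -> str:
    """'Brain': choose an action from the perceived state."""
    if obs.obstacle_ahead and obs.distance_m < 30.0:
        return "brake"
    return "cruise"

def execute(action: str) -> str:
    """'Limbs': map the decision onto vehicle controls."""
    return {"brake": "apply brakes", "cruise": "hold speed"}[action]

# One tick of the loop:
action = execute(decide(perceive({"lidar_hits": 3, "range_m": 12.5})))
```

A real stack runs this loop many times per second, with each stage being a large subsystem of its own; the point is only the division of labor between the three stages.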

C. Industrial Trends

Based on industry trends research by McKinsey, the future of autonomous driving can be synthesized along four attributes: who owns the vehicle, where the vehicle operates, what is being transported, and the technology behind it.

We then applied these four attributes to analyze current autonomous vehicle brands and categorized them into three groups: fleet-owned with high autonomy, privately owned with partial autonomy, and privately owned with primarily driving assistance.

D. User Experience

How we interact with vehicles
On a daily basis, we interact with vehicles in various roles, both inside and outside of them. Some of the most common stakeholders include the driver, other vehicles, motorcyclists, bicyclists, and pedestrians. The shift towards autonomy must take into account the full spectrum of these interactions.

Current Approaches to User Experience
Let’s look at two major players in the field, Tesla and Waymo, as they have taken different approaches towards user experience.

Tesla is working on evolving what a car can do as a product; the brand is well known for its Autopilot and Full Self-Driving capability. Waymo, on the other hand, is trying to build an infrastructure of cars as a service: even if you don’t own a car, you can still enjoy the convenience brought by autonomous driving.

“Whether the car is the product or part of a service, to make autonomous vehicles work, users must feel good inside and outside of them. ”

THE CONCEPT — Trustworthy Personalities

When we think about a world filled with AI-driven vehicles and autonomous taxi services, things can quickly grow to feel futuristic, but also terrifying. What is the role of a driver when that driver is entirely non-human, and how do passengers cope with these changes? Are they happy, scared, nervous, or angry? Maybe even ambivalent at the new world opening before them? This is the world where our concept for Lyft, called OTTO, takes place. OTTO is meant to be a service for a world where fully autonomous ride-sharing is possible in many areas, but where passengers are still unsure how to feel about these technological advances.

The Problem — How do you establish trust?

One of the most consistently cited problems with driverless cars today is a lack of trust in the technology. Even with growing advances, potential customers still feel wary about not having a human behind the wheel and struggle to have faith in a technology that could endanger their own and others’ lives. Fear and mistrust of new methods of travel have historical precedent; similar reactions can be seen in both the popularization of automobiles in the early 20th century and the early days of ride-sharing with companies like Lyft. While advances in technology may help, faith in fully autonomous ride-sharing will lag years behind its creation. Despite this, many are still not addressing the problem of building trust.

Trusting drivers is not a new problem (picture by Clem Onojeghuo)

The Solution — Keeping it human

What is missing from the equation is a reimagining of how we experience emotion, connection, and reliability in drivers. The OTTO system seeks to address this problem for Lyft by providing scores of unique and evolving AI assistants, each with a distinct personality but consistent behavior. This concept stems from research showing how unique AI personalities can provide trust and comfort to users of voice assistants. For Lyft, a brand that prides itself on providing millions of unique interactions each day, this presents a series of business advantages. Because each ride is unique, it also creates a localized focal point for any negative experiences: with the perception that an AI operates autonomously from Lyft, rather than as part of one collective hivemind, a bad ride doesn’t poison every ride, or autonomous driving as a whole. Most importantly, it keeps the service anchored to its deeply human and personal approach to ridesharing. Even with fewer humans driving, Lyft could maintain a human feel.

OTTO In Action — Evolving for familiar care

The OTTO system will work as the next level of Lyft’s current experience. Lyft’s Amp will become home to each car’s unique OTTO AI, with a new screen and the capability to gesture, allowing human-like responses and informative animations that remain consistent between drivers. Presented through a combination of unique graphics, voices, and temperaments, each AI will be one of a kind. Just as you might recognize your Lyft driver or learn the names of bus drivers, customers will know right away who is driving them and experience a unique interaction.

Each OTTO can be identified by a unique graphic

The service will mimic today’s system, with many different drivers working each day, centered around the customer’s control. Through riders’ preferences, ratings, and personal interactions, each OTTO’s personality and actions will be shaped by customer and community preference. By sorting each AI personality into a matrix of five traits, we can create new personalities based on the most successful AI drivers and adjust for the least popular ones. The eventual result will be a unique driver for each locality who recognizes each passenger’s individual needs.
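To make the five-trait matrix concrete, here is a toy sketch of how new personalities could be seeded from the highest-rated drivers. The trait names, ratings, and averaging rule are all our own assumptions for illustration, not part of the OTTO design spec.

```python
# Hypothetical five-trait matrix: trait names are invented for this sketch.
TRAITS = ["warmth", "chattiness", "humor", "formality", "energy"]

def new_personality(drivers: list[dict], top_n: int = 2) -> dict:
    """Seed a new AI driver from the trait averages of the top-rated ones."""
    best = sorted(drivers, key=lambda d: d["rating"], reverse=True)[:top_n]
    return {t: round(sum(d["traits"][t] for d in best) / top_n, 2)
            for t in TRAITS}

# A toy pool of existing AI drivers with community ratings.
pool = [
    {"rating": 4.9, "traits": {"warmth": 0.9, "chattiness": 0.7,
                               "humor": 0.8, "formality": 0.2, "energy": 0.6}},
    {"rating": 4.7, "traits": {"warmth": 0.7, "chattiness": 0.9,
                               "humor": 0.6, "formality": 0.3, "energy": 0.8}},
    {"rating": 3.1, "traits": {"warmth": 0.2, "chattiness": 0.1,
                               "humor": 0.3, "formality": 0.9, "energy": 0.2}},
]
newcomer = new_personality(pool)  # averages the two top-rated drivers
```

The least popular driver simply drops out of the breeding pool, which is one simple way to "adjust for" unsuccessful personalities.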

While each driver has a baseline personality, each will adapt to a customer’s preferences and the context of the ride. For example, an AI driver named Monty may have a friendly dad personality, but when a customer has their settings set to “no conversation,” Monty will adapt to stay sweet while keeping from oversharing. Likewise, if Monty is driving someone on their commute, it might know to share a weather update for the evening, versus playing a motivation song when driving someone to the gym. Beyond what the customer has explicitly confirmed or what the time dictates, OTTOs will also learn from each rider’s past trips, like remembering exactly which song to play before the gym or what temperature is best on hotter days. Just like a real person, these personalities will present themselves differently as the customer and the context demand.
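The adaptation logic above can be sketched as a small function that overlays rider preferences and ride context on a baseline personality. "Monty," the trait values, and the context rules are assumptions taken from the examples in this section, not a specified behavior model.

```python
# Illustrative sketch: a baseline personality adapted per ride.
def adapt(base: dict, prefs: dict, context: str) -> dict:
    """Overlay rider preferences and ride context on a base personality."""
    persona = dict(base)
    if prefs.get("no_conversation"):
        # Stay sweet, but stop oversharing.
        persona["chattiness"] = min(persona["chattiness"], 0.2)
    # Context-specific conversational openers (hypothetical examples).
    persona["opener"] = {
        "commute": "evening weather update",
        "gym": "motivation song",
    }.get(context, "simple greeting")
    return persona

monty = {"warmth": 0.9, "chattiness": 0.8}
quiet_commute = adapt(monty, {"no_conversation": True}, "commute")
```

Learned behaviors (the remembered song, the preferred temperature) would feed into the same overlay as another layer of per-rider state.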

Consistent Patterns — Different Facets, Same Language

Amid all this personalization and all these personalities, it might be easy for customers to feel lost or unsure of exactly what each driver is doing. To combat this, we’ve created a uniform and consistent communication language expressed through the AI’s animations. This consistent yet adaptable model arose from the human face: something infinitely varied, yet with expressions that remain universally recognizable. While each AI may have a unique graphical element to speak through (called a “Facet”), no matter the variation, all will follow the same visuals and establish a predictable pattern of behavior.

Trusting Your Driver — Making AIs More Human

This concept leaves room for human connection and for the experience of ride-sharing to go beyond an indifferent journey from A to B. Most importantly, each of these unique AI identities builds trust while establishing a common language. By making the journey feel special yet familiar, customers feel the human side of the AI driver, relaxing in the hands of someone they can have faith in rather than a singular, disconnected voice.

As driving changes, so too must our relationships with drivers, be they human or not. In this process, however, we cannot leave behind the emotional aspects of being driven, even by a stranger, that make us feel safe. OTTO will change not only our trust in driverless rideshares but also why we trust them.

THE BRAND IDENTITY OF LYFT

Pink Flamingos: a brand identity and cultural phenomenon. Don Featherstone (LEFT) Divine (RIGHT)

To understand Lyft’s brand identity, you first need to understand kitsch: a German term for gaudy, lowbrow popular art. That prototypical-plastic-pink-flamingo sensibility and aggressively casual image are perhaps best personified by Lyft’s use of prosthetic facial hair on vehicles.

Yes, this really was a thing (http://craziestgadgets.com/)

Until its rebranding in 2016, the ride-sharing service’s mascot was a pink mustache (or “Carstache”). The iconic pink, frivolous ornament communicated a sort of carefree “I know I’m getting into a stranger’s car, but it sure would be weird for a serial killer to drive around in something so eye-catching” vibe. This playful and slightly absurd visual element remains part of Lyft’s DNA, even as the company continues to grapple with serious ethical problems and questionable labor and business practices.

Image: YouTube.com — “New Year, New Lyft.” Uploaded by Lyft, Jan 27, 2015

The evolution of the Carstache led to the release of the “Glowstache” in January 2015: a plastic, internally illuminated pink mustache which drivers could mount on the dashboard. The Glowstache was not long for this world, sadly; Lyft introduced its most recent (and still current) mascot in late 2016. The Lyft Amp is a smart display, capable of showing a rider’s name (reassuring confirmation of the driver’s authenticity and use of the app).

Groovy.

Brand Words

Adaptable

Personal

Authentic

Technology Choices

Evolution of the Amp

The Amp as the center of communication

Speaking to a monolithic vehicle from the inside feels impersonal. By housing the conversational UI in a physical device, we give riders a focal point to direct their interactions toward. This makes voice interactions less diffuse, enabling riders to distinguish between the car itself and the interface.

Concept rendering of next-generation interface — Lyft OTTO Amp

No Tablets on the Dash

While our team recognized the many benefits that a larger dashboard interface can bring to drivers in today’s vehicles (including vehicles with “assisted driving” or semi-autonomous modes), fully autonomous vehicles create a different context for its use. Most importantly, we consistently found it to be a blocker when trying to create experiences around trust. Today, riders typically glance at these large displays to check their location and the driver’s route; these actions are usually done out of a healthy level of caution and uncertainty. If we want to build trust, we need to remove these large monuments to anxiety and doubt, bringing the focus back to the joy of the journey.

Mockup rendering of OTTO Amp with inverted design for ceiling mount.

The center console in this mockup does not feature a display, but instead provides an open and internally lit compartment for riders. The display is mounted on the ceiling to provide front and backseat passengers with an accessible viewing angle.

For OTTO, it was also important to give the rider as much control and customizability as possible, which simply wasn’t feasible on a large, standardized display with limited access. Instead, we moved all vital interactions to the customer’s mobile phone, allowing them to customize and adjust as needed while still communicating with their OTTO through the Amp. Mobile phones also prevent users from ‘leaving their map,’ as happens with a dashboard display; their map is now always with them.

STORYBOARD

As the AI driver in an autonomous vehicle, the virtual assistant should accompany the passenger through the whole riding journey. As a team, we came up with 3 scenarios that we were interested in exploring further.

3 scenarios we came up with as a team

Designing storyboards

To begin with, each of us sketched out at least one user flow that we deemed important. The goal of creating storyboards was to visualize how, when, and why people would use voice to interact with cars.

storyboards we created

After sketching out different scenarios, we narrowed our focus to passengers’ interactions with AI drivers. In this phase, the virtual assistant can confirm the trip with the user, pick up the user, and navigate the passenger to the destination.

Developing the storyboard

We went through a second round on the script, which helped us narrow down the states for the virtual assistant. The story we wanted to develop was: “Jeral leaves her office and heads down Main Street, using Lyft to call a car to pick her up and take her to the gym.” When developing the script as a team, we tried to weave unique ‘human moments’ into the story to enrich the plot. For example, during the ride, the passenger remarks that she forgot something in her office. We also incorporated a multi-passenger scenario in the later part of the journey. We divided our script into 12 phases: 1. Send a request; 2. Get into car; 3. Greeting inside car; 4. Needs to change destination; 5. Destination reset; 6. Destination arrival; 7. Start of second pickup; 8. Enter with other passenger; 9. Greeting inside car; 10. Heading to destination; 11. Drop-off; 12. After drop-off.

Storyboard developed as a team

Journey Map

By this point, we had developed a storyboard with interactions, scripts, and scene images, which felt a little lengthy but covered a lot of detail. We then moved on to creating a journey map. Based on comments from Dina and Q, as well as takeaways from a lecture by HCII Prof. Paul Pangaro, we simplified the user flow by highlighting the functions we want to cover in the final video.

key functions
Journey Map

Storyboard — Mid Review

For the mid review, we created two stories showing interactions between a passenger and the AI driver. Story 1 is about a single passenger’s ride from the office to the gym. Story 2 showcases a trip involving multiple passengers. We also created several mobile UI screens to highlight features of the autonomous ride-sharing service.

--

Carol Ho
Design VUI for Autonomous Vehicle | Interaction Design Studio

Master of Interaction Design student at Carnegie Mellon University. Optimistic for humanity and enthusiastic for tech. Portfolio Page: https://caroltyho.com