Reimagining in-car UX


Automobiles have evolved steadily, and with 17-inch touchscreen displays now inside cars, the way humans interact with the vehicle has changed too. A driver's primary focus is generally considered to be driving and navigating safely. Yet with features creeping into vehicles that let drivers even tweet behind the wheel, priorities while driving seem to have drifted. Self-driving cars are here, but not yet entirely: only a few cars on the road drive themselves, and only for limited stretches. The majority are still driven by humans and remain prone to human error. My intention was to explore the space of driver-machine interaction and find opportunities to improve existing systems.


The central consoles and instrument clusters in today's cars present an abundance of information that demands significant attention from drivers, which can be distracting and at times dangerous. Using multiple devices while concentrating on road signage, traffic lights, and other events that occur while driving is cognitively burdensome. This creates driver anxiety, which can lead to one mistake snowballing into multiple errors.


Contextual Inquiries

Three contextual inquiries were conducted in which beginner drivers were shadowed and observed. They were asked to drive as they normally would, including performing actions such as adjusting climate controls, changing music, and answering phone calls.

Semi-structured Interviews

I conducted semi-structured interviews with beginner drivers at the BMV to understand their behavior and emotional state while driving. This helped me get a sense of the difficulties faced by drivers who are new to driving.


Empathy Mapping

To put myself in people's shoes and understand their behavior and emotions while driving, I synthesized my findings into persona empathy maps tied to the specific people I observed. These personas correlate directly to the drivers who were shadowed.

Affinity Diagram

To synthesize all the pain points observed through contextual inquiries and semi-structured interviews, an affinity diagram was created to visually analyze the common themes generated through these sessions. 

Insights from Research

Anxiety while driving causes irritation, distraction, and errors. 

Systems should surface only minimal, essential information.

Proactive guides are better than voice-controlled systems. 

The cognitive load of making decisions while driving should be reduced.

Concepts Explored

Content Relevancy

The relevancy of information on the assistive panel decreases from left to right: the most driver-relevant information sits on the left portion of the interface, and less relevant information sits on the right.
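As a minimal sketch of this layout rule, the panel slots could simply be filled in order of a per-item relevance score. The item names and scores below are illustrative assumptions, not values from the actual design.

```python
# Hypothetical sketch: order assistive-panel items left-to-right by
# driver relevance. Item names and relevance scores are made up for
# illustration.

def layout_panel(items):
    """Return items ordered left-to-right, most driver-relevant first."""
    return sorted(items, key=lambda item: item["relevance"], reverse=True)

panel_items = [
    {"name": "media", "relevance": 0.2},
    {"name": "navigation", "relevance": 0.9},
    {"name": "hazard alert", "relevance": 1.0},
    {"name": "climate", "relevance": 0.4},
]

# Leftmost slot (0) gets the most relevant item.
for slot, item in enumerate(layout_panel(panel_items)):
    print(slot, item["name"])
```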

Do not disturb mode

Providing an unobtrusive notification system to reduce alerting when in drive mode. 
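One way to sketch this mode: while driving, only safety-critical notifications get through, and everything else is queued silently until the drive ends. The notification kinds and the "critical" set here are assumptions for illustration only.

```python
# Hypothetical sketch of a drive-mode do-not-disturb filter: critical
# notifications are delivered immediately; the rest are held quietly
# until the drive ends. Notification kinds are illustrative.

from collections import deque

class DriveModeNotifier:
    CRITICAL = {"collision_warning", "low_fuel", "navigation_turn"}

    def __init__(self):
        self.drive_mode = False
        self.queued = deque()

    def notify(self, kind, message):
        """Deliver immediately, or queue quietly while in drive mode."""
        if self.drive_mode and kind not in self.CRITICAL:
            self.queued.append((kind, message))
            return None  # suppressed: no alert shown to the driver
        return f"ALERT [{kind}]: {message}"

    def end_drive(self):
        """Leaving drive mode releases everything that was held back."""
        self.drive_mode = False
        held, self.queued = list(self.queued), deque()
        return held
```

The design choice is that suppressed notifications are deferred, not dropped, so the driver loses nothing by staying focused.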

Contextual Apps

The ability for the phone to switch modes so that only necessary apps are visible while driving. Think Android Auto, but without actually connecting to the car's system.

Parking Assistants - On Demand

A system that finds parking when the driver asks for spots nearby.

Parking Assistants - Auto Suggestive

A system that proactively suggests nearby parking without being asked.

Road Signage Assistance 

Cars communicating with each other at stop signs and negotiating who should proceed.

The Design

This Proactive Assistive Panel helps mitigate anxiety by keeping drivers informed about their situation and providing contextual help. The controls on the steering wheel help drivers keep their focus on the road and do not require them to take their hands off the wheel, which could be dangerous. The parking assistance can be triggered either by the system or by the driver; it helps find parking around an area, making it easier for drivers to navigate and park somewhere unfamiliar.

The system does not require full voice control, which research has shown to be risky. It requires only binary input from the user: the system prompts with actions it could perform based on the context or scenario while driving. This greatly reduces the driver's cognitive load in making decisions. Imagine not having to say "Hey Siri" or "Ok Google" and wait for a response, only to realize the voice recognition detected the wrong words.

Detailed Design

The Interactions

Conversational Binary Response 

From talking to drivers and watching people perform multiple tasks while driving, I noticed that most systems today have complex interactions with very little room for error. Touch targets are either too small to press accurately, or trivial actions require multiple convoluted steps. I wanted to keep the interactions as simple as possible and make the driver think as little as possible, hence the idea of a conversational interaction that requires only binary input from the driver.
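The interaction above can be sketched as a two-step loop: the system proposes one contextual action at a time, and the driver answers only yes or no (for instance via steering-wheel buttons). The contexts and prompt wording below are hypothetical examples, not the actual design's copy.

```python
# Minimal sketch of the conversational binary interaction: one proposal
# per context, answered with a single yes/no input. Contexts, prompts,
# and action strings are illustrative assumptions.

def propose(context):
    """Map a driving context to a single yes/no proposal, if any."""
    prompts = {
        "arriving": "Search for parking near your destination?",
        "incoming_call": "Answer the call on speaker?",
        "low_fuel": "Route to the nearest gas station?",
    }
    return prompts.get(context)

def respond(context, accepted):
    """Binary input only: True (yes button) or False (no button)."""
    if accepted:
        return f"Performing action for: {context}"
    return "Dismissed"

print(propose("arriving"))       # one clear question, no free-form speech
print(respond("arriving", True))
```

Because the driver never speaks or types, there is no recognition step to fail and no menu to navigate; the only decision is accept or dismiss.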

Context-driven Assistant 

Driving is a complex activity, and taking context into consideration is fundamental to a successful in-car experience. Google Now has cards that can identify where the car is parked, a nifty tool when one parks in a big lot. Finding parking when you reach a destination, routing via roads with minimal traffic, and similar features take context into account, in addition to learning your driving behavior over time.


Fully self-driving cars are emerging, and it is only a matter of time before they pervade the roads. The whole definition of context changes once cars start driving themselves: there would be a great opportunity to design for entertainment, or even to make the car a modular workplace during long commutes. That said, car consoles and clusters are still poorly designed, and a huge gap needs to be bridged in how current consoles work before self-driving cars take over.