Explanations and Expectations: Trust Building in Automated Vehicles
Michigan Autonomous Vehicle Research Intergroup Collaboration (MAVRIC)
Autonomous Vehicles UX Researcher
5/1/17 - 9/1/17

Published Paper: Presented at the Human-Robot Interaction Conference, 2018
Overview
My team and I designed a research study to determine whether the timing and content of messages an automated vehicle communicates to its driver affect the driver's trust in the vehicle. Four driving conditions were tested: the vehicle told the driver what it was about to do before acting, the vehicle told the driver after it had acted, the vehicle acted only with the driver's permission, and the vehicle provided no explanation at all.
Problems
Automakers continue to introduce vehicles with increasing levels of automation, and this trend shows no sign of slowing. Even so, consumers in many cases report that they do not trust partially or fully automated vehicles and are unwilling to adopt and use this technology.
Hypotheses
1. When the automated vehicle provides an explanation before it takes action, rather than after, humans will have greater trust in the vehicle.
2. When drivers are given the option to decide whether the vehicle will take action, they will trust the vehicle more than when it only provides an explanation.
Study Design
Interview Scripts
Literature Review

Questionnaire

Vehicle Simulator

Driving Scenarios

Physiological Metrics
Heart Rate

Heart Rate Variability

Galvanic Skin Response

Eye-tracking
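
For context, below is a minimal sketch of how these physiological signals might be summarized for each participant and driving condition. The specific metrics (mean heart rate from R-R intervals, RMSSD as the heart rate variability measure, mean skin conductance) and all numbers are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Hypothetical sketch: summarizing physiological signals per driving condition.
# The choice of RMSSD for HRV and all example values are assumptions,
# not details reported by the study.
import numpy as np

def heart_rate_bpm(rr_intervals_ms):
    """Mean heart rate (beats per minute) from R-R intervals in milliseconds."""
    return 60000.0 / np.mean(rr_intervals_ms)

def hrv_rmssd(rr_intervals_ms):
    """RMSSD, a common time-domain heart rate variability metric."""
    diffs = np.diff(rr_intervals_ms)
    return np.sqrt(np.mean(diffs ** 2))

def mean_gsr(gsr_microsiemens):
    """Mean skin conductance level over a driving scenario."""
    return np.mean(gsr_microsiemens)

# Example with made-up values for one participant in one driving condition
rr = np.array([810.0, 795.0, 820.0, 805.0, 790.0])   # ms between heartbeats
gsr = np.array([2.1, 2.3, 2.2, 2.6, 2.4])            # microsiemens
print(heart_rate_bpm(rr), hrv_rmssd(rr), mean_gsr(gsr))
```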
Preliminary Results
The study is still underway; eight subjects have participated so far. Though not conclusive at this point, the early results suggest that explanations provided before the automated vehicle takes action may lead to greater human trust, while giving the human the option to approve or reject the vehicle's action may not.
