How’s my driving? Researchers measure trust in autonomous vehicles
Scientists at the Army Research Lab are using facial recognition technology to gauge the trust soldiers have in their robotic teammates, enabling developers to identify changes in trust levels that might indicate increased wariness in high-risk environments and allow them to calibrate systems so that they work efficiently with humans whose native trust levels vary.
In convoys of automated vehicles, some supervising soldiers tend to over-trust the automation, while others may mistrust the system from the start. Using a simulated autonomous driving scenario, the researchers found soldiers could be placed in one of four basic trust categories based on their demographics, personality traits, responses to uncertainty and initial perceptions about trust, stress and workload associated with interacting with automated vehicles.
For their experiment, the researchers had 24 participants, ages 18 to 65, perform a leader-follower driving task, operating a simulated vehicle on a two-lane, closed-circuit roadway. Participants had to navigate the road, avoid collisions and decide whether to engage their vehicle’s autonomous assistant to help them maintain speed and lane position relative to the leader.
Throughout the driving task, each participant’s face was recorded via a webcam mounted to the simulation screen, allowing the researchers to measure facial expressions on a frame-by-frame basis for each task and classify those expressions as indicating happiness, sadness, surprise, fear, anger or contempt.
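The article does not say which expression-recognition software the team used, so the Python sketch below is only a rough illustration of that frame-by-frame workflow: it reads a recorded webcam video with OpenCV and tallies labels from a placeholder classifier. The classify_emotion() function and the video file name are hypothetical stand-ins, not the researchers’ actual tooling.

```python
# Illustrative sketch only: tally per-frame expression labels for one recorded task.
# classify_emotion() is a hypothetical placeholder -- the article does not say which
# expression-recognition tool the ARL researchers actually used.
import random
from collections import Counter

import cv2  # OpenCV, for reading the recorded webcam video

EMOTIONS = ["happiness", "sadness", "surprise", "fear", "anger", "contempt"]


def classify_emotion(frame) -> str:
    """Placeholder classifier: returns a random label; swap in a real model here."""
    return random.choice(EMOTIONS)


def tally_expressions(video_path: str) -> Counter:
    """Count how many frames of the recording were labeled with each expression."""
    counts = Counter()
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the recording
            break
        counts[classify_emotion(frame)] += 1
    cap.release()
    return counts


if __name__ == "__main__":
    # Illustrative file name only.
    print(tally_expressions("participant_01_task.mp4"))
```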
Using a model-based clustering method, the researchers sorted participants into four groups that showed marked differences in their levels of subjective trust, and they described the four trust-based patterns in their paper. They concluded that trust calibration metrics may not be the same for all groups of people and that trust-based interventions, such as changes in user display features or communication of intent, “may not be necessary for all individuals, or may vary depending on group dynamics.”
One group, for example, had a high desire for change, and its members were open, extraverted and conscientious. “Tied with their low neuroticism scores, we expect this group to be novelty-seeking, be less impacted by stress or workload, and thus be more willing to accept and trust automation,” the researchers wrote. “When identifying trust calibration metrics, we expect members of this group to use the automation and be willing to hand off and take away control, but they may be prone to overtrust.”
Another group exhibited high emotional uncertainty, indicating a greater negative trust response when the automation’s reliability was low. A third group showed low cognitive uncertainty but high agreeableness and conscientiousness, suggesting its members preferred predictable, planned behavior but would be willing to give automation a chance. The fourth cluster included the youngest participants, who did not respond emotionally to uncertainty but tended not to seek novelty and preferred predictability and structure in uncertain conditions. The researchers said they expected those in that group “to have higher stress and workload while interacting with automation and to exhibit a general negative response to automation.”
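The article does not detail the paper’s exact clustering procedure, but model-based clustering of this kind is commonly done with Gaussian mixture models. The Python sketch below, built on scikit-learn with toy data and stand-in features in place of the participants’ actual pre-task measures, fits mixtures over a range of group counts, selects one by BIC and assigns each participant to a cluster.

```python
# Minimal sketch of model-based clustering with a Gaussian mixture model (scikit-learn).
# The feature columns are stand-ins for the pre-task measures the article mentions
# (demographics, personality traits, uncertainty responses); the paper's exact
# procedure is not detailed in the article.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 6))  # toy data: 24 participants x 6 standardized measures

# Fit mixtures with 1 to 6 components and keep the one with the lowest BIC.
models = [
    GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
    for k in range(1, 7)
]
best = min(models, key=lambda m: m.bic(X))

labels = best.predict(X)  # cluster assignment for each participant
print("chosen number of groups:", best.n_components)
print("cluster sizes:", np.bincount(labels))
```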
With human-autonomous teaming gaining momentum in the military, the researchers said they were interested in finding ways to evaluate affect-based trust, which refers to the “attitudinal state in which the individual makes attributions about the motives of the automation,” they said in their paper.
“It is often stated that for appropriate trust to be developed and effectively calibrated, an individual’s expectations must match the system’s actual behaviors,” said Catherine Neubauer, an ARL researcher and lead author of the paper. “We believe this approach extends the state-of-the-art by explicitly evaluating facial expressions as a way to quantify and calibrate affect-based trust in response to automation level capability and reliability,” she said. “It could also provide a method to understand the continuous variations in trust during a human-agent interaction, as opposed to the standard approach of participants self-reporting changes in trust after an interaction has occurred.”
ARL researchers will use facial expression analysis to help reveal when trust-based interventions are needed to improve soldier responses to automation, lab officials said. They also plan to study how group-based interventions can improve trust and team cohesion when soldiers must jointly perform high-consequence tasks with automated agents, such as the Next Generation Combat Vehicle.