Performance wing team researches human-machine trust

  • By Gina Marie Giardina
  • 711th Human Performance Wing
A research team in the 711th Human Performance Wing’s Airman Systems Directorate focuses on how humans make reliance decisions with technology, or, in other words, how humans develop, maintain and lose trust.

At first glance, this may seem like a simple problem, but Dr. Joseph Lyons, a Human Trust and Interaction Branch technical adviser, explained that there are many variables to consider when thinking about the entire trust process.

“We look at everything from individual differences -- things that are stable about each person such as personality, experiences and biases -- to personal preferences for interacting with people or machines, to machine performance and other characteristics,” Lyons said. “These are all things we bring to any situation and this is how we make decisions about whether or not to trust a machine.”

Lyons explained that a person’s various traits, paired with an initial impression of a machine, create information that then guides behavior and interaction with the technology. Trusting, he continued, is about the willingness to be vulnerable to that machine.

An everyday example is the automated cruise control in a car, Lyons said. “If you have it and use it, then you obviously trust it. If you have it, but you never use it, then that could be a form of distrust.”

Lyons said the key question is: Why does a person use it or not? He went on to explain that machines are being given more and more decision authority, and as they become more capable and take on more of that authority, appropriate reliance on these technologies becomes increasingly important.

“Studies have shown that the greater the autonomy in technology, the greater risk to trust when mistakes are made. This is why it is important to develop the appropriate trust of technology,” Lyons said. “Autopilot is an example. If a pilot flying a plane is totally dependent on autopilot and it makes a mistake, that pilot’s overreliance on the system may lead to an error in decision making."

Lyons’ team also looks at the technology itself to determine what types of interactions it has with the user, what information it displays, and its familiarity as perceived by the user.

“For familiarity, think about something like a GPS," Lyons said. "There are lots of different GPS’s out there now, but the first one that came out had to deal with a lot of calibration issues.

“But, as more and more systems came on board, people became more familiar with what a GPS was, to the point that they trust them without questioning,” he added. “In fact, today there are cases where people will follow GPS guidance into a lake. Clearly, that is an example of too much trust.”

Autonomous cars will likely be similar, Lyons predicted. “How familiar people are with the technologies makes a big difference in terms of how skeptical they will be.”

Another area of the team’s research is transparency.

“Transparency is the idea of developing shared awareness and shared intent between a human and a machine,” Lyons said. “It might sound funny to talk about the intent of a machine, but if we’re giving these things some level of decision-making autonomy, the intent -- even if just perceived -- of that system makes a big difference in how we interact with it.”

The purpose of pairing a machine or technology with a human is to add benefit and improve performance. While Lyons and his team research designs that work, they also identify designs that do not work and analyze why.

“The paper clip cartoon character that used to pop up on computer screens is a good general example of a design that did not add benefit,” Lyons said. “Most people were just annoyed with it because it had zero shared intent with its users and got in their way.”

But a design that does work is the Automatic Ground Collision Avoidance System, or Auto GCAS. Pioneered by a partnership between the Air Force Research Laboratory, NASA, the Air Force Test Center and Lockheed Martin, the system is designed to reduce the number of accidents caused by controlled flight into terrain, a leading cause of pilot fatalities. It briefly takes over the controls and corrects the flight path if a pilot becomes incapacitated, such as from G-force induced loss of consciousness.

But do pilots trust this system?

“We go out and try to gauge pilots’ trust in this new technology that just so happens to take full control of the pilot’s aircraft, which initially pilots did not really like," Lyons said. "But, Auto GCAS is out in the field and it has saved the lives of four pilots since 2014."

While the pilots might not have initially liked the idea of this technology, they’ve learned to appreciate that it can save their lives, Lyons said. "And not only that, but it does not interfere very often.”

Lyons said that part of his team’s focus with Auto GCAS is also to determine if and when the system does interfere, and to feed that information back to stakeholders so they can continue to improve the system’s effectiveness.

While the branch works with fielded systems like Auto GCAS, a considerable part of its research is experimental in nature.

“We will work to simulate a human-machine interaction of some kind and study the factors that shape trust in that context,” Lyons explained.

A few of the studies focus on areas such as the impact of multitasking and of different types of errors, as well as individual differences such as executive functioning, trait trust and suspicion, and automation schema, among many others, Lyons said. He noted that the team also looks at how software engineers develop trust in code, and that it is starting to explore the impact of tactile cues on trust.