AFRL 3-D audio research helps make the cockpit safer

Dr. Griffin Romigh tests 3-D audio software that spatially separates sound cues to mimic real-life human audio capabilities, Nov. 7, 2014, at Wright-Patterson Air Force Base, Ohio. The application allows operators in complex communication environments with multiple talking voices to significantly improve voice intelligibility and communication effectiveness. The technology, which consists primarily of software and stereo headphones, has potential low-cost, high-value application for both aviation and ground command and control communication systems. Romigh is a researcher at the Air Force Research Laboratory’s Human Effectiveness Directorate, Battlespace Acoustics Branch. (Air Force photo/Richard Eldridge)

WRIGHT-PATTERSON AIR FORCE BASE, Ohio (AFNS) -- Imagine yourself in a cockpit, flying a mission, listening to a multitude of critical voices delivering vital messages, all at the same time and from the same direction.

Now imagine the same environment, except that the voices are distinct and separate: you can not only identify who is speaking and interpret what each is saying, but hear each voice the way you would hear one person in a conversation at a party, distinctly, above the noise of the crowd.

The Air Force Research Laboratory’s Human Effectiveness Directorate, Battlespace Acoustics Branch, has developed 3-D sound technology that creates a sound environment mimicking the way the human body receives aural cues -- much like 3-D movies create the perception that the viewer is part of the movie.

The technology has proven so effective that the Air Force's Technology Transfer office recently collaborated with aviation audio control systems developer PS Engineering to grant the company an exclusive-use license, allowing PS Engineering to incorporate the multi-talker technology into its new audio system, the PMA450, for general aviation.

According to PS Engineering founder and CEO Mark Scheuer, AFRL's technology allows the company's digital audio interface, IntelliAudio, to place two communication channels in various positions within the stereo headset, making simultaneous radio signals sound as if they are coming from different locations -- essentially a 3-D sound environment.
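The article does not describe IntelliAudio's actual algorithm, but the general idea of placing a channel at a virtual position in a stereo headset can be sketched with interaural time and level differences (ITD/ILD), the cues the ear uses to locate a sound. The sample rate, head radius, and gain curve below are illustrative assumptions, not PS Engineering's or AFRL's implementation:

```python
import numpy as np

SAMPLE_RATE = 16_000     # Hz; illustrative value, not the product's rate
HEAD_RADIUS = 0.0875     # meters, roughly an average human head
SPEED_OF_SOUND = 343.0   # m/s

def spatialize(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Place a mono signal at an azimuth (positive = listener's right)
    using simple interaural time and level differences."""
    az = np.radians(azimuth_deg)
    # Woodworth approximation of the interaural time difference: r/c * (az + sin az)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay = int(round(abs(itd) * SAMPLE_RATE))   # far-ear lag, in samples
    # Crude level difference: attenuate the far ear as the source moves off-center
    far_gain = 1.0 - 0.5 * abs(np.sin(az))
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)       # (samples, 2) stereo

# Two "talkers" -- placeholder tones standing in for two radio channels
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
com1 = np.sin(2 * np.pi * 300 * t)
com2 = np.sin(2 * np.pi * 500 * t)

# Mix COM1 at 45 degrees left and COM2 at 45 degrees right of the listener
mix = spatialize(com1, -45.0) + spatialize(com2, 45.0)
```

Even this crude separation makes two simultaneous voices far easier to tell apart than a centered mono mix; a production system would use measured head-related transfer functions rather than these two cues alone.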

"The all-new (digital audio system) brings to the pilot a new invention, thanks to the cooperative efforts of the U.S. Air Force," Scheuer said.

The multi-talker technology grew out of the development of virtual audio display technology, according to Dr. Douglas Brungart, a former AFRL research scientist and key developer of the technology, now chief scientist of the National Military Audiology and Speech Center at Walter Reed National Military Medical Center in Bethesda, Maryland.

During his career at AFRL, and earlier as a student at MIT, Brungart studied the way listeners perceive sounds located near the head. That work eventually led to the discovery that listeners could easily separate a voice close to the head from one farther away, even when both talkers were located in the same direction from the listener. The discovery made it possible to develop a technology application that would spatially separate sound sources in a multi-talker speech display.

That technology has been adopted by PS Engineering and has potential application for a multitude of aviation and ground command and control systems, including air traffic control and remotely piloted vehicles.

"We've been excited about this technology for a long time, but previous applications have been limited by the relatively expensive hardware required to implement it," Brungart said. "However, as technology has advanced, the cost of implementing virtual audio has dropped dramatically."

According to Battlespace Acoustics technical advisor Dr. Brian Simpson, spatial separation of sound cues is critical to effective communication and speech intelligibility. Simpson and his team of researchers believe they have found a key to maximizing audio clarity: playing sound cues through a display so that each is perceived as coming from a different place -- the way humans listen to and perceive sound in the real world.

"The improvement is tremendous," Simpson said. "You can achieve greater communication effectiveness, reduce workload and, importantly, improve overall safety in flight operations. Even highly trained pilots, who are used to listening to multiple channels, can benefit from this technology, and the more complex the environment, the greater this benefit will be."

The technology is also intuitive and easy to learn, low-cost and extremely high in value, Simpson added. The research team hopes to see it integrated into future Air Force and commercial capabilities.

"I feel like we're on the tipping point," Simpson said. "As soon as one flying community gets it, everyone else will want it."