
Audio engineer awarded funding to improve and personalize hearing devices and headphones

Because human ears are all different, Hwan Shim is using technology to personalize audio experiences.

Shim, an assistant professor of electrical and computer engineering technology, is developing new audio technology that could reduce extraneous sound in noisy environments, improving applications ranging from arts performances to personal devices such as hearing aids.

“We often make technology and equipment with ‘average’ ears in mind,” said Shim. “But what if we can personalize this technology when we wear certain headphones or other devices? What if we can add personal effects that can still be natural and more spatial? At RIT, we are pushing this field.”

Shim was recently awarded National Institutes of Health funding of nearly $750,000 for “Semantic-based auditory attention decoder using large language models,” a project to determine how individuals distinguish sounds and how the brain helps ‘censor’ sound that is non-essential to individuals, including those who need devices to improve hearing.

Addressing sensorineural hearing loss requires more than amplification, Shim explained; even advanced auditory devices such as cochlear implants still have room to improve in differentiating speech from background noise.

His project, which began in August, could advance this speech-in-noise perception by mimicking the brain’s ability to discern and decode auditory messaging and sounds. His research team will use machine learning, integrating large language models into EEG-based selective attention decoding, to improve hearing devices.
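The article does not detail the team’s models, but a common baseline for EEG-based selective attention decoding is linear stimulus reconstruction: a decoder maps time-lagged EEG to an estimate of the attended speech envelope, and the talker whose envelope correlates best with that estimate is taken as the focus of attention. The sketch below assumes that baseline; the function names, lag count, and ridge penalty are illustrative, not the lab’s actual pipeline.

```python
# Illustrative correlation-based auditory attention decoding (AAD) baseline.
# Assumed data shapes: eeg is (samples x channels); envelopes are (samples,).
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples x channels*lags)."""
    n, c = eeg.shape
    out = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        out[k:, k * c:(k + 1) * c] = eeg[:n - k]
    return out

def train_decoder(eeg, attended_env, n_lags=32, ridge=1e3):
    """Fit a ridge-regularized backward model: lagged EEG -> attended envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, weights, n_lags=32):
    """Reconstruct the envelope from EEG and pick the better-correlated talker."""
    rec = lagged(eeg, n_lags) @ weights
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return "A" if corr(rec, env_a) > corr(rec, env_b) else "B"
```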


Traci Westcott/RIT

Biomedical engineering undergraduate Danny Teets, part of the audio engineering research team, wears a prototype EEG device to explore how the brain censors sound in noisy environments.

Today’s audio devices are sophisticated, but some underperform in noisy or distracting environments. To address these challenges, Shim and his research group in the Music and Audio Cognition Laboratory, based in RIT’s College of Engineering Technology, developed a prototype headset to provide neurofeedback information. The device is being used to record signals indicating how the brain recognizes needed information and separates out unwanted distractions.
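The article does not specify the headset’s signal chain, but raw EEG is typically bandpass-filtered to the low-frequency range that tracks speech envelopes, then downsampled, before any attention decoding. A minimal sketch of that kind of preprocessing follows; the sampling rates, filter band, and order are common choices, not the Music and Audio Cognition Laboratory’s published specs.

```python
# Illustrative EEG preprocessing: bandpass to envelope-tracking frequencies,
# then decimate. Rates and band edges are assumptions, not the lab's settings.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_eeg(raw, fs=512, band=(1.0, 9.0), target_fs=64):
    """Bandpass raw EEG (samples x channels) and downsample to target_fs."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw, axis=0)   # zero-phase filtering
    return decimate(filtered, fs // target_fs, axis=0)
```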

Shim’s work leverages Active Field Control (AFC), an acoustic enhancement and immersive sound technology developed by Yamaha Corp. and RIT engineering colleague Sungyoung Kim that is used in live performance spaces, such as auditoriums, and in virtual spaces, including virtual reality and gaming systems. Shim’s early doctoral research focused on spatial impressions, including the reflections and reverberation that underpin such technologies. Coupled with AFC, he continues to explore spatial audio in various contexts, and these systems allow his research team to recreate realistic, noisy environments in which to develop and test individualized selective attention decoding techniques.
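AFC itself is proprietary, but the underlying idea of recreating a realistic, reverberant test scene can be illustrated simply: convolve dry speech with a room impulse response and mix in background noise at a chosen signal-to-noise ratio. The sketch below uses a synthetic exponential-decay impulse response as a stand-in for a measured one; all parameter values are assumptions for illustration.

```python
# Illustrative simulation of a reverberant, noisy listening scene.
# Not Yamaha's AFC; a generic RIR-convolution approach with assumed values.
import numpy as np

def synth_rir(fs=16000, rt60=0.4, length_s=0.5, seed=0):
    """Toy room impulse response: exponentially decaying noise tail."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * length_s)) / fs
    return rng.standard_normal(t.size) * np.exp(-6.9 * t / rt60)  # ~-60 dB at rt60

def make_scene(speech, noise, rir, snr_db=0.0):
    """Convolve speech with the RIR, then add noise scaled to the target SNR."""
    wet = np.convolve(speech, rir)[:speech.size]
    noise = noise[:speech.size]
    gain = np.sqrt((wet**2).mean() / ((noise**2).mean() * 10 ** (snr_db / 10)))
    return wet + gain * noise
```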

“The state of the industry is in immersive audio,” said Shim, who is an expert in applied acoustics, auditory cognition, and neuroscience, and who has industry experience as an engineer at Samsung. “Using field control processors, we can actively change the sound space, for example, during a live performance. We are also seeing changes in listening habits, and we are making better technology for immersive audio to give a more natural feeling of sound.”

Shim said that although today’s systems are maturing, there are still gaps and limitations in presenting realistic, yet enhanced, sound.

Over the course of the project, the team will work to advance three specific areas: improvements to their prototype device; expansion of the training program used to distinguish sounds; and refinements to the analytical methods applied once data from the system are captured. Shim will also explore how individuals contextualize information, including sound, and how audio systems can then incorporate these distinctions. This is an engineering challenge Shim sees as a natural evolution of audio technologies that have progressed from early recording devices to those with the most sophisticated noise-reducing functions.

“We know that people have their own experiences and intimacy with sound. They can perceive differences,” he said. “We also know that the technology can decode the target speech from noise. Our idea is to clarify meaning of language to be more context-based. We are still figuring out what is going on in the brain. This can be next generation technology.”
