Hearing

Our brains have a remarkable knack for picking out individual voices in a noisy environment, like a crowded coffee shop or a busy city street. This is something that even the most advanced hearing aids struggle to do. But now Columbia engineers are announcing an experimental technology that mimics the brain’s natural aptitude for detecting and amplifying any one voice from many.

Extraordinarily sensitive

The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison, said Nima Mesgarani, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author.

By creating a device that harnesses the power of the brain itself, the researchers hope their work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do. Modern hearing aids are excellent at amplifying speech while suppressing certain types of background noise, such as traffic, but they struggle to boost the volume of an individual voice over others.

Scientists call this the cocktail party problem, named after the cacophony of voices that blend together during loud parties. In crowded places like parties, hearing aids tend to amplify all speakers at once, said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia Engineering. This severely hinders a wearer’s ability to converse effectively, essentially isolating them from the people around them.

Speech separation algorithms

Using this knowledge, the team combined powerful speech separation algorithms with neural networks, complex mathematical models that imitate the brain’s natural computational abilities. They created a system that first separates out the voices of individual speakers from a group, and then compares each speaker’s voice to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener’s brain waves is then amplified over the rest.
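To make the separate-compare-amplify idea concrete, here is a minimal sketch of that pipeline. It is illustrative only, not the authors’ published implementation: the function names, the use of a simple amplitude envelope, and plain Pearson correlation as the similarity measure are all assumptions.

```python
# Illustrative sketch: pick the separated voice whose envelope best matches
# a brain-derived envelope, then amplify it in the remix. All names and the
# correlation-based matching are assumptions, not the published system.
import numpy as np

def envelope(signal: np.ndarray, frame: int = 160) -> np.ndarray:
    """Crude amplitude envelope: RMS energy over non-overlapping frames."""
    n = len(signal) // frame
    return np.sqrt((signal[: n * frame].reshape(n, frame) ** 2).mean(axis=1))

def select_attended(separated: list[np.ndarray],
                    neural_envelope: np.ndarray,
                    gain: float = 4.0) -> np.ndarray:
    """Amplify whichever separated voice best matches the listener's
    brain-derived envelope, then remix. Assumes all separated signals
    have the same length and sample rate."""
    scores = []
    for voice in separated:
        env = envelope(voice)
        m = min(len(env), len(neural_envelope))
        # Pearson correlation between the voice envelope and the neural one.
        scores.append(np.corrcoef(env[:m], neural_envelope[:m])[0, 1])
    attended = int(np.argmax(scores))
    mix = np.zeros_like(separated[0])
    for i, voice in enumerate(separated):
        mix += voice * (gain if i == attended else 1.0)
    return mix / max(gain, 1.0)  # normalize to avoid clipping
```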

The researchers published an earlier version of this system in 2017 that, while promising, had a key limitation: it had to be pretrained to recognize specific speakers. If you’re in a restaurant with your family, that device would recognize and decode those voices for you, explained Dr. Mesgarani. But as soon as a new person, such as the waiter, arrived, the system would fail.

The team’s algorithm tracked the patients’ attention as they listened to different speakers that they had not previously heard. When a patient focused on one speaker, the system automatically amplified that voice. When their attention shifted to a different speaker, the volume levels changed to reflect that shift.
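The paragraph above describes gains that follow the listener’s attention as it moves between previously unheard speakers. The sketch below shows, under the same assumptions as before, one way such tracking could work: decode attention in short windows from envelope correlations, and smooth the gain changes so the volume follows a shift in focus gradually rather than abruptly. The window size, smoothing factor, and gain values are hypothetical.

```python
# Illustrative attention tracking: per window, the speaker whose envelope
# best correlates with the neural envelope becomes the amplified target,
# and exponential smoothing eases the transition when attention shifts.
# Parameters and the correlation-based decoder are assumptions.
import numpy as np

def track_attention(voice_envelopes: list[np.ndarray],
                    neural_envelope: np.ndarray,
                    window: int = 50, alpha: float = 0.2,
                    gain: float = 4.0):
    """Yield one gain vector per time window, drifting toward whichever
    speaker currently matches the brain signal best. `window` is in
    envelope frames; `alpha` sets how quickly gains follow a shift."""
    gains = np.ones(len(voice_envelopes))
    n = min(min(len(e) for e in voice_envelopes), len(neural_envelope))
    for start in range(0, n - window + 1, window):
        stop = start + window
        scores = [np.corrcoef(e[start:stop],
                              neural_envelope[start:stop])[0, 1]
                  for e in voice_envelopes]
        target = np.ones_like(gains)
        target[int(np.argmax(scores))] = gain
        gains = (1 - alpha) * gains + alpha * target  # smooth the handoff
        yield gains.copy()
```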

Encouraged by their results, the researchers are now investigating how to transform this prototype into a noninvasive device that can be placed externally on the scalp or around the ear. They also hope to further improve and refine the algorithm so that it can function in a broader range of environments.