Machine learning predicts when background noise impairs hearing

Sounds good: machine learning could soon be used to create better hearing aids. (Shutterstock/FusionMS)

Machine learning algorithms could one day be used to improve speech recognition in hearing-impaired people, researchers in Germany have shown. Using a novel algorithm, Jana Roßbach and colleagues at Carl von Ossietzky University accurately predicted the noise levels at which both normal-hearing listeners and listeners with different degrees of hearing impairment would mishear more than 50% of words in a variety of noisy environments – an important test of hearing-aid efficacy.

The lives of many hearing-impaired people have been significantly improved by hearing aid algorithms, which digitize and process sounds before delivering an amplified version into the ear. A key challenge still faced by this technology is improving the devices’ ability to differentiate between human speech and background noise – something that is done using digital signal-processing algorithms.

Researchers often use listening experiments to evaluate the ability of hearing aid algorithms to recognize speech. The aim of these tests is to determine the level of noise at which hearing aid users recognize only half of the words spoken to them (a quantity known as the speech reception threshold). However, this approach is expensive and time-consuming, and it cannot easily be adapted to account for different acoustic environments or for users with different levels of hearing loss.
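To illustrate the quantity such experiments estimate, the sketch below fits a logistic psychometric function to word-recognition scores measured at several signal-to-noise ratios and reads off the SNR at which 50% of words are recognized. This is a generic illustration, not the authors' code, and all the numbers are made up.

```python
# Minimal sketch (not the authors' code): estimating a speech reception
# threshold (SRT) from word-recognition scores measured at several
# signal-to-noise ratios. All data below are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr_db, srt_db, slope):
    """Logistic psychometric function: fraction of words recognized vs SNR."""
    return 1.0 / (1.0 + np.exp(-(snr_db - srt_db) / slope))

# Hypothetical measurements: SNR in dB and fraction of words a listener got right
snr_db = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
score = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])

# Fit the curve; the SRT is the SNR at which half of the words are recognized
(srt_db, slope), _ = curve_fit(psychometric, snr_db, score, p0=[-5.0, 2.0])
print(f"Estimated SRT: {srt_db:.1f} dB SNR (50% word recognition)")
```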

Deep machine learning

In their study, Roßbach’s team used a human speech-recognition model based on deep machine learning, in which multiple processing layers extract progressively higher-level features from raw input data. Combined with conventional amplitude-enhancing algorithms, the model could extract phonemes – the units of sound that form the building blocks of words.
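As a rough picture of what such a model does, the sketch below builds a frame-wise phoneme classifier in PyTorch that maps spectrogram frames to a probability distribution over phoneme classes. The layer sizes, feature dimension and phoneme count are illustrative assumptions, not values from the published model.

```python
# Minimal sketch, not the published model: a frame-wise phoneme classifier.
# It maps log-mel spectrogram frames to per-frame phoneme probabilities.
import torch
import torch.nn as nn

N_MEL_BANDS = 40   # features per spectrogram frame (assumed)
N_PHONEMES = 40    # size of the phoneme inventory (assumed)

phoneme_net = nn.Sequential(
    nn.Linear(N_MEL_BANDS, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_PHONEMES),  # one logit per phoneme class
)

# A batch of dummy spectrogram frames stands in for real audio features
frames = torch.randn(8, N_MEL_BANDS)
phoneme_probs = torch.softmax(phoneme_net(frames), dim=-1)
print(phoneme_probs.shape)  # torch.Size([8, 40]): per-frame phoneme posteriors
```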

To train their algorithm, the researchers used recordings of randomly selected basic sentences produced by ten male and ten female speakers. They then masked this speech with eight different noise signals, including a simple constant noise and a second talker speaking over the target speaker. The team also degraded the recordings to different degrees to mimic how they would sound to people with different levels of hearing impairment.
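The masking step amounts to mixing clean speech with noise at a chosen signal-to-noise ratio. The sketch below shows one common way to do this; it is an assumed preprocessing step for illustration, not the authors' pipeline, and the dummy random signals stand in for the real recordings.

```python
# Minimal sketch (assumed preprocessing, not the authors' pipeline): masking a
# clean speech signal with noise at a chosen signal-to-noise ratio.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    noise_scaled = noise * np.sqrt(target_noise_power / noise_power)
    return speech + noise_scaled

fs = 16_000                    # assumed sampling rate
speech = np.random.randn(fs)   # placeholder for a clean recorded sentence
noise = np.random.randn(fs)    # placeholder for one of the eight maskers
noisy = mix_at_snr(speech, noise, snr_db=-5.0)
```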

Noise threshold

Roßbach and colleagues then played the masked recordings to participants with normal hearing and to participants with different degrees of age-related hearing loss. After asking the participants to write down the words they heard, the researchers determined the noise threshold at which each listener misheard more than 50% of the words. As the team hoped, the responses of participants with different hearing abilities closely matched the predictions of the machine learning model, to within an error of just 2 dB.

The researchers still face several challenges before their algorithm can be used to improve practical hearing aids. For now, the technology cannot identify which words were spoken in the speech it flags as misheard, which means it cannot accurately reconstruct the correct phonemes within the amplified sounds produced by hearing aids.

In future research, the team will adapt their technique to maximize the intelligibility of speech for any hearing-impaired person. If successful, the approach could eventually be implemented in hearing aids tailored to the needs of individual users.

The research is described in The Journal of the Acoustical Society of America.
