Lars Bramsløw

Research Engineer, PhD, Project Leader


mail@eriksholm.com

Does Deep Neural Network (DNN) separation of voices help hearing aid users segregate speakers?

Eriksholm Research Centre is the first to show a clear enhancement in performance when hearing-impaired listeners are presented with voices separated by means of a DNN.

Deep neural network (DNN) algorithms can recognize and separate known voices and thereby help the user segregate two different speakers. While our collaborator Tampere University has the expertise in developing the algorithms that make the voice separation possible, Eriksholm Research Centre was able to run tests to find out whether the algorithm worked for hearing aid users in practice as expected in theory.
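To give a flavour of how such separation works in principle: a common approach is for the network to estimate a time-frequency mask for each known voice, which is then applied to the spectrum of the mixture. The minimal sketch below is a simplification and is not the Tampere/Eriksholm algorithm; it uses two pure tones in place of voices and an oracle ratio mask (computed from the known sources) in place of a trained DNN.

```python
import numpy as np

# Two "voices" (here: pure tones at different frequencies) mixed together.
fs = 8000                          # sample rate in Hz
t = np.arange(fs) / fs             # one second of signal
s1 = np.sin(2 * np.pi * 440 * t)   # source 1
s2 = np.sin(2 * np.pi * 1000 * t)  # source 2
mix = s1 + s2

# Ratio mask in the frequency domain. Here it is computed from the known
# sources (an "oracle" mask); in a real system a trained DNN estimates it
# from the mixture alone.
M = np.fft.rfft(mix)
S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
mask1 = np.abs(S1) / (np.abs(S1) + np.abs(S2) + 1e-12)

# Apply the mask (and its complement) to the mixture and reconstruct.
est1 = np.fft.irfft(mask1 * M, n=len(mix))
est2 = np.fft.irfft((1 - mask1) * M, n=len(mix))

# Each estimate should correlate strongly with its original source.
corr1 = np.corrcoef(est1, s1)[0, 1]
corr2 = np.corrcoef(est2, s2)[0, 1]
```

Because the two tones occupy distinct frequency bins, the mask recovers them almost perfectly; real overlapping voices are much harder, which is exactly where the trained network earns its keep.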

To do so, the test subjects had to pay attention to two voices speaking at the same time – a new test based on the so-called Danish Hearing In Noise Test (HINT). The tests showed that when the DNN was used to separate the voices, the test subjects performed better than when it was not. This is a very important result, since it has never before been documented.


Results

According to the study, the DNN provides a 13-percentage-point benefit for segregation of speech. This is achieved by presenting the two separated voices to the two ears separately, meaning that the user has all the information and can attend as desired.

When the DNN is used for traditional separation, where the user listens to one separated voice and thus must choose whom to listen to, the benefit was shown to be no less than 37 percentage points.

Right now, the system only works on two known voices, but over time it is intended to gather a whole library of known voices to make it easier for hearing aid users to segregate known voices, e.g. family members and colleagues.

Listen to the sound bites from the test

To hear the difference between the two sound clips as clearly as possible, please put on headphones.

[iframe src="//wdh.23video.com/v.ihtml/player.html?token=d479084e963e3a7bc6877670f0a1b4a8&source=embed&photo_id=26902200" width="900" height="655" frameborder="0" border="0" scrolling="no" allowfullscreen="1" mozallowfullscreen="1" webkitallowfullscreen="1" gesture="media"][/iframe]

In this sound clip, two voices are competing with no use of DNN.

[iframe src="//wdh.23video.com/v.ihtml/player.html?token=a5f9ac328bf831f5ea59d1cf68177828&source=embed&photo_id=26992849" width="900" height="655" frameborder="0" border="0" scrolling="no" allowfullscreen="1" mozallowfullscreen="1" webkitallowfullscreen="1" gesture="media"][/iframe]

This sound clip contains the same two competing voices – but they are now separated using deep neural networks and presented to the left and right ears.

Watch our Facebook Live lecture with Lars Bramsløw

Research engineer Lars Bramsløw gave a Facebook Live lecture on segregation of voices using DNN in December 2017. Watch it here.

https://wdh.23video.com/secret/28202894/e99573bf6c91803e7f85729e0d7d864f

Further reading:

Bramsløw L, Naithani G, Hafez A, Barker T, Pontoppidan NH, Virtanen T (2018). Improving competing voices segregation for hearing impaired listeners using a low-latency deep neural network (submitted). The Journal of the Acoustical Society of America.