“I hope that future science will lead us to understand how personalized audiology depends on the situations, the intent, and the individual capabilities, so that we can create hearing aids that truly adapt to the individual.”
I primarily work with data from fitting software, hearing aids, and hearing aid users. From that data we try to understand how to build personalized audiology for the future.
My first quest was to solve the segregation problem: the difficulty hearing aid users face with competing voices, for instance at family dinners and cocktail parties. I was interested in how machines can learn to recognize voices so that they can be separated, as if each conversation partner had their own microphone.
I found Eriksholm to be the perfect place to pursue this work.
I’m motivated by everything we learn through our research, and by transforming those learnings into tangible improvements for people living with hearing loss. And in between the quantum leaps, I am happy to settle for many small improvements.
Besides spending time with my family, I play floorball, a really fast-paced sport, and I play music.
Deep neural networks and their two impacts on hearing care. First, we showed that deep neural networks can learn specific voices so well that the separated outputs improve segregation for people with hearing loss. Second, the related breakthrough in October 2020 with the launch of Oticon More, where a key component of hearing aid signal processing was significantly upgraded with deep neural networks. It has been awesome to pursue the long-term potential of deep neural networks and to witness their immediate impact on products with smaller networks.