Annual Report

Director's Report

We can all sense how fast new technologies evolve around us, and this accelerating technological change will impact us all in one way or another.

You may argue that, as Niels Bohr said, predictions are difficult, particularly about the future. But technological developments are often more predictable than one might believe, because they follow exponential growth curves.

So which technological developments will have such a big influence on the audiology of the future?

Here is our best guess:

1. Personal digital assistants

Amazon’s Alexa and Apple’s Siri are well known and developing rapidly thanks to breakthroughs in the underlying technology: artificial intelligence. You may well imagine that we will all soon use such “digital butlers” for many daily routines and purposes. Imagine what it might mean for future hearing instrument users to have such a digital assistant in their ear, always available and helping them make the best of their daily life: able to communicate and receive all the advice and assistance they ask for.

2. Data is called the new “crude oil”

Big Data, combined with the capabilities of artificial intelligence, is a game changer. Hearing instruments of the future will be in pole position for sensing our bio data. In earlier Eriksholm Research Centre reports, we talked about the progress in integrating sensors such as EEG, pulse and temperature measurement, skin resistance, accelerometers, fNIRS and more into the hearing instrument. These body-worn sensors will help to constantly adapt hearing instruments to the needs of their users. Together with data from the hearing care professional, medical records and so on, they will enable true personalisation of hearing healthcare services around the clock. Data fusion in a “sky clinic” will enable services of a completely new quality, allow prediction and prevention of diseases, and thereby enable us to stay fit and healthy longer.

3. Hearables

Hearables are hearing instruments which do not serve a medical purpose such as compensating for hearing impairment. Hearables will be used, for example, as the personal butler I mentioned above. They will have “augmented reality” functions, like the famous Babel fish, performing instant, synchronous translation. Hearables are expected to have a growth potential of up to hundreds of millions of devices per year. Some consider them competition for hearing instruments, because they will automatically adapt to their users and thereby compensate for hearing loss without the user even being aware of it. However, they will also create new opportunities, such as removing the often-quoted stigma of hearing instruments. It will become “cool” to use these hearables for all kinds of purposes.

4. Moore’s law

Probably the best known of all technology growth curves. Gordon Moore of Intel observed in 1975 that computing power was doubling roughly every two years. It has since often been declared that this development will come to an end, yet it still continues. It will give us the computing power to miniaturise hearing instruments and at the same time extend their sensor functions far beyond today’s imagination.

5. More for less

Healthcare systems all over the world are under heavy cost pressure. Ageing societies and new medical treatments are driving costs up. This creates a need to strengthen prevention, to help people with chronic conditions stay active and independent longer, and to follow up on outcome measures. These requirements in turn increase the need for body-worn sensors, which enable us to collect bio data for predicting health conditions and measuring the outcome of treatments. Increasingly, healthcare providers will demand proof of efficacy for expensive treatments.

Here again, hearing instruments as body-worn sensors will be in pole position to provide such services.

6. Brain research

The US and the EU are spending 2 billion USD on brain research. China has reported the launch of an even more ambitious programme with a volume of 3 billion USD. These programmes aim at basic research to better understand how the human brain works, and at the same time at applied science to identify treatment paths for all kinds of brain disorders. This comprises exciting research fields such as sensory substitution, the Human Intranet and brain-computer interfaces. It is amazing how plastic the human brain is. It has been shown, for example, that blind patients can “hear to see”: a kind of “sonar” sound translates visual input into sound patterns. Within hours of training, the blind persons’ brains adapted so that their visual cortex reacted. They could literally see their surroundings via this acoustic pattern. Knowing that we indeed hear with our brain, this research into the human brain opens up many fascinating opportunities for the future treatment of hearing impairments of all kinds.

As these six trends are megatrends and quite robust, it is not that difficult to “connect the dots” and create a vision of what hearing healthcare may look like in 10 years. It will become much more personalised, predictive, preventive and participatory: the 4 Ps of healthcare.

These visions are the driving force for our research here at Eriksholm. Diving into the subsequent chapters, you will see how we translate this vision into exciting research in our three areas: Augmented Hearing, Cognitive Hearing Science, and eHealth.

The future is what we make of it!

- Uwe Hermann


Augmented Hearing

The group’s new name, Augmented Hearing, puts a focus on the future of hearing care and the services delivered by hearing devices. It describes the analysis and optimised presentation of surrounding sounds that enhances clarity and independent object formation in the brain. Augmentation goes beyond compensation: Why settle for anything less than super-human hearing? Future Augmented Hearing services start with investigating the real problems of people with hearing loss and developing solutions with artificial intelligence and deep learning that solve those problems. The accelerating development of new deep learning methods, and the seemingly near future in which trained deep neural networks become available in hearing devices, are a constant motivation for pursuing this journey.

The research area is led by Niels Henrik Pontoppidan. Read more about Augmented Hearing here.

Exciting progress in 2017

Analysis of small and larger data sets led in 2017 to fascinating insights that will alter future hearing care. People with normal hearing can filter out noise and other voices, and thus perceive competing voices as separate streams; here, people with hearing loss struggle. In 1953, Colin Cherry described the cocktail party problem and even speculated: "What if hearing devices could separate two voices and present one in each ear?"

With Professor Tuomas Virtanen’s group at Tampere University of Technology, Finland, we demonstrated that segregation enhancement with deep-learning-based separation of competing voices leads to improved speech recognition for people with hearing loss!

This is remarkable progress compared to earlier work, where the quality of separation was not sufficient for people with hearing loss even when the algorithms improved technical metrics such as signal-to-noise ratio. The benefit was demonstrated using the Competing Voices Test, which Eriksholm has developed and refined over the last couple of years. In this test, participants listen to two competing voices without knowing in advance which of the two to repeat. People with hearing loss improved their speech recognition of the target voice from 50% to 63%.

Deep learning algorithms that learn separation from mixtures of two clean speech signals provide the benefit; remarkably, the learning does not rely on any known speech parameters.

Even more remarkably, deep learning needs only a few minutes of speech from two voices, mixed together, to learn how to separate those two voices. In real life, this segregation enhancement allows people with hearing loss to react to their own name or to keywords while being engaged in an ongoing discussion.
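
To make the idea concrete, here is a minimal sketch, not the Eriksholm/Tampere implementation, of how a supervised training example for voice separation can be built from two clean recordings: the voices are mixed, and ideal-ratio-mask targets are derived from the clean spectrograms. The sample rate, window length, and stand-in signals are illustrative assumptions.

```python
# Minimal sketch (not the Eriksholm/Tampere implementation): building a
# supervised training example for voice separation from two clean recordings.
# Sample rate, window length, and signals are illustrative assumptions.
import numpy as np
from scipy.signal import stft, istft

FS = 16000          # sample rate (Hz), assumed
N_PER_SEG = 512     # STFT window length, assumed

def training_example(voice_a: np.ndarray, voice_b: np.ndarray):
    """Mix two clean voices and derive ideal-ratio-mask targets.

    A network would be trained to predict these masks from the mixture
    spectrogram; applying a predicted mask to the mixture and inverting
    the STFT yields one separated voice.
    """
    n = min(len(voice_a), len(voice_b))
    a, b = voice_a[:n], voice_b[:n]
    mixture = a + b

    _, _, A = stft(a, fs=FS, nperseg=N_PER_SEG)
    _, _, B = stft(b, fs=FS, nperseg=N_PER_SEG)
    _, _, M = stft(mixture, fs=FS, nperseg=N_PER_SEG)

    eps = 1e-8
    mask_a = np.abs(A) / (np.abs(A) + np.abs(B) + eps)   # target mask for voice A
    mask_b = 1.0 - mask_a                                # target mask for voice B

    # "Oracle" reconstruction: what a perfect mask estimate would recover.
    _, a_hat = istft(mask_a * M, fs=FS, nperseg=N_PER_SEG)
    return mixture, mask_a, mask_b, a_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in signals; in practice these would be a few minutes of clean speech.
    va, vb = rng.standard_normal(FS * 3), rng.standard_normal(FS * 3)
    mixture, mask_a, mask_b, a_hat = training_example(va, vb)
    print(mixture.shape, mask_a.shape, a_hat.shape)
```

Conceptually, a deep neural network is then trained to predict such masks from the mixture alone, and at run time the predicted mask is applied to separate the attended voice.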

The results show that segregation improves the ability of people with hearing loss to maintain two competing voices as separate sound streams and to keep the independent information separated. The technology is not yet ready for implementation in hearing devices, but demonstrating the benefit in the lab is the important first step.


EVOTION - Big data supporting public health policies

Together with our 12 partners in the European Horizon 2020 project EVOTION, we have been preparing the components of the EVOTION platform during the first year of the project. A special kind of Oticon hearing aid (EVOTION 12) and Genie fitting software have been developed for the EVOTION project. At the same time, our partners have been busy with, for example, ethics approvals in two countries, definition of protocols, development of an app for the users, information materials for hearing aid users, development of a backend infrastructure, implementation of secure data storage and transmission including integration with hospital data systems and anonymisation of data, and development of a language for specifying public policy making.

The focal point of this has been the initiation of the clinical validation at four clinical sites in London, UK, and Athens, Greece, with more than 1,000 hearing aid users. The first data has been collected, and together with our EVOTION partners we look forward to analysing the data and sharing insights during the final two years of the project.

Read more about the EVOTION project

Together with DTU Compute and Copenhagen Centre for Health Technology, we conducted a validation study with the EVOTION hearing aids. The EVOTION project hearing aids generate data on the sound environments and which programs the hearing aid users select at what time. Given four programs that combine different levels of help from spatial noise reduction with timbre contrasts, the data shows signs of exploration and convergence towards personal preferences.
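
As a rough illustration of how such logging data could be summarised, the sketch below computes weekly program-usage shares per user; a share drifting towards one program over the weeks would be the kind of convergence towards personal preferences described above. The tabular layout and column names (user_id, timestamp, program) are assumptions, not the EVOTION data format.

```python
# Minimal sketch (hypothetical data layout): summarising how often each of the
# four hearing-aid programs is selected per week, per user, to look for
# convergence towards personal preferences. Column names are assumptions.
import pandas as pd

def weekly_program_shares(log: pd.DataFrame) -> pd.DataFrame:
    """log columns: user_id, timestamp, program (1-4)."""
    log = log.copy()
    log["week"] = pd.to_datetime(log["timestamp"]).dt.isocalendar().week
    counts = (log.groupby(["user_id", "week", "program"])
                 .size().rename("n").reset_index())
    totals = counts.groupby(["user_id", "week"])["n"].transform("sum")
    counts["share"] = counts["n"] / totals
    # A user "converging" shows one program's share growing towards 1 over weeks.
    return counts.pivot_table(index=["user_id", "week"], columns="program",
                              values="share", fill_value=0.0)

if __name__ == "__main__":
    demo = pd.DataFrame({
        "user_id": [1, 1, 1, 1, 1, 1],
        "timestamp": ["2017-05-01", "2017-05-02", "2017-05-03",
                      "2017-05-08", "2017-05-09", "2017-05-10"],
        "program": [1, 2, 3, 2, 2, 2],
    })
    print(weekly_program_shares(demo))
```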

From the lab to real environments

Our progress on speech segregation enhancement with deep learning and artificial intelligence was achieved in the lab, and our ambition is now to bring this technology closer to implementation in real environments and to address the simplifications that the lab allows. We are not starting from scratch, and many challenges can be overcome by integrating with available technology, i.e. using existing noise reduction and spatial filtering as a first stage of noise reduction. Instead, we will focus on developing new solutions to the challenges where available technology does not apply, e.g. automating the learning of new voices and adapting voice models as voices change due to ageing or temporarily because of a cold.

Similarly, the real-world data logging with the EVOTION hearing aids will move beyond the initial steps with a few hearing aid users as EVOTION patients get enrolled at the clinical sites. This will be a major upscaling of the amount of data on sound environments and hearing aid users’ preferences, and from such large amounts of data we expect to see patterns across users and across sound environments, and thus to qualify the hypotheses indicated by the small amount of pilot data.

Cognitive Hearing Science

Within the Cognitive Hearing Science area our focus throughout 2017 and into the near future is “Hearing & Cognition”. 
After years of research within this field, our strong interest in the cognitive aspects of audiology has rapidly expanded to cover exciting new ways to assess hearing aid outcome. Methods such as pupillometry and working memory tests have now shown their advantage in assessing hearing aids under more ecological, realistic, daily-life conditions. In addition, entirely new cognitive hearing assessment methods are underway.

It is amazing to experience what “neuro-feedback” means for the human-machine interface. The Cognitive Hearing Science team demonstrates this with a “Restaurant Problem Solver” vision: a physical set-up where a number of people around a table are each represented by one loudspeaker. When you join the table with an Ear-EEG sensor in your ear, you just need to look at the loudspeaker you want to listen to, and it is amplified whilst the others are attenuated. It is easy to imagine this application in real life, where the Ear-EEG could be used to steer beam-forming microphones or to pick and choose between button microphones attached to several speakers.
The reactions from hearing aid users trying this Restaurant Problem Solver have ranged from “when can I have this?” to “it is so intuitive and easy, you really feel in control”.
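
As a rough illustration of the steering principle, and not the actual Ear-EEG decoding used in the demo, the sketch below maps per-loudspeaker attention scores (here treated as the output of a black-box gaze/EEG decoder) to gains that amplify the attended source and attenuate the others. All numbers and names are illustrative assumptions.

```python
# Minimal sketch of the gain-steering idea behind the "Restaurant Problem
# Solver" demo: an attention decoder (not shown; treated as a black box) scores
# each loudspeaker, the attended one is amplified and the others attenuated.
# Gains, shapes, and scores are illustrative assumptions.
import numpy as np

def steer_gains(attention_scores, boost_db=6.0, cut_db=-12.0):
    """Map per-source attention scores to linear gains."""
    scores = np.asarray(attention_scores, dtype=float)
    attended = int(np.argmax(scores))
    gains_db = np.full(scores.shape, cut_db)
    gains_db[attended] = boost_db
    return 10.0 ** (gains_db / 20.0)

def mix_sources(sources, gains):
    """sources: array of shape (n_sources, n_samples); returns the steered mix."""
    return (np.asarray(gains)[:, None] * np.asarray(sources)).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sources = rng.standard_normal((3, 16000))   # three talkers, 1 s at 16 kHz
    scores = [0.2, 0.7, 0.1]                    # e.g. from an Ear-EEG decoder
    output = mix_sources(sources, steer_gains(scores))
    print(output.shape)
```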

Eriksholm’s strategic research area Cognitive Hearing Science is headed by Thomas Lunner. Learn more about Cognitive Hearing Science here.


Cognitive control of a hearing aid

The vision of being able to cognitively control a hearing aid has been on the Eriksholm agenda for many years. Thus, Eriksholm Research Centre was a driving force when defining the COCOHA (Cognitive Control of a Hearing aid) project in collaboration with École Normale Supérieure in Paris, UCL in London, UZH in Zürich and DTU in Copenhagen.
The project has received strong international attention. Members of the Cognitive Hearing Science team have been invited speakers at numerous international conferences.

The project, which has received funding from the European Union's Horizon 2020 research and innovation programme, has Thomas Lunner as Principal Investigator at Eriksholm Research Centre. During 2017, one PhD student (Antoine Favre-Félix) and six post-docs (Emina Alickovic, Tanveer Bhuiyan, Carina Graversen, Sergi Rotger Griful, Martin Skoglund, and Alejandro Lopez Valdes) worked intensively on the project. This major effort means we now have a hearing aid research platform, which contains integrated sensors to assess head movements and electrophysiology from the ear canal. This platform will enable us to gain knowledge about improvements in speech intelligibility through cognitive control of a hearing aid, and to evaluate testimonials from hearing-impaired users testing the system.

Learn more about COCOHA.

Listening effort in the European population: a new innovative programme of research and training (LISTEN)

In 2017, the LISTEN project came to an end. It was a European Industrial Doctorate project collaboration between the VUmc and Eriksholm Research Centre. This project received funding from the European Union's FP7 Research and Innovation funding programme under grant agreement No 607373.
In this project, two PhD candidates, Yang Wang and Barbara Ohlenforst, investigated if and how current hearing aid technologies can successfully decrease the listening effort required of hard-of-hearing people. Using pupillometry, the impact of hearing loss and hearing aid technologies was assessed during speech perception. It was demonstrated that hearing-impaired people expend more listening effort in conditions with fairly low background sound, where most of the speech can be recognised. Furthermore, it was shown that advanced hearing aid signal processing actually reduces listening effort under such ecological conditions.

Another research goal within LISTEN was to investigate the connection between the parasympathetic nervous system and hearing impairment by means of the pupillary response. It was examined whether the pupil light reflex (PLR) can be applied as a sensitive method to evaluate parasympathetic dysfunction. PLR results suggested an increased level of parasympathetic nervous system activity in people experiencing higher levels of need for recovery on a daily basis. Furthermore, it was shown that daily-life fatigue and hearing impairment made independent and equal contributions to the pupillary response within a speech-in-noise task. People with higher levels of daily-life fatigue and worse hearing acuity showed smaller pupil dilation during speech recognition.
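
For readers unfamiliar with pupillometry as an effort measure, the sketch below shows a common way to compute a baseline-corrected peak pupil dilation for a single trial. The sampling rate, window choices, and synthetic trace are illustrative assumptions, not the LISTEN project's exact analysis pipeline.

```python
# Minimal sketch of a common pupillometry outcome measure used in listening
# effort research: baseline-corrected peak pupil dilation for one trial.
# Sampling rate and window choices are illustrative assumptions.
import numpy as np

def peak_pupil_dilation(trace: np.ndarray, fs: float,
                        baseline_s: float = 1.0) -> float:
    """trace: pupil diameter samples for one trial, baseline period first."""
    n_base = int(baseline_s * fs)
    baseline = np.nanmean(trace[:n_base])      # mean diameter before speech onset
    return float(np.nanmax(trace[n_base:]) - baseline)

if __name__ == "__main__":
    fs = 60.0                                  # typical eye-tracker rate (Hz), assumed
    t = np.arange(0, 5, 1 / fs)
    # Synthetic trial: flat baseline, then a slow dilation peaking mid-trial.
    trace = 3.0 + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.5) * (t > 1.0)
    print(f"peak dilation: {peak_pupil_dilation(trace, fs):.2f} mm")
```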

Learn more in the video below.


“Innovative hearing aid research – ecological conditions and outcome measures” (HEAR-ECO)

A new Innovative Training Networks (ITN) started in December 2017. The H2020 EC Marie-Curie ITN project “Innovative hearing aid research – ecological conditions and outcome measures” (HEAR-ECO) will address the increasing concern for public health and social participation by developing and combining new tools and outcome measures that can be applied in realistic communication scenarios. The aim is to translate these tools into innovative developments and evaluations of new technology for people with hearing loss. HEAR-ECO is currently recruiting a new team of early stage researchers (ESR) that will be working at the nexus of technology, psychology, physiology and audiology. The key questions to be answered are how task demands, motivation, and invested effort modulate speech understanding and hearing-aid benefit in daily life. During the project phase, all ESRs will spend half of their time at the Eriksholm Research Centre.

During 2017, we also had visits from MHH Hannover, where new ecological methods based on the SWIR test (a working memory outcome measure) combined with pupillometry were developed for cochlear implants.


eHealth

Our strategic research area eHealth capitalises on information and communication technology (ICT) to improve hearing healthcare services. eHealth improves the flow of information within and around healthcare. To quote the World Health Organisation, “eHealth can put information in the right place at the right time, providing more services to a wider population and in a personalised manner.” Our vision is that eHealth will provide tools for hearing care professionals to deliver participatory, personalised, predictive, and preventive hearing healthcare. All of our eHealth solutions take the needs of the patient, the family, and the hearing care professional into consideration. This helps secure adherence to the hearing treatment.

The research area was in 2017 led by Ariane Laplante-Lévesque. Learn more about eHealth here.


App(etite) for life with hearing loss

This project was successfully completed in 2017.

The project was led by Annette Cleveland Nielsen and financed by the Danish Ministry of Higher Education and Science’s Agency for Science, Technology and Innovation. It was a collaboration between the Eriksholm Research Centre and Professor Anne Marie Kanstrup from Aalborg University. The project asked hearing aid users, their family and friends, and hearing care professionals to innovate the hearing healthcare of the future. In three iterative rounds of meetings spanning 2016-2017, the participants co-designed innovative and empowering eHealth solutions.

Follow this link to read more about the project.


Eriksholm guide to better hearing

This project develops and evaluates an e-learning program to better equip hearing aid users so they experience fewer negative consequences as a result of their hearing impairment. 2017 saw the completion of the development of the program and the start of the testing phase with National Health Service patients in collaboration with Melanie Ferguson, David Maidment, and their team from the Nottingham Biomedical Research Centre.

Follow this link to read more about the project.


Eriksholm big data project

This internal project, started in 2017, gathers data currently located in several places into one place, uses the data, and makes it grow. The project involves:

1) Gathering the data the Eriksholm Research Centre currently has on its research participants with hearing loss (n≈250, e.g. cognitive data, hearing aid logging data, outcome data) in one location, making sense of the data, and assessing their quantity and quality.
2) Formulating and testing hypotheses based on these data to contribute to better fitting/treatment.
3) Identifying data needed to complete the picture we currently have (e.g. wearable sensor data, at-home validated test data).
4) Starting to collect the data identified in 3).

Benefits and applications include insights from large datasets for better personalisation of hearing solutions. The first step has been to start identifying data locations and types. The second step was to read out data from hearing aids, including serial number, type, usage, volume control, and program time data, and to complement this with a measure of hearing aid outcome, the International Outcome Inventory for Hearing Aids (IOI-HA). The next step is to continue the data gathering with a focus on information collected during previous research projects.
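
As a simple illustration of the second step, the sketch below combines read-out hearing aid data with IOI-HA scores per participant. The table layout and column names are hypothetical, not the actual Eriksholm data structures; the IOI-HA total is the sum of its seven items, each scored 1-5.

```python
# Minimal sketch (hypothetical tables and column names): combining read-out
# hearing aid data with IOI-HA outcome scores per participant.
import pandas as pd

def combine_logs_and_outcomes(ha_logs: pd.DataFrame,
                              ioi_ha: pd.DataFrame) -> pd.DataFrame:
    """ha_logs: participant_id, serial_number, ha_type, daily_use_hours
    ioi_ha:  participant_id, item_1 ... item_7 (each scored 1-5)"""
    items = [c for c in ioi_ha.columns if c.startswith("item_")]
    ioi_ha = ioi_ha.assign(ioi_ha_total=ioi_ha[items].sum(axis=1))  # total score, 7-35
    return ha_logs.merge(ioi_ha[["participant_id", "ioi_ha_total"]],
                         on="participant_id", how="left")

if __name__ == "__main__":
    logs = pd.DataFrame({"participant_id": [1, 2],
                         "serial_number": ["A1", "B2"],
                         "ha_type": ["BTE", "RITE"],
                         "daily_use_hours": [9.5, 4.0]})
    outcomes = pd.DataFrame({"participant_id": [1, 2],
                             **{f"item_{i}": [4, 3] for i in range(1, 8)}})
    print(combine_logs_and_outcomes(logs, outcomes))
```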


International Meetings on Internet and Audiology

Born from the Eriksholm Research Centre’s wish to facilitate international scientific exchanges in the rapidly developing field of eHealth, the International Meetings on Internet and Audiology have been a resounding success. The Eriksholm Research Centre has been organising these meetings together with colleagues from Linköping University and the University of Louisville. The first meeting took place in Linköping, Sweden in 2014, the second meeting took place at the Eriksholm Research Centre, Denmark in 2015, and the third meeting took place at the University of Louisville, US on 27-28 July 2017. We thank our main sponsors, the US National Institutes of Health / National Institute on Deafness and Other Communication Disorders as well as the Oticon Foundation.

Special issues of the American Journal of Audiology were published after each meeting, summarising many of the presentations for the benefit of those who could not be present. The next meeting is planned for the first half of 2019.

Follow this link to read more about the previous and upcoming meetings.

The eHealth team has also contributed actively to research projects including Better HEARing Rehabilitation (BEAR), In school with HEARing impairment (IHEAR – link in Danish language), and the PhD project of Husmita Ratanjee-Vanmali at the University of Pretoria embedded in a new non-profit eHealth clinic in South Africa.

The eHealth team furthermore works together with Augmented Hearing on the EVOTION project.

2017 in numbers