While everyday conversation and speech understanding are highly dependent on auditory perception, visual cues also play an important role, especially in challenging listening conditions. Previous studies have shown that adding visual information can improve speech perception in noise. However, relatively little is known about how the addition of visual information during a speech perception task affects the deployment of cognitive resources and listening effort.
This project addresses the question of how effort is differentially allocated in audiovisual versus auditory-only environments. The project is funded by the William Demant Foundation and Innovation Fund Denmark.