To develop a testing paradigm that uses immersive audio to elicit and measure non-target hearing in normal-hearing and hearing-impaired listeners.
We usually characterise hearing loss by its effect on the perception of sounds that are the target of the listener's attention. Indeed, much of the work in audiology and hearing science has centred on assessing and improving hearing-impaired (HI) listeners' "target hearing", with an emphasis on the ability to focus on a speech signal in the presence of task-irrelevant noise. However, these efforts capture only one aspect of listening in complex scenes. Before it was co-opted to support communication (speech processing), the auditory system evolved as an "early warning system", continuously monitoring the unfolding acoustic environment and rapidly directing attention to new events. It is sensitive to a wide space inaccessible to the other senses, and the brain relies on it to detect important changes in our surroundings (in the distant past, the appearance of a predator or prey; in the present, a bus looming behind us in a busy street).
This project is funded by the William Demant Foundation and is a unique collaborative opportunity between experts in audio experience design (Lorenzo Picinali), behavioural and brain measures of auditory scene analysis (Maria Chait, UCL), and hearing technology R&D (Martha Shiell, Eriksholm/Oticon) to develop a testing paradigm that uses immersive audio to elicit and measure non-target hearing in normal-hearing (NH) and HI listeners. In particular, we focus on a critical function of hearing: the ability to notice that a new source has joined the scene, or that an old one is no longer active, an important task in everyday life. By recreating this situation in the lab with carefully controlled stimuli, we can characterise the brain and perceptual mechanisms that underpin listeners' "situational awareness" and examine how they are affected by hearing impairment.