Dr David McAlpine, Ms Adenike Deane-Pratt, Mr Torsten Marquardt, Mr Miles Paterson and Mr John Agapiou.
University College London.
Localising the source of a sound is fundamental to the way we perceive our environment. Incoming sound waves are processed by each ear independently, and the resulting information is combined in the brain to create the perception of auditory space.
Understanding and responding to the conversation of one person in a crowded bar or restaurant requires not just social skills, but a brain that can detect tiny differences in the arrival time of sounds at our two ears. This difference, called the interaural time difference (ITD), ranges from around 600 microseconds down to about 10 microseconds, far shorter than the typical response time of the nervous system, which is measured in milliseconds. Another vital process is our ability to filter out and discount the conflicting directional information carried by reflected sounds or echoes, known as the precedence effect. Recent research is revealing the complexity behind these unconscious processes.
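As a rough illustration of the measurement the brain performs, the ITD between two ear signals can be estimated by finding the delay at which the signals line up best, i.e. the lag that maximises their cross-correlation. This is a minimal sketch, not the neural mechanism the researchers describe; the signals, sample rate and function name are invented for the example.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds from two ear
    signals, as the lag maximising their cross-correlation.
    Positive values mean the sound reached the left ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return -lag / fs

# Illustrative check: a 500 Hz tone, with the right channel lagging by 500 us.
fs = 96_000                                 # high rate for microsecond resolution
t = np.arange(0, 0.02, 1 / fs)
delay_samples = int(round(500e-6 * fs))     # 500 microseconds -> 48 samples
left = np.sin(2 * np.pi * 500 * t)
right = np.roll(left, delay_samples)        # right ear hears the tone later
print(estimate_itd(left, right, fs))        # approx 0.0005 s
```

Note that even at 96 kHz the resolution of this simple estimator is about 10 microseconds per sample, which is roughly the smallest ITD the auditory system can exploit.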
'It is not just making speech intelligible', says David McAlpine. 'The auditory system constructs a spatial representation of our world in the brain - it quite literally is hearing where you are and it is fast enough to alert the visual system to interesting stimuli.' The use of functional magnetic resonance imaging (fMRI) can reveal areas in the brain that are used to assemble this sound picture of the world, from the brainstem to the cerebral cortex.
Our successful construction of the 'auditory scene' is best demonstrated by interfering with the aural cues on which the brain relies. For example, by increasing the amount of reverberation it is possible to eliminate the precedence effect and render speech completely unintelligible.
'Using virtual reality headsets and manipulating the ITD we can move the perception and location of a sound around in a person's head', says David. 'This could be reflected in fMRI images with more activity apparent on one side of the brain or the other.'
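The simplest way to move a sound around inside a listener's head over headphones is to delay one channel relative to the other by the desired ITD. The sketch below assumes this plain delay-one-channel approach; the researchers' virtual-reality setup is likely more elaborate (for example using head-related transfer functions), and the function name and parameters here are invented for illustration.

```python
import numpy as np

def apply_itd(mono, fs, itd):
    """Turn a mono signal into a stereo pair with one channel delayed by
    `itd` seconds. A positive itd delays the right channel, so over
    headphones the sound is perceived as coming from the left."""
    d = int(round(abs(itd) * fs))
    delayed = np.concatenate([np.zeros(d), mono])  # lagging ear
    padded = np.concatenate([mono, np.zeros(d)])   # leading ear, same length
    if itd >= 0:
        left, right = padded, delayed
    else:
        left, right = delayed, padded
    return np.column_stack([left, right])

# A 440 Hz tone with a 300 us ITD, heard lateralised toward the left ear.
fs = 48_000
tone = np.sin(2 * np.pi * 440 * np.arange(0, 0.5, 1 / fs))
stereo = apply_itd(tone, fs, 300e-6)
```

Sweeping the `itd` argument between roughly -600 and +600 microseconds moves the perceived location smoothly from one side of the head to the other.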
A phenomenon called the Binaural Intelligibility Level Difference can be illustrated by playing noise together with low-intensity speech that can barely be understood until the virtual location of the speech is moved away from that of the noise, at which point the speech becomes markedly more intelligible.