Chairs
Professor Stephen David, Oregon Health & Science University, USA
Humans and other animals create a coherent sense of the world from a continuously changing sensory environment. To understand this process, Professor Stephen David’s lab conducts experiments that manipulate behavioural state and record the activity of neural populations during the presentation of natural and naturalistic sounds. Data from these studies are used to develop computational models of neural sound coding, with the aim of understanding communication disorders and improving engineered systems for sensory signal processing. Professor Stephen David received his AB from Harvard University in 1998 and his PhD from the University of California, Berkeley in 2005.
13:30-13:50
Objective, reliable, and valid? Measuring auditory attention
Professor Jonas Obleser, University of Lübeck, Germany
Abstract
Auditory attention is a fascinating feat. It is astonishing, for example, how our brain 'does away' with considerable differences in sound pressure between a behaviourally relevant sound source and interfering ones. Yet auditory attention remains an elusive phenomenon: do we really understand it well enough yet to build machines that attend, or machines that help us attend? Illustrated by behavioural, electrophysiological, and functional imaging data from his own lab and others, Professor Obleser will take stock of the evidence: are top-down selective-attention abilities indeed a stable, trait-like feature of the individual listener, with predictable decline in older adults? And what are we really getting from our current go-to neural measures of auditory attention, speech tracking (aka 'neural entrainment') versus alpha-power fluctuations? Luckily, Professor Obleser will probably be out of time by the point the talk reaches the main question: what are we measuring when we measure auditory attention?
Professor Jonas Obleser, University of Lübeck, Germany
Jonas Obleser studies processes of auditory cognition and neuroscience. Since 2016, he has been Chair of Physiological Psychology at the University of Lübeck, Germany. After training and a PhD in Psychology at the University of Konstanz, he worked at the Institute of Cognitive Neuroscience, University College London, as well as at the Max Planck Institute in Leipzig, where he set up the research group 'Auditory Cognition'. His current research interests include neural oscillations in sensation, perception and cognition, as well as executive functions such as attention and memory, and how these processes interface neurally in human listeners. His research is currently funded by the European Research Council (ERC).
14:10-14:30
Auditory selective attention: lessons from distracting sounds
Dr Elana Golumbic, Bar Ilan University, Israel
Abstract
A fundamental assumption in attention research is that, since processing resources are limited, the core function of attention is to manage these resources and allocate them among concurrent stimuli or tasks, according to current behavioural goals and environmental needs. However, despite decades of research, we still do not have a full characterisation of the nature of these processing limitations, or ‘bottlenecks’ – ie which processes can be performed in parallel and where the need for attentional selection kicks in. This question is particularly pertinent in the auditory system, which has been studied far less extensively than the visual system, and is proposed to have a wider capacity for parallel processing of incoming stimuli.
In this talk Dr Golumbic will discuss a series of experiments studying the depth of processing applied to task-irrelevant sounds and their neural encoding in auditory cortex. She will look at how this is affected by the acoustic properties, temporal structure, and linguistic structure of unattended sounds, as well as by overall acoustic load and task demands, in an attempt to understand which levels suffer most from processing bottlenecks. In addition, she will discuss what we can learn about the capacity for parallel processing of auditory stimuli by pushing the system to its limits and requiring the division of attention among multiple concurrent inputs.
Dr Elana Golumbic, Bar Ilan University, Israel
Elana Zion Golumbic is a Senior Lecturer at the Multidisciplinary Center for Brain Research at Bar Ilan University, where she heads the Human Brain Dynamics Laboratory. Her main research interest is how the brain processes dynamic information under real-life conditions and in real-life environments. Research in the Human Brain Dynamics lab focuses on understanding the neural mechanisms underlying the processing of natural continuous stimuli such as speech and music, on-line multisensory integration, and the focusing of attention on one particular speaker in noisy, cluttered environments. Her lab uses a range of techniques for recording electric and magnetic signals from the human brain (EEG, MEG and ECoG), as well as advanced psychophysical tools (eye-tracking, virtual reality, psychoacoustics). This rich methodological repertoire allows the system to be studied at multiple levels and provides a broad perspective on the link between brain and behaviour.
15:30-15:50
The neuro-computational architecture of auditory attention
Professor Elia Formisano, Maastricht University, The Netherlands
Abstract
Auditory attention is a crucial component of real-life listening and is required, for instance, to enhance a particularly relevant aspect of a sound or to separate a sound of interest from noisy backgrounds. When listening to simple tones, attending to a certain frequency range induces a rapid and specific adaptation of neuronal tuning, which ultimately results in enhanced processing of that frequency range and suppression of other frequencies. But what are the neural mechanisms enabling attentive selection and enhancement when listening to complex real-life sounds and scenes? At which levels of neural sound representation does attention operate? And how do these mechanisms depend on the specific behavioural requirements? High-resolution fMRI and computational modelling of sound representations both make relevant contributions to addressing these questions. Sub-millimetre fMRI makes it possible to distinguish the activity and connectivity of neuronal populations across cortical layers non-invasively in humans (laminar fMRI). This is required for disentangling feedforward/feedback processing in primary and non-primary auditory areas and the communication between auditory and other areas (eg frontal areas). Modelling of sound representations makes it possible to formulate well-defined hypotheses about the nature of the simple and complex features processed in the network of auditory areas, and about how the neural sensitivity to these features is affected by attention and behavioural task demands. The combination of laminar fMRI and sound representation models is thus ideally positioned to unravel the neural circuitry and the computational architecture of auditory attention in naturalistic listening scenarios.
Professor Elia Formisano, Maastricht University, The Netherlands
Elia Formisano is Professor of Analysis Methods in Neuroimaging and Scientific Director of the Maastricht Brain Imaging Center (MBIC). He is principal investigator of the Auditory Cognition research group at the Department of Cognitive Neuroscience, Maastricht University, and principal investigator of the research line Computational Biology of Neural and Genetic Systems at the Maastricht Centre for Systems Biology (MaCSBio).
His research aims to discover the neural basis of human auditory perception and cognition by combining multimodal functional neuroimaging with advanced signal analysis and computational modelling. He pioneered the use of functional MRI at ultra-high magnetic field and of machine learning for the investigation of human audition.
16:10-16:30
How attention modulates processing of mildly degraded speech to influence perception and memory
Professor Ingrid Johnsrude, Western University, Canada
Abstract
Professor Johnsrude and colleagues have previously demonstrated that, whereas the pattern of brain (fMRI) activity elicited by clearly spoken sentences does not seem to depend on attention, patterns are markedly different when attending or not attending to highly intelligible but degraded (6-band noise-vocoded) sentences (Wild et al, J Neurosci, 2012). They have replicated and extended this work to sentences that, although slightly degraded (12-band noise-vocoded), can be reported word-for-word with 100% accuracy. Even for these very intelligible materials, a marked dissociation was observed between patterns of brain activity when people attended to the sentences and when they were instead performing a multiple object tracking task. Furthermore, in both of these experiments, memory for degraded items was enhanced by attention, whereas memory for clear sentences was not, suggesting that even perfectly intelligible but degraded sentences are processed in a qualitatively different, attentionally gated way compared with clear sentences. Supported by a Canadian Institutes of Health Research operating grant (MOP 133450) and a Natural Sciences and Engineering Research Council of Canada Discovery grant (3274292012).
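As background for readers unfamiliar with the manipulation mentioned above, the sketch below illustrates the general idea of noise vocoding: the speech waveform is split into a small number of frequency bands, the slow amplitude envelope of each band is extracted, and each envelope is used to modulate band-limited noise before the bands are summed; fewer bands give a more degraded signal. This is a minimal illustrative sketch in Python, not the stimulus-generation procedure used by Wild et al (2012); the band edges, filter order, and Hilbert-based envelope extraction are assumed choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, f_lo=100.0, f_hi=7000.0):
    """Replace the spectral fine structure of `speech` with noise, keeping only
    the amplitude envelope in each of `n_bands` logarithmically spaced bands.
    Band edges, filter order, and envelope method are illustrative choices."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    vocoded = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)        # speech restricted to this band
        envelope = np.abs(hilbert(band))       # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)      # noise restricted to the same band
        vocoded += envelope * carrier          # envelope-modulated noise band
    return vocoded / np.max(np.abs(vocoded))   # normalise to avoid clipping

# Example: vocode one second of an amplitude-modulated tone at a 16 kHz sampling rate.
fs = 16000
t = np.arange(fs) / fs
test_signal = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded_6_band = noise_vocode(test_signal, fs, n_bands=6)
vocoded_12_band = noise_vocode(test_signal, fs, n_bands=12)
```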
Professor Ingrid Johnsrude, Western University, Canada
Dr Johnsrude (PhD McGill) is Professor and Western Research Chair at Western University. She trained as a clinical neuropsychologist but for the last 20 years has been using neuroimaging (particularly EEG and fMRI) and psychoacoustic methods to study the importance of knowledge and attention in guiding auditory and speech perception. Dr Johnsrude is the author of more than 100 peer-reviewed publications, which have been cited nearly 19,700 times (Google Scholar). She has won multiple awards for her research, including the EWR Steacie Fellowship (2009) from the Natural Sciences and Engineering Research Council of Canada. She lives in London, Canada with her husband and two children.
16:50-17:10
General discussion
17:10-17:30
Concluding session