Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.
From sender to receiver: physics and sensory ecology of hearing in insects and vertebrates
Theo Murphy international scientific meeting organised by Dr Andrei Kozlov and Dr Joerg Albert.
Hearing evolved independently in insects and vertebrates, and the gross anatomy of auditory systems can look very different indeed. For example, grasshoppers have ears on their legs. The biophysics of signal transduction in the ear and the neural processing of sound in the brain, however, share basic similarities across species. This meeting aims to explore and discuss these fundamental principles.
More information on the speakers and programme will be available soon. Recorded audio of the presentations will be available on this page after the meeting has taken place.
Attending the event
This is a residential conference, which allows for increased discussion and networking.
- Free to attend
- Advance registration essential (more information on registration will be available soon)
- Catering and accommodation available to purchase during registration
Enquiries: contact the Scientific Programmes team
Schedule
09:00 - 09:05 | Welcome by the Royal Society
09:05 - 09:30 | Mechanical tuning of the hair bundle
In the vertebrate ear, hearing starts with the deflection of the hair bundle. Remarkably, the hair bundle can oscillate spontaneously, providing frequency-selective amplification of weak inputs. This talk will first discuss how gating of the ion channels that mediate mechano-electrical transduction shapes hair-bundle oscillations. Although the activation time of the transduction channels is two orders of magnitude shorter than the oscillation period, we find that channel kinetics is a key determinant of the oscillation waveform and frequency. In auditory organs, morphological gradients suggest that the hair bundle may operate as a tuning fork. The second part of the talk will show that the stiffness and tension of the gating springs that pull on the transduction channels increase from the low-frequency to the high-frequency end of the rat cochlea. These results reveal that the transduction apparatus of the hair cell is mechanically tuned according to the cell’s characteristic frequency. Dr Pascal Martin, Institut Curie, France
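The gating-spring picture invoked in this abstract can be made concrete with a toy calculation. The sketch below lumps the transduction channels into a single effective channel and uses assumed stiffness and gating-swing values; it only illustrates how channel gating reshapes the bundle's force-displacement relation and is not a model from the talk.

```python
# A toy, single-effective-channel sketch of the gating-spring picture: gating of
# the transduction channels feeds back on bundle mechanics and can carve a region
# of reduced, even negative, stiffness into the force-displacement curve.
# All parameter values below are illustrative assumptions.
import numpy as np

kT   = 4.1e-21   # thermal energy at ~300 K (J)
k_gs = 1.0e-3    # gating-spring stiffness (N/m), assumed
k_sp = 0.3e-3    # stereociliary-pivot stiffness (N/m), assumed
d    = 8.0e-9    # gating swing projected onto the bundle axis (m), assumed
z    = k_gs * d  # gating force (N)

def p_open(x):
    """Boltzmann open probability of the transduction channels at deflection x."""
    return 1.0 / (1.0 + np.exp(-z * x / kT))

def bundle_force(x):
    """External force needed to hold the bundle at deflection x (N)."""
    return k_sp * x + k_gs * (x - d * p_open(x))

x = np.linspace(-40e-9, 40e-9, 401)           # deflections of +/- 40 nm
stiffness = np.gradient(bundle_force(x), x)   # local slope dF/dx
print("minimum local stiffness: %.2e N/m" % stiffness.min())  # negative near p_open ~ 0.5
```

With these assumed numbers the gating term outweighs the passive stiffness near the channels' operating point, producing the gating compliance (here, negative stiffness) that is thought to underlie spontaneous oscillation.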
Dr Pascal Martin, Institut Curie, France
After undergraduate studies in physics and chemistry at Ecole Supérieure de Physique et Chimie Industrielle de la ville de Paris (ESPCI; Paris, France), Pascal Martin obtained a PhD in physics in 1997 from Université Pierre et Marie Curie (UPMC; Paris, France). From 1997 to 2000, he was a postdoc with Jim Hudspeth at the Rockefeller University (New York, USA), where he started working on hair-cell biophysics. In 2000, he was hired as a tenured researcher by the CNRS and returned to Paris to start an independent line of research as a group leader at the Institut Curie, where he is still working. Pascal Martin has been developing quantitative biophysical approaches to study the properties of the hair-cell bundle, as well as biomimetic molecular systems that comprise molecular motors and reconstituted cytoskeletal filaments.
09:30 - 09:40 | Discussion
09:40 - 10:10 | Nonlinear dynamics of inner ear hair cells
Hair cells of the inner ear exhibit a highly nonlinear response to external signals across a broad range of stimuli, and this nonlinearity has been shown to be crucial to the acuity of hearing. The dynamics of the hair bundle have furthermore been shown to be active, exhibiting motility in the absence of input. To understand the physical mechanisms behind the sensitivity of auditory detection, we explore how hair bundles synchronize their innate oscillations to external stimuli. We demonstrate experimentally that bundles can phase-lock to a broad range of frequencies, in various mode-locking ratios. Further, we demonstrate the presence of chaos in the underlying dynamics of active bundles and explore its impact on the sensitivity of detection. We also explore the interaction between active bundle mechanics and the electrical circuit comprised of somatic ion channels, and measure the impact of this coupled system on the overall detection performed by the hair cell. Finally, we show that innate oscillation is a ubiquitous phenomenon, occurring in multiple end organs. Professor Dolores Bozovic, UCLA, USA
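The phase locking described above can be illustrated with the simplest textbook model of entrainment, the Adler phase equation. The sketch below is a generic toy with arbitrary parameters, not the hair-bundle model studied in this work.

```python
# A generic toy of 1:1 entrainment using the Adler phase equation,
# dphi/dt = detuning - eps * sin(phi): the oscillator locks to the stimulus only
# when the detuning is smaller than the coupling eps.  This is a textbook model,
# not the hair-bundle model used in the talk; all numbers are assumptions.
import numpy as np

def locks(detuning, eps=1.0, duration=100.0, dt=1e-3):
    """Integrate the Adler equation and report whether the phase difference stays bounded."""
    phi = 0.0
    for _ in range(int(duration / dt)):
        phi += dt * (detuning - eps * np.sin(phi))
    return abs(phi) < 2 * np.pi   # locked: phi settles near a fixed point; unlocked: it drifts

for detuning in (0.2, 0.8, 1.5, 3.0):
    print(f"detuning {detuning:>3}: {'locked' if locks(detuning) else 'drifting'}")
# Higher-order m:n mode locking, as reported for hair bundles, requires richer
# forced-oscillator models; this sketch captures only the simplest 1:1 regime.
```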
Professor Dolores Bozovic, UCLA, USA
Dolores Bozovic received her PhD in Physics from Harvard University in 2001, for work on electron transport in carbon nanotubes. She then completed postdoctoral training at Rockefeller University, from 2001 to 2005, in a sensory neuroscience laboratory. Since 2005, she has been Assistant, Associate, and now full Professor in the Department of Physics and Astronomy and the California NanoSystems Institute at the University of California, Los Angeles. The Bozovic lab focuses on problems at the interface between physics and sensory neuroscience. In particular, the group studies how auditory and vestibular systems perform mechanical sensing down to the nanometer level.
10:10 - 10:20 | Discussion
10:20 - 10:40 | Coffee Break
10:40 - 11:10 | MET channel blockers: fundamental insights and potential for otoprotection
The mechano-electrical transducer (MET) channels of sensory hair cells are nonselective cation channels. They have a high permeability but low conductance for calcium ions, which regulate hair-cell adaptation. The MET channel is permeable to large polycations, which block the channel by competing with Ca2+ for binding sites in the permeation pore, but also enter the channel when the hair cells are hyperpolarized. Aminoglycoside antibiotics such as gentamicin, which have loss of hearing and balance as a side effect, enter hair cells by this route. Mutations in TMC proteins result in quantitative changes in the interaction of aminoglycosides with the channel. This suggests that TMC proteins are pore-forming subunits of the MET channel. We are studying various other polycations. Some of these compete with aminoglycosides and provide protection from ototoxicity, while others reveal a gradient in size of the MET channel pore along the length of the mammalian cochlea. Professor Corné Kros, University of Sussex, UK
Professor Corné Kros, University of Sussex, UK
Corné Kros MD PhD is Professor of Neuroscience at the University of Sussex. He qualified as a medical doctor at the University of Groningen, The Netherlands. He went on to do a PhD in Physiology in Cambridge, studying inner hair cells, the sensory receptor cells in the cochlea that signal the reception of sound to the brain. In his research career he has continued to focus on cochlear hair cell physiology, in particular spontaneous activity in pre-hearing inner hair cells and the process of mechano-electrical transduction by which these cells detect sound. His current interests, funded by the MRC and Action on Hearing Loss, lie in the detrimental side effects of aminoglycoside antibiotics on hearing and in the development, based on fundamental insights into the mechano-electrical transducer channel, of blocking agents that might prevent hair-cell damage due to treatment with these drugs.
11:10 - 11:20 | Discussion
11:20 - 11:50 | Multiple manifestations of adaptations in mammalian auditory hair cells are driven by stimulus modality
Fast adaptation of hair cell mechanotransduction currents manifests itself differently depending on the mode of stimulation. Recent evidence demonstrating a non-calcium-dependent component of fast adaptation was challenged as a stimulus artifact, because fluid-jet responses differed from those obtained with stiff-probe stimulation. Our data demonstrate the biological underpinnings of fast adaptation: eliminating PIP2 from membranes eliminates fast adaptation. Fluid-jet stimulation did not reveal a time-dependent component of adaptation, yet steady-state adaptation persisted at positive potentials. Shaping the stimulus to be fast revealed a fast-adaptation component in the current response that persisted at positive potentials. Professor Anthony Ricci, Stanford University, USA
Professor Anthony Ricci, Stanford University, USA
Professor Anthony Ricci (Tony) was born in the Bronx, New York City. He attended Case Western Reserve University, obtaining a degree in Chemistry, and went to graduate school at Tulane University, earning a degree in Neuroscience while studying hair cell neurotransmission. He was a postdoctoral fellow in Manning Correia's lab at the University of Texas, studying the biophysical properties of vestibular hair cells, followed by a postdoc with Robert Fettiplace at the University of Wisconsin studying hair cell mechanotransduction. His first faculty position was at LSU in New Orleans, followed by his present position at Stanford University.
13:00 - 13:30 | From hearing to listening: the role of auditory cortex in making sense of sounds
This talk will discuss two recent lab studies that have demonstrated that neural activity in auditory cortex does not merely reflect sound acoustics. The first study sought to determine the co-ordinate frame in which spatial tuning exists in auditory cortex by recording from the auditory cortex of freely moving ferrets and reconstructing spatial receptive fields in either head-centered or world-centered co-ordinate frames. While the majority of neurons (~80%) encode sound location relative to the head, a minority (20%) of neurons represent the location of a sound source in the world, independently of the orientation of the animal. The second study will highlight the impact that visual signals can have on auditory cortical activity and present data suggesting that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis. Dr Jennifer Bizley, University College London, UK
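The head-centred versus world-centred distinction in the first study can be made concrete with a small synthetic example. The simulation below assumes a made-up Gaussian-tuned, head-centred neuron and arbitrary numbers; it is not a reanalysis of the ferret data, just an illustration of why the choice of coordinate frame matters when reconstructing spatial receptive fields.

```python
# A synthetic neuron tuned to sound azimuth relative to the head shows sharp
# tuning when its responses are binned in head-centred coordinates, but nearly
# flat tuning when binned in world-centred coordinates, because the simulated
# animal keeps turning.  Tuning shape and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 20000
world_az = rng.uniform(-180, 180, n_trials)          # sound location in the world (deg)
head_dir = rng.uniform(-180, 180, n_trials)          # head orientation in the world (deg)
head_az = (world_az - head_dir + 180) % 360 - 180    # sound location relative to the head

def rate(az_deg, preferred=30.0, width=25.0):
    """Assumed Gaussian tuning curve of the synthetic, head-centred neuron."""
    return np.exp(-0.5 * ((az_deg - preferred) / width) ** 2)

resp = rate(head_az) + 0.05 * rng.standard_normal(n_trials)   # responses + noise

def tuning_depth(az, resp, n_bins=12):
    """Max minus min of the binned mean response (depth of the tuning curve)."""
    edges = np.linspace(-180, 180, n_bins + 1)
    bins = np.digitize(az, edges) - 1
    means = np.array([resp[bins == b].mean() for b in range(n_bins)])
    return means.max() - means.min()

print("tuning depth, head-centred binning:  %.2f" % tuning_depth(head_az, resp))
print("tuning depth, world-centred binning: %.2f" % tuning_depth(world_az, resp))
```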
Dr Jennifer Bizley, University College London, UK
Dr Jennifer Bizley obtained her DPhil from the University of Oxford, where she was also a postdoctoral fellow. She is currently a Reader and holder of a Royal Society/Wellcome Trust Sir Henry Dale Fellowship at the Ear Institute, University College London, where her research group is based. Her work explores the brain basis of listening and, in particular, how auditory and non-auditory factors influence the processing of sound. Her research combines behavioural methods with techniques to measure and manipulate neural activity as well as anatomical and computational approaches.
13:30 - 13:40 | Discussion
13:40 - 14:10 | How the human brain detects patterns in sound sequences
This talk will present ongoing work in Professor Chait's lab using brain imaging (EEG, MEG and fMRI), behavioural and eye-tracking experiments to reveal how human listeners discover patterns and statistical regularities in rapid sound sequences. Sensitivity to patterns is fundamental to sensory processing, in particular in the auditory system, and a major component of the influential ‘predictive coding’ theory of brain function. Supported by growing experimental evidence, the ‘predictive coding’ framework suggests that perception is driven by a mechanism of inference, based on an internal model of the signal source. However, a key element of this theory, the process through which the brain acquires this model and its neural underpinnings, remains poorly understood. The experiments focus on this missing link. The research approach, based on measuring behavioural and brain responses to rapid tone-pip sequences governed by specifically controlled rules along a variety of feature dimensions, enables the group to address (1) how the brain discovers patterns in sound sequences, (2) which neural mechanisms are involved, and (3) to what degree the process is automatic or susceptible to the attentional state and behavioural goals of the listener. Professor Maria Chait, University College London, UK
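To make the detection problem concrete, the toy below generates a sequence of tone pips that switches from random draws to a repeating cycle and applies a naive detector that fires once the cycle has been heard twice. The pool size, cycle length and detection rule are made-up parameters for illustration, not the stimuli or analyses used in the lab.

```python
# A toy 'random to regular' tone-pip sequence: random draws followed by a
# repeating cycle.  The naive detector fires the first time the most recent
# cycle_len pips exactly repeat the preceding cycle_len pips.
import random

def make_sequence(pool_size=20, cycle_len=10, n_rand=40, n_cycles=4, seed=0):
    rng = random.Random(seed)
    rand_part = [rng.randrange(pool_size) for _ in range(n_rand)]
    cycle = [rng.randrange(pool_size) for _ in range(cycle_len)]
    return rand_part + cycle * n_cycles, n_rand

def first_detection(seq, cycle_len):
    """Index at which the last cycle_len pips first repeat the previous cycle_len pips."""
    for t in range(2 * cycle_len, len(seq) + 1):
        if seq[t - cycle_len:t] == seq[t - 2 * cycle_len:t - cycle_len]:
            return t
    return None

seq, transition = make_sequence()
print("regularity begins at pip", transition)
print("naive detector fires at pip", first_detection(seq, cycle_len=10))
# With this rule, detection cannot occur until the cycle has been presented
# twice, i.e. no earlier than pip transition + 2 * cycle_len.
```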
Professor Maria Chait, University College London, UK
Maria Chait is a Professor of auditory cognitive neuroscience at the Ear Institute, University College London. Professor Chait moved to UCL in 2007, as a Marie Curie research fellow, following a short postdoc at Ecole normale supérieure, Paris. Professor Chait's PhD research (2006) was conducted in the Neuroscience and Cognitive Science program at the University of Maryland, College Park, USA, under the supervision of Jonathan Simon and David Poeppel. Her undergraduate background is in Computer Science, Economics, and East Asian Studies.
14:10 - 14:20 | Discussion
14:20 - 14:50 | Auditory plasticity to recognise species-specific vocal categories in mice
Both senders and receivers in any species-specific communication system have presumably evolved mechanisms to match the production of biologically meaningful signals to the sensory “filters” for recognizing their importance. How such “sign” stimuli release so-called fixed action patterns is largely thought to be implemented through “innate releasing mechanisms.” Importantly though, such mechanisms do not necessarily imply static systems insensitive to experience and learning, as Konrad Lorenz’s classic imprinting experiments demonstrated. However, the mechanisms for sensory plasticity to support such behaviorally important learning during natural communication are not well understood. Here, I describe studies in mice that have exploited a natural ultrasonic communication system between mouse pups and their mothers to reveal novel forms and mechanisms of neural plasticity within the auditory system, particularly at the auditory cortical level, to support the recognition of infant vocalisations, an ethologically significant function of the auditory system of adult females. Dr Robert C Liu, Emory University, USA
Dr Robert C Liu, Emory University, USA
Dr. Robert C. Liu is an Associate Professor in the Department of Biology at Emory University. He received his PhD in Applied Physics from Stanford University for his theoretical and experimental work in condensed matter physics before transitioning into neuroscience as a Sloan Postdoctoral Fellow at the University of California, San Francisco’s Center for Theoretical Neurobiology. He trained there with Christoph Schreiner, Michael Merzenich and Kenneth Miller in sensory systems and computational neuroscience, and began a neuroethological program of research in sensory systems. At Emory University, his Computational Neuroethology Laboratory studies the mechanisms of sensorineural processing and plasticity in natural, social behavioral contexts. His auditory research investigates how auditory cortical processing changes as species-specific communication sounds acquire behavioral meaning, using a mouse model wherein maternal mice learn the significance of the ultrasonic vocalizations of pups.
14:50 - 15:00 | Discussion
15:00 - 15:20 | Tea Break
15:20 - 15:50 | From song to synapse: the neurobiology of vocal communication
The interplay between hearing and vocalization is critical to vocal communication and vocal learning. Recent research using both songbirds and mice has provided useful insights into the neural circuits and mechanisms that mediate this sensorimotor interplay. I will discuss recent progress in understanding how auditory and motor systems interact to enable vocal learning and communication. Professor Richard Mooney, Duke University, USA
Professor Richard Mooney, Duke University, USA
Richard Mooney is the George Barth Geller Professor of Neurobiology in the Department of Neurobiology in the Duke University School of Medicine. He obtained his BS in Biology from Yale University and his PhD in Neurobiology from Caltech, and pursued postdoctoral training at Stanford University before joining the Duke faculty as an assistant professor in 1994. Motivated by a longstanding interest in neuroscience and music, a major focus of his research is on auditory-motor interactions in songbirds, using high-resolution electrophysiological and optical methods to address how experience and practice shape the structure and function of the neural circuits necessary for learned vocal communication. More recently, his group has also been exploring how motor and auditory cortical circuits interact to facilitate hearing and vocal communication in mice.
16:00 - 16:30 | Song recognition in crickets meets evolution: phenotypic diversity from a single mechanism
Male crickets produce a species-specific song signal built from pulse trains that attracts conspecific females. Behavioural tests with females of different species demonstrate rather diverse phonotactic preference profiles, with selectivity for different temporal features such as pulse rate, pulse duration or pulse duty cycle. A computational model based on a template-matching mechanism can account for this phenotypic diversity, and for transitions between preference profiles, through small changes in the processing algorithm that depend on the relative amplitude and timing of excitation and inhibition. A small network of auditory neurons in the cricket’s brain performs pulse rate recognition via a coincidence detector based on a delay line and a post-inhibitory rebound. A specific modelling approach based on linear/nonlinear models of this network demonstrates that all the computational components of the brain neurons are required for pulse rate recognition. Our combined approach illustrates how the neuronal network can account for rapid transformations between phenotypic preference profiles during evolution. Professor Matthias Hennig, Humboldt-Universität zu Berlin, Germany
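A deliberately stripped-down version of the delay-line coincidence idea can be sketched in a few lines. The toy below uses assumed pulse widths, periods and delay, and omits the post-inhibitory rebound dynamics; it only illustrates why such a detector is selective for pulse rate and is not the published cricket network model.

```python
# A stripped-down toy of the delay-line coincidence idea: the input pulse train
# is multiplied with a copy of itself delayed by a fixed internal delay, and the
# mean of the product serves as the 'response'.  All numbers are assumptions.
import numpy as np

def pulse_train(period_ms, pulse_ms=5.0, duration_ms=500.0, dt=0.1):
    """A rectangular pulse train with the given pulse period (all times in ms)."""
    t = np.arange(0.0, duration_ms, dt)
    return ((t % period_ms) < pulse_ms).astype(float)

def coincidence_response(stimulus, delay_ms, dt=0.1):
    """Mean product of the stimulus with a copy of itself delayed by delay_ms."""
    shift = int(round(delay_ms / dt))
    delayed = np.zeros_like(stimulus)
    delayed[shift:] = stimulus[:-shift]
    return float(np.mean(stimulus * delayed))

internal_delay = 30.0   # ms; assumed to match the conspecific pulse period
for period in (15, 20, 30, 40, 60):
    r = coincidence_response(pulse_train(period), internal_delay)
    print(f"pulse period {period:>3} ms -> response {r:.3f}")
# The unit responds only when the pulse period equals the internal delay or an
# integer fraction of it; the network described in the talk additionally relies
# on inhibition and a post-inhibitory rebound.
```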
Professor Matthias Hennig, Humboldt-Universität zu Berlin, Germany
Professor Matthias Hennig obtained his PhD in Neurobiology from the Australian National University in 1990, before going on to become a research fellow in Neurobiology at the Max Planck Institute of Behavioural Physiology. Between 1994 and 1997, Professor Hennig was a research fellow at the Humboldt University of Berlin, before becoming an Assistant Professor in Zoology in 1998. Professor Hennig remains at the Humboldt University of Berlin as part of the Behavioural Physiology Group. His research interests include auditory processing and the evolution of acoustic communication signals in insects, decision making in insects, and information-theoretic approaches to neural signal processing.
16:30 - 17:00 | Discussion
09:00 - 09:30 | Acoustic communication and evolution in Drosophila: roles for a nuclear receptor and its regulon
All behaviour is guided, or restricted, by the senses. Sense organs have evolved in multiple ways to extract and pre-process information from the external world. However, the molecular mechanisms of sense organ specification and their evolutionary origins have remained unclear. We have used closely related Drosophila species to explore how ears can contribute to evolution, and how evolution, in turn, has shaped ears. In flies (Diptera), hearing is mediated by Johnston’s Organ (JO) neurons in the second antennal segment (1). In Drosophilids, the spectral tuning of the flies’ antennal ears correlates with the spectral composition of song pulses produced by conspecific males (2). Laser-Doppler vibrometric analysis of sound receiver mechanics and extracellular recordings of compound action potentials from the antennal nerve show that the species-specific auditory tuning is partly the result of variations in the molecular modules for mechanotransduction in JO neurons. RNA-Seq-based transcriptomics of the JOs from six closely related Drosophila species, combined with predictive bioinformatics (i-cisTarget and iRegulon), identified a particular type of transcription factor from the nuclear hormone receptor family as an important contributor to inter-specific variation in Drosophilid ears. Nuclear hormone receptor proteins are also required for normal sex organ development. The investigated mutants showed sexually dimorphic defects in auditory function (both with regard to auditory mechanics and auditory nerve responses). On the sender side of Drosophila acoustic communication, in turn, mutant males displayed severe defects in song production (both in their propensity to produce songs and with regard to song structure). The duality of its contributions presents this nuclear receptor gene as a potential substrate for genetic coupling in the Drosophila acoustic communication system. Dr Joerg Albert
09:30 - 09:40 | Discussion
09:40 - 10:10 | Making an effort to listen: mechanical amplification by ion channels and myosin motors in hair cells of the inner ear
Human hearing is enhanced by an active process that amplifies the ear's mechanical inputs several hundredfold, sharpens frequency tuning to allow the discrimination of tones differing in frequency by less than 0.2 %, and compresses six orders of magnitude in the amplitude of sounds into only two orders of magnitude in neural output. In addition, spontaneous otoacoustic emissions emerge from ears in a very quiet environment, an indication that the active process can be so exuberant as to become unstable. Cooperativity between mechanoelectrical-transduction channels confers negative stiffness on the hair bundle, which together with myosin-based adaptation motors elicits a dynamical instability that underlies the active process. Experiments on individual hair bundles indicate that the bundle's operation near this instability, a Hopf bifurcation, accounts for the four characteristics of the active process. Professor Jim Hudspeth, The Rockefeller University, USA
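The compressive nonlinearity mentioned here has a simple worked form: for the Hopf normal form poised exactly at the bifurcation and driven at its characteristic frequency, the steady-state response amplitude grows as the cube root of the forcing, so six decades of input collapse into two decades of output. The snippet below evaluates that textbook relation in arbitrary normal-form units; it is not a model of the experiments described.

```python
# Cube-root compression at a Hopf bifurcation: the steady-state response R to a
# resonant forcing F satisfies R**3 = F in normal-form units, so R = F**(1/3).
import numpy as np

forcing = np.logspace(-6, 0, 7)      # six orders of magnitude of input
response = forcing ** (1.0 / 3.0)    # cube-root compression at the bifurcation

for F, R in zip(forcing, response):
    print(f"input {F:9.1e}  ->  response {R:9.1e}")

decades_in = np.log10(forcing[-1] / forcing[0])     # 6 decades of input ...
decades_out = np.log10(response[-1] / response[0])  # ... become 2 decades of output
print(f"{decades_in:.0f} input decades compressed into {decades_out:.0f} output decades")
```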
Professor Jim Hudspeth, The Rockefeller University, USA
Born and raised in Houston, Texas, Jim Hudspeth conducted undergraduate studies at Harvard College and received PhD and MD degrees from Harvard Medical School. Following postdoctoral work at the Karolinska Hospital in Stockholm, he served on the faculties of the California Institute of Technology, the University of California, San Francisco, and the University of Texas Southwestern Medical Center. After joining the Howard Hughes Medical Institute, Jim moved to Rockefeller University, where he is F. M. Kirby Professor. Dr. Hudspeth conducts research on hair cells, the sensory receptors of the inner ear. He and his colleagues are especially interested in the active process that sensitizes the ear, sharpens its frequency selectivity, and broadens its dynamic range. They also investigate the replacement of hair cells as a potential therapy for hearing loss. Jim is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society.
10:10 - 10:20 | Discussion
10:20 - 10:40 | Coffee
10:40 - 11:10 | Novel synaptic transmission from vestibular hair cells to calyceal afferents serves fast reflexes in amniotes
The vestibular type I hair cell and its distinctive calyceal synapse are found only in the inner ears of reptiles, birds and mammals. Like the cochlea, the type I-calyx synapse may represent an adaptation to life on land. Over the past 20 years, evidence has accrued that these unusual-looking synapses are also functionally remarkable, featuring not just chemical (quantal) transmission of vesicle-bound glutamate from ribbon synapses but also a form of non-quantal transmission that depends on currents through ion channels in dense arrays on presynaptic (hair cell) and postsynaptic (calyceal) membranes. Quantal and non-quantal transmission filter the transmitted mechanosensory signal in distinct ways and have been recorded both together and separately, suggesting unexpectedly rich possibilities for shaping vestibular inputs to the brain. Professor Ruth Anne Eatock
Professor Ruth Anne Eatock
Ruth Anne Eatock was introduced to comparative neurobiology of the auditory system by Geoff Manley, who supervised her undergraduate and Master’s theses at McGill University. These studies stimulated an interest in hair cell physiology, and for her doctoral research she worked with Jim Hudspeth at Caltech on sensory adaptation in vertebrate hair cells. As a postdoctoral fellow, she studied the dependence of auditory nerve fiber firing rate on sound pressure level in the alligator lizard, a model system developed by Tom Weiss of MIT and the Eaton-Peabody Laboratory. Ruth Anne held academic positions at the University of Rochester in New York, Baylor College of Medicine in Houston, and Harvard Medical School before arriving at the University of Chicago in 2014. Her research has focused on developing and mature function in vestibular hair cells and afferent synapses of diverse vertebrates, especially rodents.
11:10 - 11:20 | Discussion
11:20 - 11:50 | The role of the auditory brainstem in understanding speech in challenging listening conditions
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. This talk describes a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. This research employs the method to assess the brainstem's activity when a subject listens to one of two competing speakers, and shows that the brainstem response is consistently modulated by attention. Dr Tobias Reichenbach, Imperial College London, UK
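As a generic illustration of how a response to continuous, non-repeating speech can be estimated, the sketch below fits a linear temporal response function by ridge regression between a synthetic stimulus feature and a synthetic recording. This is a standard system-identification recipe with made-up signals and parameters, not the specific mathematical method presented in this talk.

```python
# Estimate a linear response kernel relating a continuous stimulus feature to a
# recorded signal, using a lagged design matrix and ridge regression.
# All signals and parameters here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                    # sampling rate (Hz), assumed
n = 60 * fs                                  # one minute of synthetic data
feature = rng.standard_normal(n)             # surrogate continuous speech feature
true_trf = np.exp(-np.arange(30) / 5.0)      # made-up 30-sample (30 ms) response kernel
recording = np.convolve(feature, true_trf)[:n] + 5.0 * rng.standard_normal(n)

# Lagged design matrix: column k holds the feature delayed by k samples.
lags = 30
X = np.column_stack([np.roll(feature, k) for k in range(lags)])
X[:lags, :] = 0.0                            # discard wrap-around samples

lam = 1e2                                    # ridge regularisation strength, assumed
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ recording)

print("correlation with the true kernel: %.2f" % np.corrcoef(trf_hat, true_trf)[0, 1])
```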
Dr Tobias Reichenbach, Imperial College London, UK
Dr. Tobias Reichenbach is a Senior Lecturer (US equivalent: Associate Professor) in the Department of Bioengineering at Imperial College London. He joined Imperial in 2013 after postdoctoral training in computational neuroscience and the biophysics of hearing with Dr. A. J. Hudspeth at the Rockefeller University in New York. He graduated in 2008 with highest honors from the Ludwig-Maximilians University in Munich, Germany, where he researched theoretical aspects of non-equilibrium pattern formation and statistical physics in the group of Dr. E. Frey. Dr. Reichenbach is interested in problems at the interface of physics and biology. He uses ideas from theoretical physics, mathematics, computer science and experimental neurobiology to investigate how biological systems function, with a particular focus on the auditory system. Dr. Reichenbach aims to apply his findings in the development of novel, biologically inspired technology.
11:50 - 12:00 | Discussion
12:00 - 13:00 | Lunch
13:00 - 13:30 | Auditory neural circuits in the fly brain
How does the brain process acoustic information? Mapping the auditory neural circuits is indispensable for answering this question. The fruit fly is ideally suited for such tasks, with its small brain size and a rich repertoire of genetic tools. Moreover, flies use acoustic signals to communicate with each other. Toward a comprehensive identification of auditory neural circuits in the fly brain, this study systematically identified the auditory sensory neurons and their downstream neurons. The anatomic and functional analyses revealed frequency segregation at the first layer of the auditory pathway and the convergence of frequency information in the subsequent downstream pathways. Second-order auditory neurons have intensive binaural interactions, raising the possibility that the fly is capable of comparing acoustic signals detected at the left and right ears. Based on these analyses, this research established the first comprehensive map of primary and secondary auditory neurons in the fly brain, which are characterized by frequency segregation and convergence, binaural interaction, and multimodal pathways. Professor Azusa Kamikouchi
Professor Azusa Kamikouchi
Professor Kamikouchi is fascinated by the mystery of the brain and has a particularly strong interest in the auditory system. One of the main questions driving her research is how acoustic signals are detected, processed, and integrated in the brain. The fruit fly is an ideal model organism for such a task, because of its sophisticated genetic tools to analyze neurons and manipulate neural circuits in the brain. Professor Kamikouchi started a project in 2002 to unravel the anatomic organization of the auditory system of fruit flies with Professor Kei Ito. After spending three years exploring the function of auditory sensory neurons with Professor Martin Göpfert at the University of Cologne (now in Göttingen), she returned to Japan and moved to Nagoya University in 2011 as a professor of neuroscience.
13:30 - 13:40 | Discussion
13:40 - 14:10 | Neural mechanisms for dynamic acoustic communication in flies
Social interactions require continually adjusting behavior in response to sensory feedback. For example, when having a conversation, sensory cues from a partner (e.g., sounds or facial expressions) affect speech patterns in real time. Human speech signals, in turn, are the sensory cues that modify a partner’s actions. What are the underlying computations and neural mechanisms that govern these interactions? To address these questions, the lab studies the acoustic communication system of Drosophila. The fly nervous system has the advantage of being relatively simple, with a wealth of genetic tools to interrogate it. Importantly, Drosophila acoustic behaviors are highly quantifiable and robust. During courtship, males produce time-varying songs via wing vibration, while females arbitrate mating decisions. This study discovered that, rather than being a stereotyped fixed action sequence, male song structure and intensity are continually sculpted by interactions with the female, over timescales ranging from tens of milliseconds to minutes, and this research is mapping the underlying circuits and computations. This research has also developed methods to relate song representations in the female brain to changes in her behavior, across multiple timescales. The focus on natural acoustic signals, either as the output of the male nervous system or as the input to the female nervous system, provides a powerful, quantitative handle for studying the basic building blocks of communication. Dr Mala Murthy
14:10 - 14:20 | Discussion
14:20 - 14:50 | Reconciling perceptual and physiological measures of frequency selectivity in the mammalian auditory system
Dr Christian Sumner, University of Nottingham, UK
14:50 - 15:00 | Discussion
15:20 - 15:50 | Neural codes for communication signals and sequences in the primate brain
Unlike songbirds, humans and a few other species, many animals are not thought to be able to combine their vocalizations into structured sequences. Nonetheless, it remains possible that many of these animals can recognize ordering relationships in sequences generated by ‘artificial grammars’. This talk will explore how understanding the extent of these hidden receptive learning abilities could clarify the neurobiological origins of language. The talk first gives an overview of behavioural results on structured sequence learning in three primate species: marmosets, macaques and humans. It then focuses on brain imaging results identifying evolutionarily conserved frontal brain regions in macaques and humans involved in predicting which events occur next in a sequence. Finally, results are presented from a new study involving comparative intracranial recordings in humans and monkeys processing the sequences. Overall, the findings indicate that human and non-human primates possess an evolutionarily conserved neural network involved in processing structured auditory input, and they provide hints about how the human brain became specialised for language. Professor Christopher Petkov
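The notion of recognising ordering relationships in an artificial grammar can be illustrated with a toy example. The grammar, exposure set and acceptance rule below are invented for illustration and are not the paradigm or analyses used in the studies described.

```python
# A toy 'artificial grammar': grammatical sequences follow the rule (A B)^n, a
# listener model records which element-to-element transitions occur during
# exposure, and test sequences containing an unseen transition are flagged.

def transitions(seq):
    """The set of adjacent element pairs occurring in a sequence."""
    return set(zip(seq, seq[1:]))

def generate(n_pairs):
    """A grammatical sequence: n_pairs repetitions of the pair A B."""
    return ["A", "B"] * n_pairs

# Exposure phase: accumulate the transitions seen in grammatical sequences.
legal = set()
for n in (2, 3, 4):
    legal |= transitions(generate(n))

def is_grammatical(seq):
    """Accept a test sequence if every adjacent pair was seen during exposure."""
    return all(pair in legal for pair in transitions(seq))

print(is_grammatical(["A", "B", "A", "B"]))   # True: follows the exposed rule
print(is_grammatical(["A", "A", "B", "B"]))   # False: the pair ('A', 'A') never occurred
```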
Professor Christopher Petkov
Professor Christopher Petkov obtained his PhD from the University of California, Davis, USA, and then conducted a postdoctoral fellowship at the Max Planck Institute for Biological Cybernetics, Germany. In 2008, he founded the Laboratory of Comparative Neuropsychology at Newcastle University to advance the understanding of the neurobiology of human communication. The laboratory’s key objective is to provide the basic science foundation needed to understand cognitive abilities that underpin human language and communication, using neurobiological technologies that bridge brain neuroimaging with neuronal level insights in animal models. The animal work in the laboratory is rooted in the notion that advances in animal welfare and scientific discovery can co-occur. Professor Petkov is currently a Wellcome Trust Investigator and European Research Council Consolidator Grant holder.
15:50 - 16:00 | Discussion
16:00 - 16:30 | Adaptive coding in the central auditory system
If we are to understand how activity in the brain gives rise to auditory perception and guides behaviour, it is essential to consider the way in which neural processing is shaped both by the sensory and behavioural context in which sounds occur and by lifelong changes in experience that refine or degrade perceptual abilities as a result of learning or hearing loss. This talk will consider the neural circuits and strategies that enable the auditory system to adjust to the statistics of the auditory scene, as well as to longer lasting changes in inputs that result from hearing impairments. In addition to providing insights into the adaptive capabilities of the auditory system, findings indicate that different forms of plasticity may represent therapeutic targets for restoring perceptual abilities following hearing loss. Professor Andrew King FMedSci FRS, University of Oxford, UK
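One generic way in which neural processing can adjust to the statistics of the auditory scene is divisive normalisation by a running estimate of stimulus variance. The sketch below implements that textbook mechanism with made-up numbers; it is not a model from the talk.

```python
# A unit that divisively rescales its input by a leaky estimate of recent
# stimulus variance, so its output range tracks the statistics of the scene.
# Generic textbook normalisation with assumed parameters.
import numpy as np

rng = np.random.default_rng(1)
stim = np.concatenate([rng.normal(0.0, 1.0, 5000),     # 'quiet' epoch
                       rng.normal(0.0, 10.0, 5000)])   # 'loud' epoch, 10x larger inputs
tau = 200.0                                            # adaptation time constant (samples), assumed

var_est = 1.0
out = np.empty_like(stim)
for t, s in enumerate(stim):
    var_est += (s ** 2 - var_est) / tau                # leaky estimate of recent variance
    out[t] = s / np.sqrt(var_est)                      # divisively normalised response

print("output std, quiet epoch: %.2f" % out[1000:5000].std())
print("output std, loud epoch:  %.2f" % out[6000:].std())   # similar, despite 10x input change
```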
Professor Andrew King FMedSci FRS, University of Oxford, UK
Andrew King is a Wellcome Principal Research Fellow and Professor of Neurophysiology at the University of Oxford and the Director of the Centre for Integrative Neuroscience in the Department of Physiology, Anatomy and Genetics. He studied physiology at King’s College London and obtained his PhD from the National Institute for Medical Research. He has worked at the University of Oxford since then, with a spell at the Eye Research Institute in Boston, and his research has been supported by fellowships from the SERC, the Lister Institute of Preventive Medicine and the Wellcome Trust. Andrew was awarded the Wellcome Prize in Physiology and is a Fellow of the Royal Society, the Academy of Medical Sciences, and the Physiological Society. His group studies how the auditory brain adapts to the rapidly changing statistics that characterize real-life soundscapes, integrates other sensory and motor-related signals, and learns to compensate for the altered auditory inputs resulting from hearing impairment.
16:30 - 16:40 | Discussion
16:40 - 17:00 | Plenary and final comments