This page is archived

Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.

From sender to receiver: physics and sensory ecology of hearing in insects and vertebrates

04 - 05 December 2017 09:00 - 17:00

Theo Murphy international scientific meeting organised by Dr Andrei Kozlov and Dr Joerg Albert.

Hearing evolved independently in insects and vertebrates, and the gross anatomy of auditory systems can look very different indeed. For example, grasshoppers have ears on their legs. The biophysics of signal transduction in the ear and the neural processing of sound in the brain, however, share basic similarities across species. This meeting aims to explore and discuss these fundamental principles.

More information on the speakers and programme will be available soon. Recorded audio of the presentations will be available on this page after the meeting has taken place.

Attending the event

This is a residential conference, which allows for increased discussion and networking.

  • Free to attend
  • Advance registration essential (more information on registration will be available soon)
  • Catering and accommodation available to purchase during registration

Enquiries: contact the Scientific Programmes team

Schedule

09:00 - 09:05 Welcome by the Royal Society
09:05 - 09:30 Mechanical tuning of the hair bundle

In the vertebrate ear, hearing starts with the deflection of the hair bundle. Remarkably, the hair bundle can oscillate spontaneously, providing frequency-selective amplification of weak inputs. This talk will first discuss how gating of the ion channels that mediate mechano-electrical transduction shapes hair-bundle oscillations. Although the activation time of the transduction channels is two orders of magnitude shorter than the oscillation period, we find that channel kinetics is a key determinant of the oscillation waveform and frequency. In auditory organs, morphological gradients suggest that the hair bundle may operate as a tuning fork. The second part of the talk will show that the stiffness and tension of the gating springs that pull on the transduction channels increase from the low-frequency to the high-frequency end of the rat cochlea. These results reveal that the transduction apparatus of the hair cell is mechanically tuned according to the cell's characteristic frequency.

Dr Pascal Martin, Institut Curie, France
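
For readers unfamiliar with the gating-spring picture referred to above, the following Python sketch illustrates the textbook two-state description of hair-bundle mechanics. It is not Dr Martin's model: the channel number N, gating swing d, pivot stiffness K_sp and the two gating-spring stiffnesses are invented for illustration. The point is only that stiffer gating springs steepen the bundle's force-displacement curve, in line with the idea that the transduction apparatus is mechanically tuned along the cochlea.

    # A minimal sketch (not the speaker's model) of the classic gating-spring
    # description of hair-bundle mechanics, assuming a two-state MET channel
    # with a Boltzmann open probability. All parameter values are illustrative.

    import numpy as np

    kT = 4.1e-21          # thermal energy at ~300 K (J)
    N = 50                # number of transduction channels (assumption)
    d = 7e-9              # gating swing per channel (m, assumption)
    K_sp = 0.6e-3         # stereociliary pivot stiffness (N/m, assumption)

    def open_probability(x, K_gs, x0=0.0):
        """Boltzmann open probability of the MET channel at bundle deflection x."""
        z = K_gs * d / N   # per-channel gating force if N springs share stiffness K_gs
        return 1.0 / (1.0 + np.exp(-z * (x - x0) / kT))

    def bundle_force(x, K_gs):
        """Force needed to hold the bundle at deflection x (gating-spring model)."""
        return K_gs * (x - d * open_probability(x, K_gs)) + K_sp * x

    # Compare a 'low-frequency' and a 'high-frequency' bundle by increasing the
    # gating-spring stiffness, as the abstract reports along the rat cochlea.
    x = np.linspace(-100e-9, 100e-9, 2001)      # deflections of +/- 100 nm
    for label, K_gs in [("apical (low f)", 0.5e-3), ("basal (high f)", 2.0e-3)]:
        F = bundle_force(x, K_gs)
        stiffness = np.gradient(F, x)           # slope of the force-displacement curve
        print(f"{label}: minimum bundle stiffness = {stiffness.min()*1e3:.2f} mN/m")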

09:30 - 09:40 Discussion
09:40 - 10:10 Nonlinear dynamics of inner ear hair cells

Hair cells of the inner ear exhibit a highly nonlinear response to external signals across a broad range of stimuli, and this nonlinearity has been shown to be crucial to the acuity of hearing. The dynamics of the hair bundles have furthermore been shown to be active, exhibiting motility in the absence of input. To understand the physical mechanisms behind the sensitivity of auditory detection, we explore how hair bundles synchronize their innate oscillations to external stimuli. We demonstrate experimentally that bundles can phase-lock to a broad range of frequencies, in various mode-locking ratios. Further, we demonstrate the presence of chaos in the underlying dynamics of active bundles and explore its impact on the sensitivity of detection. We then explore the interaction between active bundle mechanics and the electrical circuit formed by somatic ion channels, and measure the impact of this coupled system on the overall detection performed by the hair cell. Finally, we show that innate oscillation is a ubiquitous phenomenon that occurs in multiple end organs.

Professor Dolores Bozovic, UCLA, USA
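
The phase locking described in this abstract can be caricatured with the Adler equation, the standard reduction for an oscillator entrained by a periodic drive. The sketch below is illustrative only: it is not Professor Bozovic's model, and the 10 Hz bundle frequency and coupling strength are invented. It simply shows 1:1 locking inside a finite detuning range and phase drift outside it; higher-order m:n mode locking and chaos require the full nonlinear bundle dynamics.

    # A minimal sketch (not the lab's model): the Adler phase equation, the
    # textbook caricature of how a spontaneously oscillating bundle can
    # phase-lock to a periodic stimulus. Frequencies and coupling are invented.

    import numpy as np

    def relative_phase(f_bundle, f_drive, coupling, T=30.0, dt=1e-3):
        """Integrate d(phi)/dt = 2*pi*(f_bundle - f_drive) - coupling*sin(phi)."""
        detuning = 2 * np.pi * (f_bundle - f_drive)
        phi = 0.0
        trace = np.empty(int(T / dt))
        for k in range(trace.size):
            phi += dt * (detuning - coupling * np.sin(phi))
            trace[k] = phi
        return trace

    # 1:1 locking occurs when |detuning| <= coupling (the edge of the Arnold
    # tongue); otherwise the relative phase drifts and accumulates cycles.
    for f_drive in [10.2, 11.0, 14.0]:                 # Hz, bundle at 10 Hz
        phi = relative_phase(10.0, f_drive, coupling=2 * np.pi * 0.5)
        cycles = (phi[-1] - phi[0]) / (2 * np.pi)
        state = "locked" if abs(cycles) < 1 else "drifting"
        print(f"drive {f_drive:4.1f} Hz: relative phase moved {cycles:+7.1f} cycles ({state})")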

10:10 - 10:20 Discussion
10:20 - 10:40 Coffee Break
10:40 - 11:10 MET channel blockers: fundamental insights and potential for otoprotection

The mechano-electrical transducer (MET) channels of sensory hair cells are nonselective cation channels. They have a high permeability but low conductance for calcium ions, which regulate hair-cell adaptation. The MET channel is permeable to large polycations, which block the channel by competing with Ca2+ for binding sites in the permeation pore, but also enter the channel when the hair cells are hyperpolarized. Aminoglycoside antibiotics such as gentamicin, which have loss of hearing and balance as a side effect, enter hair cells by this route. Mutations in TMC proteins result in quantitative changes in the interaction of aminoglycosides with the channel. This suggests that TMC proteins are pore-forming subunits of the MET channel. We are studying various other polycations. Some of these compete with aminoglycosides and provide protection from ototoxicity, while others reveal a gradient in size of the MET channel pore along the length of the mammalian cochlea.

Professor Corné Kros, University of Sussex, UK

11:10 - 11:20 Discussion
11:20 - 11:50 Multiple manifestations of adaptations in mammalian auditory hair cells are driven by stimulus modality

Fast adaptation of hair-cell mechanotransduction currents manifests itself differently depending on the mode of stimulation. Recent evidence demonstrating a non-calcium-dependent component of fast adaptation was challenged as a stimulus artifact because fluid-jet responses differed from those obtained with stiff-probe stimulation. Our data demonstrate the biological underpinnings of fast adaptation: eliminating PIP2 from the membrane eliminates fast adaptation. Fluid-jet stimulation did not reveal a time-dependent component of adaptation, yet steady-state adaptation persisted at positive potentials. Shaping the stimulus to be fast revealed a fast component of adaptation in the current response that persisted at positive potentials.

Professor Anthony Ricci, Stanford University, USA

13:00 - 13:30 From hearing to listening: the role of auditory cortex in making sense of sounds

This talk will discuss two recent lab studies that have demonstrated that neural activity in auditory cortex does not merely reflect sound acoustics. The first study sought to determine the co-ordinate frame in which spatial tuning exists in auditory cortex by recording from the auditory cortex of freely moving ferrets and reconstructing spatial receptive fields in either head-centered or world-centered co-ordinate frames. While the majority of neurons (~80%) encode sound location relative to the head, a minority (20%) of neurons represent the location of a sound source in the world, independently of the orientation of the animal. The second study will highlight the impact that visual signals can have on auditory cortical activity and present data suggesting that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis.

Dr Jennifer Bizley, University College London, UK
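
As a toy illustration of the head-centered versus world-centered distinction made in the first study, the sketch below simulates two hypothetical neurons, one tuned in each reference frame, and shows that each is only well predicted when the sound angle is expressed in the matching frame. The tuning curves, angles and the gaussian_tuning helper are invented; this is not the study's analysis.

    # A minimal sketch, not the study's method: distinguishing head-centered
    # from world-centered tuning with two toy neurons while the animal turns
    # its head. All tuning parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def gaussian_tuning(angle_deg, preferred_deg, width_deg=30.0):
        """Firing rate of a toy neuron with Gaussian spatial tuning (a.u.)."""
        d = (angle_deg - preferred_deg + 180) % 360 - 180   # wrapped angular difference
        return np.exp(-0.5 * (d / width_deg) ** 2)

    # Random sound-source azimuths in the world and random head orientations.
    world_az = rng.uniform(-180, 180, 5000)
    head_dir = rng.uniform(-180, 180, 5000)
    head_az = (world_az - head_dir + 180) % 360 - 180       # source relative to the head

    rate_head_cell = gaussian_tuning(head_az, preferred_deg=45)    # tuned in head frame
    rate_world_cell = gaussian_tuning(world_az, preferred_deg=45)  # tuned in world frame

    # Correlate each cell's rate with tuning predictions built in either frame.
    for name, rate in [("head-centered cell", rate_head_cell),
                       ("world-centered cell", rate_world_cell)]:
        r_head = np.corrcoef(rate, gaussian_tuning(head_az, 45))[0, 1]
        r_world = np.corrcoef(rate, gaussian_tuning(world_az, 45))[0, 1]
        print(f"{name}: fit in head frame r={r_head:.2f}, in world frame r={r_world:.2f}")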

13:30 - 13:40 Discussion
13:40 - 14:10 How the human brain detects patterns in sound sequences

This talk will present ongoing work in Professor Chait's lab using brain imaging (EEG, MEG and fMRI), behavioural and eye-tracking experiments to reveal how human listeners discover patterns and statistical regularities in rapid sound sequences. Sensitivity to patterns is fundamental to sensory processing, in particular in the auditory system, and is a major component of the influential ‘predictive coding’ theory of brain function. Supported by growing experimental evidence, the predictive coding framework suggests that perception is driven by a mechanism of inference based on an internal model of the signal source. However, a key element of this theory, the process through which the brain acquires this model and its neural underpinnings, remains poorly understood. The experiments focus on this missing link. The research approach, based on measuring behavioural and brain responses to rapid tone-pip sequences governed by specifically controlled rules along a variety of feature dimensions, enables us to address (1) how the brain discovers patterns in sound sequences, (2) which neural mechanisms are involved, and (3) to what degree the process is automatic or susceptible to the attentional state and behavioural goals of the listener.

Professor Maria Chait, University College London, UK
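
As a highly simplified illustration of regularity detection in rapid tone-pip sequences, the sketch below generates a sequence that switches from random pips to a repeating cycle and flags the regularity once one full cycle repeats exactly. The stimulus parameters and the detect_regularity function are invented for illustration and are not the lab's stimuli or models; it merely shows that even an idealised observer needs roughly two cycles of the pattern before the regularity can be detected.

    # A minimal sketch in the spirit of rapid tone-pip paradigms (not the
    # lab's actual stimuli or models): a sequence switches from random pips
    # to a repeating cycle, and a naive observer flags the regularity once a
    # whole cycle repeats exactly.

    import numpy as np

    rng = np.random.default_rng(7)
    pool = np.arange(20)                                # 20 possible pip frequencies (indices)
    cycle = rng.choice(pool, size=10, replace=False)    # the regular pattern (REG)

    random_part = rng.choice(pool, size=60)             # random (RAND) section
    regular_part = np.tile(cycle, 6)                    # REG section: cycle repeated
    sequence = np.concatenate([random_part, regular_part])
    transition = len(random_part)                       # ground-truth change point

    def detect_regularity(seq, cycle_len=10):
        """Return the first index at which the last cycle_len pips exactly
        repeat the cycle_len pips before them (a crude, idealised detector)."""
        for t in range(2 * cycle_len, len(seq) + 1):
            if np.array_equal(seq[t - cycle_len:t], seq[t - 2 * cycle_len:t - cycle_len]):
                return t
        return None

    detected = detect_regularity(sequence)
    print(f"transition at pip {transition}, detected at pip {detected} "
          f"({detected - transition} pips after the change)")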

14:10 - 14:20 Discussion
14:20 - 14:50 Auditory plasticity to recognise species-specific vocal categories in mice

Both senders and receivers in any species-specific communication system have presumably evolved mechanisms to match the production of biologically meaningful signals to the sensory “filters” for recognizing their importance. How such “sign” stimuli release so-called fixed action patterns is largely thought to be implemented through “innate releasing mechanisms.” Importantly though, such mechanisms do not necessarily imply static systems insensitive to experience and learning, as Konrad Lorenz’s classic imprinting experiments demonstrated. However, the mechanisms for sensory plasticity to support such behaviorally important learning during natural communication are not well understood. Here, I describe studies in mice that have exploited a natural ultrasonic communication system between mouse pups and their mothers to reveal novel forms and mechanisms of neural plasticity within the auditory system, particularly at the auditory cortical level, to support the recognition of infant vocalisations, an ethologically significant function of the auditory system of adult females.

Dr Robert C Liu, Emory University, USA

14:50 - 15:00 Discussion
15:00 - 15:20 Tea Break
15:20 - 15:50 From song to synapse: the neurobiology of vocal communication

The interplay between hearing and vocalization is critical to vocal communication and vocal learning.  Recent research using both songbirds and mice has provided useful insights into the neural circuits and mechanisms that mediate this sensorimotor interplay.  I will discuss recent progress in understanding how auditory and motor systems interact to enable vocal learning and communication.

Professor Richard Mooney, Duke University, USA

16:00 - 16:30 Song recognition in crickets meets evolution: phenotypic diversity from a single mechanism

Male crickets produce a species-specific song signal built from pulse trains that attracts conspecific females. Behavioural tests with females of different species demonstrate rather diverse phonotactic preference profiles, with selectivity for different temporal features such as pulse rate, pulse duration or pulse duty cycle. A computational model based on a template-matching mechanism can account for this phenotypic diversity, and for transitions between preference profiles, by small changes in the processing algorithm based on the relative amplitude and timing of excitation and inhibition. A small network of auditory neurons in the cricket’s brain implements pulse-rate recognition through a coincidence detector based on a delay line and a post-inhibitory rebound. A specific modelling approach based on linear/nonlinear models of this network demonstrates that all the computational components of the brain neurons are required for pulse-rate recognition. Our combined approach illustrates how the neuronal network can account for rapid transformations between phenotypic preference profiles during evolution.

Professor Matthias Hennig, Humboldt-Universität zu Berlin, Germany
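
As a rough illustration of the delay-line idea mentioned in the abstract, the sketch below multiplies a song envelope with a copy of itself delayed by a fixed interval and integrates the product, a crude stand-in for the coincidence detection performed by the cricket's brain neurons (the real mechanism relies on a post-inhibitory rebound rather than a literal delayed copy). The 30 ms delay, pulse durations and tested periods are invented; the point is only that the output peaks when the pulse period matches the internal delay.

    # A minimal sketch (not the published network model) of delay-line
    # coincidence detection for pulse-rate recognition: the response is
    # largest when the inter-pulse interval matches the internal delay.
    # All numbers are illustrative.

    import numpy as np

    dt = 1e-4                                  # 0.1 ms time step
    DELAY = 0.030                              # internal delay of 30 ms (assumption)

    def pulse_train(period, pulse_dur=0.005, T=1.0):
        """Binary envelope of a song with the given pulse period and duration."""
        t = np.arange(0, T, dt)
        return ((t % period) < pulse_dur).astype(float)

    def coincidence_response(song):
        """Multiply the direct input with a copy delayed by DELAY and integrate."""
        shift = int(DELAY / dt)
        delayed = np.concatenate([np.zeros(shift), song[:-shift]])
        return float(np.sum(song * delayed) * dt)

    # Sweep the pulse period: the detector is tuned to periods near the delay.
    for period_ms in [18, 24, 30, 42, 60]:
        r = coincidence_response(pulse_train(period_ms / 1000.0))
        print(f"pulse period {period_ms:3d} ms -> coincidence output {r:.4f}")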

16:30 - 17:00 Discussion
09:00 - 09:30 Acoustic communication and evolution in Drosophila: roles for a nuclear receptor and its regulon

All behaviour is guided, or restricted, by the senses. Sense organs have evolved in multiple ways to extract and pre-process information from the external world. However, molecular mechanisms of sense organ specification and their evolutionary origins have remained unclear. We have used closely-related Drosophila species to explore how ears can contribute to evolution - and how evolution, in turn, has shaped ears.

In flies (Diptera), hearing is mediated by Johnston’s Organ (JO) neurons in the second antennal segment (1). In Drosophilids, the spectral tuning of the flies’ antennal ears correlates with the spectral composition of song pulses produced by conspecific males (2). Laser-Doppler vibrometric analysis of sound receiver mechanics and extracellular recordings of compound action potentials from the antennal nerve show that the species-specific auditory tuning is partly the result of variations in the molecular modules for mechanotransduction in JO neurons.

RNA-Seq based transcriptomics of the JOs from six closely-related Drosophila species, combined with predictive bioinformatics (i-cisTarget and iRegulon), identified a particular type of transcription factor from the nuclear hormone receptor family as an important contributor to inter-specific variation in Drosophilid ears.

Nuclear hormone receptor proteins are also required for normal sex organ development. The investigated mutants showed sexually dimorphic defects in auditory function (both with regard to auditory mechanics and auditory nerve responses). On the sender side of Drosophila acoustic communication, in turn, mutant males displayed severe defects in song production (both in their propensity to produce songs and with regard to song structure). The duality of its contributions presents this nuclear receptor gene as a potential substrate for genetic coupling in the Drosophila acoustic communication system.

Dr Joerg Albert

09:30 - 09:40 Discussion
09:40 - 10:10 Making an effort to listen: mechanical amplification by ion channels and myosin motors in hair cells of the inner ear

Human hearing is enhanced by an active process that amplifies the ear's mechanical inputs several hundredfold, sharpens frequency tuning to allow the discrimination of tones differing in frequency by less than 0.2 %, and compresses six orders of magnitude in the amplitude of sounds into only two orders of magnitude in neural output. In addition, spontaneous otoacoustic emissions emerge from ears in a very quiet environment, an indication that the active process can be so exuberant as to become unstable. Cooperativity between mechanoelectrical-transduction channels confers negative stiffness on the hair bundle, which together with myosin-based adaptation motors elicits a dynamical instability that underlies the active process. Experiments on individual hair bundles indicate that the bundle's operation near this instability, a Hopf bifurcation, accounts for the four characteristics of the active process.

Professor Jim Hudspeth, The Rockefeller University, USA
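
The compressive nonlinearity mentioned in the abstract follows directly from operation at a Hopf bifurcation: for the normal form driven at its characteristic frequency, the response amplitude grows as the cube root of the forcing, so six decades of input collapse into two decades of output. The short numerical illustration below works through this standard textbook result; it is not Professor Hudspeth's data.

    # A worked illustration of a standard result: at a Hopf bifurcation driven
    # at its characteristic frequency, the normal-form response amplitude r
    # obeys F = r**3, so r grows as F**(1/3). Six decades of stimulus force are
    # therefore compressed into two decades of response, as the abstract notes.

    import numpy as np

    forces = np.logspace(-3, 3, 7)        # six orders of magnitude of input force
    amplitudes = forces ** (1.0 / 3.0)    # cube-root response at the bifurcation

    for F, r in zip(forces, amplitudes):
        print(f"force {F:8.3f} -> amplitude {r:7.3f}")

    in_range = forces[-1] / forces[0]
    out_range = amplitudes[-1] / amplitudes[0]
    print(f"input range {in_range:.0e} compressed to output range {out_range:.0e}")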

10:10 - 10:20 Discussion
10:20 - 10:40 Coffee
10:40 - 11:10 Novel synaptic transmission from vestibular hair cells to calyceal afferents serves fast reflexes in amniotes

The vestibular type I hair cell and its distinctive calyceal synapse are found only in the inner ears of reptiles, birds and mammals.  Like the cochlea, the type I – calyx synapse may represent adaptations to life on land.  Over the past 20 years, evidence has accrued that these unusual-looking synapses are also functionally remarkable, featuring not just chemical (quantal) transmission of vesicle-bound glutamate from ribbon synapses but also a form of non-quantal transmission that depends on currents through ion channels in dense arrays on presynaptic (hair cell) and postsynaptic (calyceal) membranes.  Quantal and non-quantal transmission filter the transmitted mechanosensory signal in distinct ways and have been recorded both together and separately, suggesting unexpectedly rich possibilities for shaping vestibular inputs to the brain.

Professor Ruth Anne Eatock

11:10 - 11:20 Discussion
11:20 - 11:50 The role of the auditory brainstem in understanding speech in challenging listening conditions

Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation already occurs in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. This talk describes a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has high ecological validity. The research employs this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and shows that the brainstem response is consistently modulated by attention.

Dr Tobias Reichenbach, Imperial College London, UK
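
One common way to relate a neural recording to continuous, non-repeating speech is regularised lagged regression (a "temporal response function"); the sketch below demonstrates the idea on synthetic data. It should not be read as the method described in the talk: the sampling rate, the speech feature, the noise level and the estimate_trf helper are all assumptions made for illustration.

    # A minimal sketch, not the paper's method: recovering a response kernel to
    # a continuous stimulus by ridge regression on lagged copies of a speech
    # feature. Data here are entirely synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                                   # sampling rate in Hz (assumption)
    n = 60 * fs                                 # one minute of synthetic data

    feature = rng.standard_normal(n)            # stand-in for a speech feature
    true_kernel = np.exp(-np.arange(30) / 5.0) * np.sin(np.arange(30) / 2.0)
    recording = np.convolve(feature, true_kernel)[:n] + 5 * rng.standard_normal(n)

    def estimate_trf(x, y, n_lags=30, ridge=1e3):
        """Ridge regression of y on lagged copies of x; returns the response kernel."""
        X = np.column_stack([np.roll(x, lag) for lag in range(n_lags)])
        X[:n_lags, :] = 0                       # discard wrapped-around samples
        w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ y)
        return w

    kernel = estimate_trf(feature, recording)
    corr = np.corrcoef(kernel, true_kernel)[0, 1]
    print(f"correlation between estimated and true response kernel: {corr:.2f}")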

11:50 - 12:00 Discussion
12:00 - 13:00 Lunch
13:00 - 13:30 Auditory neural circuits in the fly brain

How does the brain process acoustic information? Mapping the auditory neural circuits is indispensable for answering this question. The fruit fly is ideally suited for such tasks, with its small brain and a rich repertoire of genetic tools. Moreover, fruit flies use acoustic signals to communicate with each other. Toward a comprehensive identification of auditory neural circuits in the fly brain, this study systematically identified the auditory sensory neurons and their downstream neurons. Anatomical and functional analyses revealed frequency segregation at the first layer of the auditory pathway and the convergence of frequency information in the subsequent downstream pathways. Second-order auditory neurons show intensive binaural interactions, raising the possibility that the fly is capable of comparing acoustic signals detected at the left and right ears. Based on these analyses, this research established the first comprehensive map of primary and secondary auditory neurons in the fly brain, characterized by frequency segregation and convergence, binaural interaction, and multimodal pathways.

Professor Azusa Kamikouchi

13:30 - 13:40 Discussion
13:40 - 14:10 Neural mechanisms for dynamic acoustic communication in flies

Social interactions require continually adjusting behavior in response to sensory feedback. For example, when having a conversation, sensory cues from a partner (e.g., sounds or facial expressions) affect speech patterns in real time. Human speech signals, in turn, are the sensory cues that modify a partner’s actions. What are the underlying computations and neural mechanisms that govern these interactions? To address these questions, the lab studies the acoustic communication system of Drosophila. The fly nervous system has the advantage of being relatively simple, with a wealth of genetic tools to interrogate it. Importantly, Drosophila acoustic behaviors are highly quantifiable and robust. During courtship, males produce time-varying songs via wing vibration, while females arbitrate mating decisions. This work discovered that, rather than being a stereotyped fixed action sequence, male song structure and intensity are continually sculpted by interactions with the female, over timescales ranging from tens of milliseconds to minutes, and the underlying circuits and computations are now being mapped. The lab has also developed methods to relate song representations in the female brain to changes in her behavior, across multiple timescales. The focus on natural acoustic signals, either as the output of the male nervous system or as the input to the female nervous system, provides a powerful, quantitative handle for studying the basic building blocks of communication.

Dr Mala Murthy

14:10 - 14:20 Discussion
14:20 - 14:50 Reconciling perceptual and physiological measures of frequency selectivity in the mammalian auditory system

Dr Christian Sumner, University of Nottingham, UK

14:50 - 15:00 Discussion
15:20 - 15:50 Neural codes for communication signals and sequences in the primate brain

Many animals are not thought to be able to combine their vocalizations into structured sequences, as songbirds, humans and a few other species do. Nonetheless, it remains possible that many of these animals are able to recognize ordering relationships in sequences generated by ‘artificial grammars’. This talk will explore how understanding the extent of these hidden receptive learning abilities could clarify the neurobiological origins of language. It will first give an overview of behavioural results on structured sequence learning in three primate species: marmosets, macaques and humans. It will then focus on brain imaging results identifying evolutionarily conserved frontal brain regions in macaques and humans involved in predicting events that occur next in a sequence. Finally, results are presented from a new study involving comparative intracranial recordings in humans and monkeys processing the sequences. Overall, the findings indicate that human and non-human primates possess an evolutionarily conserved neural network involved in processing structured auditory input, and they provide hints on how the human brain became differentiated for language.

Professor Christopher Petkov
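
For readers unfamiliar with the term, an ‘artificial grammar’ is simply a set of rules that licenses some orderings of elements and forbids others. The sketch below generates exposure sequences from a small, invented finite-state grammar and classifies test sequences as grammatical or rule-violating; the transition rules and helper functions are hypothetical and are not the grammars used in the studies described above.

    # A minimal sketch, not the grammars used in these studies: sequences are
    # generated from a small finite-state ("artificial") grammar, and novel
    # test sequences are classified by whether every transition is licensed.

    import random

    # Hypothetical transition rules: each element may be followed only by these.
    GRAMMAR = {
        "START": ["A", "C"],
        "A": ["B", "D"],
        "B": ["D", "END"],
        "C": ["B", "C"],
        "D": ["C", "END"],
    }

    def generate(rng, max_len=8):
        """Sample one grammatical sequence by walking the transition graph."""
        seq, state = [], "START"
        while len(seq) < max_len:
            state = rng.choice(GRAMMAR[state])
            if state == "END":
                break
            seq.append(state)
        return seq

    def is_grammatical(seq):
        """True if every transition in the sequence is allowed by GRAMMAR."""
        state = "START"
        for element in seq:
            if element not in GRAMMAR.get(state, []):
                return False
            state = element
        return True

    rng = random.Random(3)
    for _ in range(3):
        print("exposure sequence:", generate(rng))

    for test in [["A", "B", "D"], ["A", "C", "B"], ["C", "C", "B", "D"]]:
        print(test, "->", "grammatical" if is_grammatical(test) else "violation")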

15:50 - 16:00 Discussion
16:00 - 16:30 Adaptive coding in the central auditory system

If we are to understand how activity in the brain gives rise to auditory perception and guides behaviour, it is essential to consider the way in which neural processing is shaped both by the sensory and behavioural context in which sounds occur and by lifelong changes in experience that refine or degrade perceptual abilities as a result of learning or hearing loss. This talk will consider the neural circuits and strategies that enable the auditory system to adjust to the statistics of the auditory scene, as well as to longer lasting changes in inputs that result from hearing impairments. In addition to providing insights into the adaptive capabilities of the auditory system, findings indicate that different forms of plasticity may represent therapeutic targets for restoring perceptual abilities following hearing loss.

Professor Andrew King FMedSci FRS, University of Oxford, UK

16:30 - 16:40 Discussion
16:40 - 17:00 Plenary and final comments