In 1867 Hermann Helmholtz, physiologist, keen musician and soon to become professor of physics in Berlin, added an appendix to the second edition of his text “On the Sensations of Tone”#. He suggested that the cochlea of the inner ear could be modelled as a miniature piano. Each string – for he saw fibres lying in parallel all along the coiled inner ear structure – could, like those of a musical instrument, be conceived to resonate at a particular frequency. Short fibres would vibrate only at high frequencies, and longer fibres would respond to low frequencies. The hypothesis was based on work from the school of anatomists at Würzburg, where fifteen years earlier a Sicilian aristocrat, Alfonso Corti, had completed his PhD studying the inner ear structure subsequently named after him.
The basic Helmholtz model of cochlear mechanics remains, but its underlying physics has been significantly modified by knowledge of the physiology of the cochlear cells1. The field remains contentious, as the system is highly nonlinear, able to encode sound intensities over a range exceeding eight orders of magnitude. Electrical analogues of the cochlea and signal-processing technologies have, however, found natural homes in bioengineering departments, and they form the basis of much current work on cochlear mechanics (for a recent example of this approach, see ref. 2). Physiological biophysics contains numerous examples of such cross-over, with the Hodgkin–Huxley description of the action potential being a prime exhibit, so this is nothing new. However, ideas about signal coding, taken from electrical engineering, have been essential drivers for the development of prostheses such as cochlear implants and hearing aids; both technologies are still evolving in ways that take advantage of better machine-learning algorithms3, and this still has a way to go.
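To put that dynamic range in more familiar units, a back-of-envelope conversion (my illustration, not from the text) maps intensity ratios onto the decibel scale: eight orders of magnitude in intensity corresponds to 80 dB, roughly the span from the threshold of hearing to busy city traffic.

```python
import math

def intensity_ratio_to_db(ratio):
    """Convert a sound intensity ratio I / I_ref to decibels (dB)."""
    return 10.0 * math.log10(ratio)

# Eight orders of magnitude in intensity:
print(intensity_ratio_to_db(1e8))  # 80.0 dB
# One order of magnitude:
print(intensity_ratio_to_db(10))   # 10.0 dB
```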
It is not usually a good idea to try to guess where a field is heading. As argued by Stuart Firestein4, the best bet is to forage where we know least. The main reason questions in cochlear mechanics remain unresolved is that the data are sparse. Measurements in a small, pea-sized inner ear buried in hard bone are problematic; worse, the vibration patterns are nanometres in amplitude. The major recent technical advance has been the use of optical coherence tomography5, which images the living cochlear structures through bone and can also measure these small vibrations. The current images have relatively poor cellular resolution, and some improvement is almost certain in the near future.
All the same, drilling down to the elemental molecular events has also been hampered by the limited tissue volumes in the ear. The identification of genes for hearing owes its success to the sequential nature of sound processing: weak points in the pathway show up in mouse and human screens (Figure 1). From a standing start in 1989, over 120 genes are now known to be responsible for non-syndromic hearing loss, each associated with multiple loci, with a further group associated with deafness accompanied by syndromic phenotypes (see https://hereditaryhearingloss.org). Not all of these genes have been linked to a cellular or definitive physiological function. Some are clearly significant for inner ear development. Some determine cochlear maintenance functions (although it is not known for certain how much protein turnover occurs). Some set the order in which the hearing system ages; these provide a collection of outstanding challenges, important clinically for an ageing demographic, where the time scales (years to decades) are long compared with what is conventionally and experimentally accessible, so new tricks to tease out the data are required.
Figure 1.

Auditory interconnections: A personal and simplified view of how the auditory research field is currently segmented. It reflects the interaction between subfields and to some degree mirrors the sequential processing of sound itself within the system. This sequential ordering has facilitated gene discovery as any weak link can compromise the whole cascade. The main text describes how new developments may affect the individual subfields.
Surprisingly, the ion channel activated by sound has only recently been identified6. Although a mouse mutation in the corresponding gene had been known for over five decades, convincing evidence that it was involved in auditory transduction was elusive. Transduction, we now know, depends on a complex of five distinct proteins, hard (so far) to express in heterologous systems and hard to activate except in the native hair cell. The discovery of the mechanosensitive channels Piezo1 and Piezo2, so important for the sense of touch and highlighted by a recent Nobel Prize, has not provided a lead for underpinning mammalian hearing, although Piezo2 does seem to be present during a phase of cochlear maturation before disappearing again7. Resolving the organization of the transducer complex remains an outstanding challenge for the near future. I suspect that the near-Ångström resolution imaging now offered by cryo-EM may help solve the structure, even though it may leave open the functional question “Is there an upper limit to hearing?” Is that limit determined by evolutionary fitness in different species, or by the inherent biophysics of channel gating and the rate at which the transduction channel can open? This last question remains a challenge for biophysical recording techniques8.
But to return to Helmholtz, 170 years on: it has always been the case that a “top-down” approach to hearing, exemplified by the elegance of psychoacoustic measurements, has informed and directed much “bottom-up” molecular and cellular physiology. At the boundary between these two experimental styles, “listening” (rather than “hearing”) moves audition into a different realm altogether. Here there is another bottleneck: how do we integrate data on largely single-cell function into the complexity of neural computation? The anatomical nuclei of the auditory pathway are not as orderly as the cerebellum or hippocampus, and the pathway up to and beyond the cortex has numerous descending projections, as though each layer is designed to modify the afferent information. All the information about sound frequency, intensity, temporal structure, auditory space and acoustic streams is processed in real time, using predictive coding strategies that we are only just beginning to piece together. The huge interest and progress in machine learning and artificial intelligence is already challenging neuroscience (AlphaFold is an example of the impact AI has already made on structural biology9). Unravelling these deeper and more complex processes in hearing, and how they mirror other brain functions, is a fertile area for study; it is going to require real collaboration among researchers prepared to cross conventional disciplinary boundaries.
Notes
#On the Sensations of Tone as a Physiological Basis for the Theory of Music (“Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik”)
(First edition 1862; Dover Edition 1954, Translated from the 4th edition by A Ellis, Dover Publications Inc., New York)
Conflict of Interest Declaration
The author holds the position of Editorial Board Member for FUNCTION and is blinded from reviewing or making decisions for the manuscript.
References
- 1. Ashmore J. Cochlear outer hair cell motility. Physiol Rev. 2008;88(1):173–210.
- 2. Sasmal A, Grosh K. Unified cochlear model for low- and high-frequency mammalian hearing. Proc Natl Acad Sci. 2019;116(28):13983–13988.
- 3. Kleinlogel S, Vogl C, Jeschke M, et al. Emerging approaches for restoration of hearing and vision. Physiol Rev. Published online March 19, 2020. doi:10.1152/physrev.00035.2019.
- 4. Firestein S. Ignorance: How It Drives Science. Oxford University Press; 2012. https://global.oup.com/academic/product/ignorance-9780199828074. ISBN 9780199828074.
- 5. Lee HY, Raphael PD, Xia A, et al. Two-dimensional cochlear micromechanics measured in vivo demonstrate radial tuning within the mouse organ of Corti. J Neurosci. 2016;36(31):8160–8173.
- 6. Pan B, Akyuz N, Liu XP, et al. TMC1 forms the pore of mechanosensory transduction channels in vertebrate inner ear hair cells. Neuron. 2018;99(4):736–753.e6.
- 7. Beurg M, Fettiplace R. PIEZO2 as the anomalous mechanotransducer channel in auditory hair cells. J Physiol. 2017;595(23):7039–7048.
- 8. Shapovalov G, Lester HA. Gating transitions in bacterial ion channels measured at 3 μs resolution. J Gen Physiol. 2004;124(2):151–161.
- 9. Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold. Nature. 2021;596(7873):583–589.
