NOTE: This is from the first chapter of Teaching and Researching Listening (Fourth Edition). The way to understand listening holistically is to conceptualize listening as a coordination of networks: neurological, linguistic, semantic, and pragmatic. The basis of the neurological network is hearing. The initial section of this chapter outlines the hearing process. It is useful for listening instructors to understand the physiological basis of listening, in order to better grasp the tangible "real time" nature of listening.

———


1.1   Introduction: Listening as a Coordination of Networks

Listening is the perceptual and cognitive process of receiving, comprehending, and interpreting auditory information from the environment, including spoken language and nonspeech sounds. It involves actively attending to and processing incoming sounds in order to make sense of the signals conveyed by speakers or the environment.

Listening is supported by coordinated operations within interconnected neural networks throughout the brain. The listening process is initiated when sound waves enter the ear and are transformed into electrical signals sent through the brainstem to the auditory cortex. Language processing involves several interconnected brain circuits branching out from the auditory cortex that work together to decode the linguistic content and create meaning. This processing involves integrating incoming information with existing knowledge and weighing its social and emotional significance. Once the new information is processed, it is encoded into memory (Handel, 2006). Although all humans possess the same basic neurological structures, there are individual variations in functioning. These variations include processing speed, memory capacity, phonological awareness, and the extent and patterns of neural plasticity.

DOI: 10.4324/9781003390794-3

It is useful for language teachers and applied linguists to understand the nature of neurological processing in order to develop more informed instructional and research approaches. A better understanding of neurological processing allows us to identify and work with natural tendencies and to adapt to individual differences. With a thorough understanding of the neurological processes involved, we can research and implement targeted instructional practices that can enhance listening abilities in learners.


1.2   Reception: The Role of Hearing

Decoding spoken language is the foundation of listening. Even though listening often involves attending to multimodal inputs, the basis of listening is attending to sound. Therefore, a natural starting point for an exploration of listening in teaching and research is to consider the fundamental physical and neurological processes involved in listening.

We all experience hearing as if it were a distinct sense, a self-contained system that we can turn on or off at will. However, hearing is part of an interdependent brain network organization that involves multiple neurological systems, not only auditory stimulation (Gross & Poeppel, 2019; Kayser et al., 2012; Willems & Peelen, 2021). To understand the hearing process, we need to take a wider view of the structural and functional systems that support it.

Hearing, one of several sensory systems in the human brain, is governed by the vestibulocochlear nerve, which is the eighth (of twelve) cranial nerves. Hearing is the physiological channel that allows for the reception and conversion of sound waves into electrochemical signals and their interpretation by the brain.

Hearing begins with the sensation of sound waves, which are experienced through the ear as minute pressure pulses moving through the air and can be measured in pascals (Pa), defined as force per unit area: Pa = F/A. The normal threshold for human hearing is 20 micropascals, equivalent to the sound of a mosquito flying about three meters away from the ear. Sound intensity is usually measured in decibels (dB), a logarithmic function of pascals.
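The pressure relation above can be illustrated with a minimal Python sketch (the specific force and area values are hypothetical, chosen only to show the scale of the hearing threshold):

```python
# Sound pressure as force per unit area: Pa = F / A
def pressure_pa(force_n: float, area_m2: float) -> float:
    """Pressure in pascals from a force in newtons acting on an area in m^2."""
    return force_n / area_m2

# Nominal threshold of human hearing: 20 micropascals
HEARING_THRESHOLD_PA = 20e-6

# Hypothetical values: a 1-newton force spread over one square meter is 1 Pa,
# tens of thousands of times the faintest audible pressure fluctuation.
ratio = pressure_pa(1.0, 1.0) / HEARING_THRESHOLD_PA
print(ratio)  # on the order of 50,000x the hearing threshold
```

The point of the sketch is the scale involved: everyday pressures dwarf the 20 µPa fluctuations the ear can detect, which is why a logarithmic (dB) scale is used.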

Normal hearing in an adult ranges from about 0 dB (the sound of a mosquito at one meter) up to about 120 dB (the sound of a jet at takeoff at close range, the maximum sound that can be heard without trauma to the ears), though sustained exposure of several hours to sounds above 85 dB can damage the ear mechanism. (For reference, the maximum measurable sound is 194 dB; because dB is a logarithmic function, this is many orders of magnitude greater than the jet-engine sound.) Above this level, the sound waves become distorted, creating a vacuum between themselves, a phenomenon called cavitation.
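The logarithmic dB scale described above can be made concrete with a short Python sketch (not from the text; it assumes the standard sound-pressure-level convention of dB SPL = 20·log10(p/p₀) with the 20 µPa reference):

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (0 dB SPL)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB SPL: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_from_db(db: float) -> float:
    """Invert the dB scale back to pascals."""
    return P_REF * 10 ** (db / 20)

print(spl_db(20e-6))                 # 0.0 -> threshold of hearing
print(round(pressure_from_db(194)))  # ~100237 Pa, roughly one atmosphere,
                                     # where cavitation distortion begins
```

Note how the 194 dB ceiling falls out of the formula: at that level the pressure swing is about one atmosphere, so the rarefaction half of the wave cannot drop any lower than a vacuum.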


Sound perception begins with the mechanical process of audition. Perception in any sensory modality creates an internal representation of a distal object, which is a sound source in the case of hearing. This representation occurs through detecting and differentiating properties in the energy field around the listener (Goh et al., 2023; Poeppel et al., 2012). For hearing, the energy field is the air surrounding the listener. The listener detects minuscule shifts in intensity, small movements in the air in the form of sound waves, and converts these movements into a proximal stimulus in the auditory channel in the ear, detecting patterns through a fusion of temporal (sequential) processing and spectral (holistic) processing, involving both hemispheres of the brain (Brosch et al., 2010; Emanuel & Eldar, 2022; Mattson, 2014; Murray et al., 2022; Schreiner et al., 2011).

The anatomy of hearing is complex yet elegant in its efficiency. The human auditory system consists of the outer ear, the middle ear, the inner ear, and the auditory nerves connecting to the brainstem; together these form several mutually dependent subsystems (see Figure 1.1).


Figure 1.1 The Hearing Circuit



Hearing is a circuit involving the ear, the auditory nerve, and the auditory cortex. Sound waves travel down the ear canal and cause the eardrum to vibrate. These vibrations are passed along through the middle ear, which is a sensitive transformer consisting of three small bones (malleus, incus, and stapes) surrounding a small opening in the skull (the oval window). The major function of the middle ear is to ensure the efficient transfer of sounds, which are still in the form of air-pressure vibrations, to the fluids inside the cochlea (the inner ear), where they will be converted to electrical pulses and passed along the auditory nerve to the auditory cortex in the brain for further processing.


Muscles attached to the bones of the middle ear can contract to reduce the level of sound reaching the inner ear. This reflex action, the acoustic reflex or stapedius reflex, occurs when we are presented with sudden loud sounds, such as the thud of a dropped book or the wail of a police siren. It protects the delicate hearing mechanism from damage if the loudness persists.

These pressure pulses are transmitted from the outer ear through the inner ear to the brain stem and then to the auditory cortex of the brain via an electrochemical conversion in the cochlea. Auditory sensations are considered to reach perception only if they are received and processed by the auditory cortex. Although we often think of sensory perception as a passive process over which we have little control, the responses of neurons in the auditory cortex of the brain are strongly modulated by attention (Cohen, 2013; Foley & Bates, 2019; Schreiner et al., 2011).


1.3   Transmission: The Role of the Inner Ear

The cochlea is the focal structure of the inner ear; it is a small, bony structure, about the size of an adult thumbnail, narrow at one end and wide at the other. The cochlea is filled with fluid, and its operation is fundamentally a kind of fluid mechanics. The membranes in the cochlea respond, through a concentrated mass of microscopic fibers, to movements of the fluid, a process called sinusoidal stimulation.

The cochlea contains thousands of tiny hair cells, which are connected to the auditory nerve fibers that lead through the vestibulocochlear nerve to the auditory brainstem. These hair cells respond to the minute movements of the fluid in the membrane and transduce the mechanical movements of the fluid into nerve activity.

As with other neural networks in the brain, our auditory nerves have evolved to a high degree of specialization. Each auditory neuron has a different characteristic frequency (CF) to which it responds, ranging in humans from 20 cycles per second (or Hertz, abbreviated Hz) upward to 20,000 cycles per second. Neurons with high CFs are found in the periphery of the nerve bundle, and there is an orderly decrease in CF toward the center of the nerve bundle. This tonotopic organization preserves the frequency spectrum as it passes along the auditory pathway, which is necessary for the accurate processing of sound (Plack, 2018; Ruben, 2020).
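The tonotopic place-frequency gradient can be sketched with Greenwood's (1990) approximation, a standard model that is not part of this text. The Python sketch below uses the published human constants; note how the two ends of the cochlear map recover the 20 Hz to 20,000 Hz range just described:

```python
def greenwood_cf(x: float) -> float:
    """Approximate characteristic frequency (Hz) at relative position x
    along the human cochlea (x = 0 at the apex, x = 1 at the base),
    using Greenwood's place-frequency function: f = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88  # published constants for the human cochlea
    return A * (10 ** (a * x) - k)

print(round(greenwood_cf(0.0)))  # ~20 Hz at the apex
print(round(greenwood_cf(1.0)))  # ~20677 Hz at the base
```

The exponential form of the function is why equal distances along the cochlea correspond to roughly equal musical intervals rather than equal steps in Hz.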

The initiation of the neural activity inside the cochlea is called the excitation pattern (Schurzig et al., 2016). How an individual hearer perceives the excitation patterns will be influenced by a wide range of contextual differences, such as the number of overlapping speakers and environmental distractions, as well as by individual listener differences, including language background, familiarity with the context (topic, setting, speaker), and situational expectations. Because of this range of subjective variations, no two listeners are likely to interpret input in precisely the same way (Javel, 2019; Nechaev & Supin, 2013; Sottek & Genuit, 2005).



Figure 1.2 The Transmission Process

When sound waves enter the ear, they create vibrations that are transmitted to the cochlea. The cochlea is filled with fluid and has a flexible membrane called the basilar membrane that runs along its length. As the fluid inside the cochlea moves in response to the vibrations, it causes the basilar membrane to flex. This movement causes a conversion of mechanical vibrations into electrical signals. These electrical signals are representations of the specific sound frequencies and intensities that are detected. The electrical signals are then transmitted to the auditory nerve, which carries the signals as neural impulses to the brain's auditory cortex.


The electrical signals generated by the hair cells are then transmitted to the auditory nerve, which carries the signals as neural impulses upward to the auditory cortex (see Figure 1.2).


1.4   Coordination: The Role of the Auditory Cortex

When the electrical signals from the auditory nerve reach the auditory cortex, the auditory cortex triggers a multidirectional connectivity that rapidly involves multiple areas of the brain.

The primary auditory cortex is a small area located in the temporal lobe in the left hemisphere of the brain. It lies in the back half of the superior temporal gyrus (STG) and the transverse temporal gyri (also called Heschl's gyri).


Figure 1.3 The Auditory Processing Circuit

A basic model of dual-stream auditory processing in the human brain. Neural signals from the cochlea undergo complex processing as they travel through the auditory cortex

This is the initial brain structure for processing incoming auditory information. Anatomically, the transverse temporal gyri are different from all other temporal lobe gyri in that they run mediolaterally (toward the center of the brain) rather than anteroposteriorly (front to back).

While both the cochlea and the auditory brainstem are responsible for perception and sorting of the basic features of sound, the auditory cortex initiates the processing of complex information. The auditory cortex is organized into separate regions for managing specific aspects of acoustic information (see Figure 1.3).

Electrical impulses travel along the auditory nerve and pass through multiple information-processing centers in the auditory brainstem. Signals from the right ear travel to the auditory cortex, which is in the temporal lobe on the brain's left side. Signals from the left ear travel to the right auditory cortex.

The auditory cortices sort, process, interpret, and file information about the sound. The comparison and analysis of all the signals that reach the brain allow you to detect certain sounds and suppress other sounds as background noise.

About The Author

Michael Rost, principal author of Pearson English Interactive, has been active in the areas of language teaching, learning technology and language acquisition research for over 25 years. His interest in bilingualism and language education began in the Peace Corps in West Africa and was fuelled during his 10 years as an educator in Japan and extensive touring as a lecturer in East Asia and Latin America. Formerly on the faculty of the TESOL programs at Temple University and the University of California, Berkeley, Michael now works as an independent researcher, author, and speaker based in San Francisco.
