Extracted and modified from Teaching and Researching Listening 3rd edition
1.1 Hearing and listening
In everyday use, the terms hearing, listening, and understanding are often interchanged, the implication being that if you “hear” something, then of course you are “listening” to it intentionally, and as a result you are able to “understand” it in some meaningful way. “Did you hear me?”, “Are you listening to me?”, “Do you understand what I’m saying?” – they all seem to mean the same thing to us. In an everyday context, the differentiation doesn’t really seem all that important. But collapsing the terminology, or allowing for gray areas of overlap, does a disservice to our appreciation of the unique contribution of each activity (hearing, listening, understanding) to our cognition. And if we don’t differentiate, we lose the ability to evaluate a learner’s progress and mediate where necessary.
A natural starting point for an exploration of listening and understanding is to consider hearing, the basic physical and neurological systems and processes that are involved in experiencing sound. We all experience hearing as if it were a separate sense, a self-contained system that we can turn on or off at will. However, hearing is part of a complex brain network organization that is interdependent with multiple neurological systems (Poeppel & Overath, 2014).
Though hearing cannot be separated from the overall brain network of which it is part, hearing is an identifiable system with an isolable function. Hearing is the primary physiological system that allows for reception and conversion of sound waves. Sound waves are experienced as minute pressure pulses and can be measured in pascals (force per unit area: p = F/A). The normal threshold for human hearing is about 20 micropascals – equivalent to the sound of a mosquito flying about 3 meters away from the ear. Once converted to electrical pulses, these signals are transmitted almost instantaneously from the outer ear through the inner ear to the auditory cortex of the brain. As with other sensory phenomena, auditory sensations are considered to reach perception only if they are received and processed by a cortical area in the brain. Although we often think of sensory perception as a passive process, the responses of neurons in the auditory cortex can be strongly modulated by attention (Barsalou, 2010; Fritz, David, & Shamma, 2013).
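To make these pressure magnitudes concrete (this worked example is not in the source text), sound pressure is conventionally expressed on a logarithmic decibel scale relative to the 20-micropascal hearing threshold mentioned above. A minimal Python sketch, using the standard dB SPL formula:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, the nominal threshold of human hearing

def spl_db(pressure_pa: float) -> float:
    """Convert a sound pressure in pascals to decibels SPL (relative to 20 uPa)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0.0   -> threshold of hearing (the mosquito ~3 meters away)
print(spl_db(0.02))   # 60.0  -> roughly ordinary conversation
print(spl_db(20.0))   # 120.0 -> approaching the threshold of pain
```

On this scale, every thousand-fold increase in pressure adds 60 dB, so the ear’s usable range – from 20 µPa to about 20 Pa – spans roughly 120 dB.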
Beyond this conversion process of external stimuli to auditory perceptions, hearing is the sense that is often identified with our affective experience of participating in events. Unlike our other primary senses, hearing offers unique observational and monitoring capacities that allow us to perceive life’s rhythms and adapt to the ‘vitality contours’ of social events – the affective manner in which social actions are carried out (Rochat, 2013) – as well as the tempo of human interaction in real time and the ‘feel’ of human contact and communication (Finnegan, 2014; Murchie, 1999).
In physiological terms, hearing is a neurological circuitry, part of the vestibular system of the brain, which is responsible for spatial orientation (balance) and temporal orientation (timing), as well as interoception, the monitoring of sensate data and perceptual organization of experience from our internal bodily systems (Peterson et al., 2014; Tang et al., 2012). Hearing also plays an important role in animating the brain, with particular harmonies, frequencies and rhythms contributing to calming or overstimulating responses (Leeds, 2010). Of all our senses, hearing may be said to be the most grounded and most essential to awareness because it occurs in real time, in a temporal continuum. Hearing involves continually grouping incoming sound into pulse-like auditory events that span a period of several seconds (Handel, 2006; Rodger & Craig, 2014). Sound perception always involves anticipating what is about to be heard – hearing forward – as well as retrospectively organizing what has just been heard – hearing backward – in order to assemble coherent packages of sound (Cariani & Micheyl, 2011).
While hearing provides the basis for listening, it is only a precursor to it. Both hearing and listening are initiated through sound perception; the difference between them is essentially one of degree of intention (Roth, 2012). Intention is known to involve several levels of cognition, but initially intention is an acknowledgement of a distal source (i.e. one outside one’s self), a willingness to be influenced by this source, and a desire to understand it to some degree (Kriegel, 2013).
In psychological terms, perception creates a representation of these distal objects by detecting and differentiating properties in the energy field. In the case of audition, the energy field is the air surrounding the listener. The listener detects shifts in intensity – minute movements of the air in the form of sound waves – and differentiates their patterns through a fusion of temporal processing in the left cortex of the brain and spectral processing in the right. The perceiver immediately assigns the patterns in the sound waves to previously learned categories of emotion and cognition (Brosch et al., 2010; Mattson, 2014; Poeppel et al., 2012).
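The temporal/spectral distinction can be illustrated computationally (an illustration of the general idea only, not a model of cortical processing; the sampling rate and tone frequencies below are arbitrary choices). The same signal can be read moment by moment as a pressure pattern, or re-expressed as energy at each frequency:

```python
import numpy as np

# A 50 ms "sound" sampled at 16 kHz: two pure tones mixed together.
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# Temporal view: the raw pressure pattern, sample by sample.
# Spectral view: the same signal re-expressed as energy per frequency.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1 / fs)

# The two strongest spectral components recover the tone frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [440.0, 1200.0]
```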
The anatomy of hearing is elegant in its efficiency. The human auditory system consists of the outer ear, the middle ear, the inner ear, and the auditory nerves connecting to the brain stem. Several mutually dependent subsystems complete the system (see Figure 1.1, in Teaching and Researching Listening).
The outer ear consists of the pinna, the part of the ear we can see, and the ear canal. The intricate funnelling patterns of the pinna filter and amplify the incoming sound, in particular the higher frequencies, and allow us to locate the source of the sound.
Sound waves travel down the canal and cause the eardrum to vibrate. These vibrations are passed along through the middle ear, a sensitive transformer consisting of three small bones (the ossicles) surrounding a small opening in the skull (the oval window). The major function of the middle ear is to ensure efficient transfer of sounds, which up to this point are still vibrations in the air, to the fluids inside the cochlea, where they will be converted to electrical pulses.
In addition to this transmission function, the middle ear has a vital protective function. The ossicles have tiny muscles that, by contracting reflexively, can reduce the level of sound reaching the inner ear. This reflex occurs when we are presented with sudden loud sounds such as the thud of a dropped book or the wail of a police siren, and it protects the delicate hearing mechanism from damage if the loudness persists. Interestingly, the same reflex also occurs automatically when we begin to speak. In this way the ossicular reflex protects us from receiving too much feedback from our own speech and thus becoming distracted by it.
The cochlea is the focal structure of the ear in auditory perception. The cochlea is a small bony structure, about the size of an adult thumbnail, that is narrow at one end and wide at the other. The cochlea is filled with fluid, and its operation is fundamentally a kind of fluid mechanics. The membranes inside the cochlea respond mechanically to movements of the fluid, a process called sinusoidal stimulation. Lower frequency sounds stimulate the membrane primarily at the narrower, apical end of the cochlea, while higher frequencies stimulate it primarily at the broader, basal end, nearest the oval window. Each different sound that passes through the cochlea produces varying patterns of movement in the fluid and the membrane.
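The source does not give the place-to-frequency mapping itself, but a standard approximation is the Greenwood function, which relates position along the cochlear membrane to the frequency that most strongly stimulates it. A sketch in Python, using the commonly cited constants for the human cochlea:

```python
def greenwood_cf(x: float) -> float:
    """Greenwood (1990) place-frequency map for the human cochlea.

    x is the fractional distance along the membrane from the apex
    (x = 0, the narrow end of the cochlea) to the base (x = 1, near
    the oval window); returns the characteristic frequency in Hz.
    """
    A, a, k = 165.4, 2.1, 0.88  # commonly cited human constants
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_cf(x):8.0f} Hz")
# Roughly 20 Hz at the apex, rising to about 20,000 Hz at the base.
```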
At the side of the cochlea nearest the brain stem are thousands of tiny hair cells, with ends both inside and outside the cochlea. The outer hair cells are connected to the auditory nerve fibers, which lead to the auditory cortex of the brain. These hair cells respond to minute movements of the fluid along the membrane and transduce these mechanical movements into nerve activity.
As with other neural networks in the human brain, our auditory nerves have evolved to a high degree of specialization. There are five different types of auditory nerve cells. Each auditory neuron has a Characteristic Frequency (CF) to which it responds continuously throughout the stimulus presentation. Neurons with high CFs are found in the periphery of the nerve bundle, and there is an orderly decrease in CF toward the center of the bundle. This tonotopic organization preserves the frequency spectrum of the signal as it is passed along, which is necessary for speedy, accurate processing of sound (Plack, 2014). Responding to their specialized frequencies, these neurons create tuning curves that correspond to the shape of the cell and pass along very precise information about sound frequency to the superior olivary complex of the central auditory nervous system. This is considered the first area in the “connectome”, or network map of the brain, at which sound from both ears converges (Mendoza, 2011).
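As a toy illustration of CFs and tuning curves (a deliberately simplified model, not the measured physiology; the Gaussian tuning shape and half-octave bandwidth are assumptions made for the sketch), each simulated fiber responds most strongly to tones near its CF, so the pattern of activity across the array encodes a tone’s frequency:

```python
import numpy as np

# A toy tonotopic array: seven "fibers", each with a characteristic
# frequency (CF) and a Gaussian tuning curve on a log-frequency axis.
cfs = np.geomspace(125, 8000, num=7)  # CFs spaced one octave apart
bandwidth_octaves = 0.5               # assumed tuning sharpness

def firing_rate(cf: float, tone_hz: float) -> float:
    """Relative response of a fiber with the given CF to a pure tone."""
    distance = np.log2(tone_hz / cf)  # distance from CF, in octaves
    return float(np.exp(-0.5 * (distance / bandwidth_octaves) ** 2))

tone = 1000.0  # Hz
for cf in cfs:
    bar = "#" * int(40 * firing_rate(cf, tone))
    print(f"CF {cf:7.0f} Hz | {bar}")
# The fiber tuned to 1000 Hz responds maximally; activity falls off on
# either side, so the profile across the array encodes the tone's frequency.
```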
The distribution of the neural activity is called the excitation pattern: a pattern of up-and-down motions of the tiny cilia along the basilar membrane. This excitation pattern is the mechanical output of the hearing process. For instance, if you hear a specific sequence of sounds, such as /a/+/i/+/l/+/a/+/i/+/k/, a specific excitation pattern is produced in response that is precisely the same in all hearing humans. While the excitation patterns may be identical physiologically, how the hearer interprets the signal and subsequently responds to it is, of course, subject to a wide range of contextual differences, such as time, place, and number of other participants; co-textual differences, that is, the other pieces of language presented in juxtaposition to it; and individual listener differences, including age, gender, culture, and language background (Andringa et al., 2012; Kok & Heylen, 2010).
These differences virtually ensure that not everyone registers the same thing in any given setting, even though the excitation pattern for a particular stimulus may be neurologically identical in all hearers. On a physical level, the difference in our perception is due to the fact that the individual neurons that make up the auditory nerve fibers are interactive – they are affected by the action of all the other neurons with which they interact. Sometimes the activity of one neuron is suppressed or amplified by the introduction of a second tone. In addition, since these nerves are physical structures, they are affected by our general health and level of arousal or fatigue (Schnupp et al., 2011). Another condition that interferes with consistent and reliable hearing is that the auditory nerves, which are intertwined with the vestibular nerve regulating balance, sometimes fire involuntarily when no hearing stimulus is present (Wu et al., 2014).
The physiological circuitry of listening begins when the auditory cortex is stimulated. The primary auditory cortex is a small area located in the temporal lobe of the brain. It lies in the back half of the Superior Temporal Gyrus (STG) and extends into the transverse temporal gyri (also called Heschl’s gyri). This is the first brain structure to process incoming auditory information. Anatomically, the transverse temporal gyri differ from all other temporal lobe gyri in that they run mediolaterally (towards the center of the brain) rather than anteroposteriorly (front to back).
As soon as information reaches the auditory cortex, it is relayed to several other neural centers in the brain, including Wernicke’s area, which is responsible for speech recognition and for lexical and syntactic comprehension, and Broca’s area, which is involved in calculation and in responses to language-related tasks.
Imaging studies have shown that many other brain areas are involved in language comprehension as well (see Figure 1.2, in Teaching and Researching Listening). This neurological finding is consistent with language processing research indicating simultaneous parallel processing of different aspects of information (Bullmore & Sporns, 2012; Friederici, 2011).
These studies have shown that all of these areas are involved in aural language comprehension in a cyclical fashion, with certain areas becoming more active while processing particularly complex sentences or disambiguating particularly vague references. Impairment in any one area, often defined as an aphasia (if acquired through injury or the aging process), can result in difficulties with lexical comprehension, syntactic processing, global processing of meaning, and formulation of an appropriate response (Binder et al., 2009; Vitello, 2014).