
Course: Speech & Hearing (682)

Semester: Spring, 2020


ASSIGNMENT No. 1
Q.1 Define masking and its use in air-conduction and bone conduction tests. Also discuss rules and types
of masking.
It seems reasonable to assume that sounds presented to the right ear are heard by the right ear, and that sounds
presented to the left ear are heard by the left ear. However, this is not necessarily true. In fact, it is common to
find that the sound being presented to one ear is actually being heard by the opposite ear. This phenomenon is
called cross-hearing or shadow hearing. To avoid confusion it is customary to call the ear currently being
tested the test ear (TE) and to call the opposite ear, which is the one not being tested, the nontest ear (NTE).
Cross-hearing results in a false picture of the patient’s hearing. Even the possibility that the sounds being
presented to the TE are really being heard by the NTE causes the outcome of a test to be suspect, at best. This
chapter explains why this situation occurs, how it is recognized, and the manner in which the NTE is removed
from the test.
Cross-Hearing and Interaural Attenuation
Suppose we know for a fact that a patient’s right ear is essentially normal and that his left ear is completely
deaf. We would expect the audiogram to show air- and bone-conduction thresholds of perhaps 0 dB HL to 10 dB
HL for the right ear and “no response” symbols for both air-conduction and bone-conduction at the maximum
testable levels for the left ear, as in Fig. 9.1a. However, this does not occur. Instead, the actual audiogram will
be more like the one shown in Fig. 9.1b. Here the thresholds for the right ear are just as expected. On the other
hand, the left air-conduction thresholds are in the 55 to 60 dB HL range, and the left bone-conduction thresholds
are the same as for the right ear. How can this be if the left ear is deaf?
Cross-Hearing for Air-Conduction
Let us first address this question for the air-conduction signals. Since the patient cannot hear anything in the left
ear, the level of an air-conduction test tone presented to that ear will be raised higher and higher. Eventually, the
tone presented to the deaf ear will be raised so high that it can actually be heard in the opposite ear, at which
point the patient will finally respond. The patient’s response to the signal directed to his deaf ear (the TE) is the
result of hearing that signal in the other ear (the NTE). Thus, the left ear’s threshold curve in  Fig. 9.1b is due to
cross-hearing, and is often called a shadow curve.
In order for the tone to be heard in the NTE it must be possible for a signal presented to one ear to be
transmitted across the head to the other ear. This phenomenon is called signal crossover. The intensity of the
sound reaching the NTE is less than what was originally presented to the TE because it takes a certain amount
of energy to transmit the signal across the head. The number of dB that are “lost” in the process of signal
crossover is called interaural attenuation (IA) (Chaiklin 1967).
In Fig. 9.1b, the patient’s right air-conduction threshold at 1000 Hz is 10 dB HL. Even though his left ear is
completely deaf, he also responded to a 1000 Hz tone presented from the left earphone at 60 dB HL. This means
that the 60 dB HL tone presented to the left ear must have reached a level of 10 dB HL in the right ear.
Consequently, IA at 1000 Hz in this case must be 50 dB (60 dB – 10 dB = 50 dB). Similarly, the amount of IA
at 4000 Hz in this case is 55 dB (60 dB – 5 dB = 55 dB).
Crossover occurs when the signal is physically present in the opposite ear, whereas cross-hearing occurs only
when it is audible. The distinction is clarified using the following example based on our hypothetical patient:
The level of the 1000 Hz tone reaching this person’s right (nontest) ear will always be 50 dB less than the
amount presented from the left earphone due to IA. Consider these three cases:

dB HL at left earphone – IA = dB HL present at right cochlea

(a) 60 dB – 50 dB = 10 dB (at threshold)

(b) 80 dB – 50 dB = 30 dB (20 dB SL)

(c) 55 dB – 50 dB = 5 dB (5 dB below threshold)

Fig. 9.1 (a) Imagined (incorrect) audiogram without cross-hearing for a patient who is deaf in the left ear,
showing “no response” for air-conduction or bone-conduction signals. (b) Actual audiogram for such a patient,
reflecting the fact that the signals presented to the left side were heard in the right ear by cross-
hearing. (c) Audiogram obtained when the left thresholds are retested with masking noise in the right ear.
These three examples are shown graphically in Fig. 9.2. In (a) the tone reaches the right ear at 10 dB HL, and
is heard because this is the right ear’s threshold. In (b) the tone reaches the right ear at 30 dB HL and is heard
because this level is 20 dB above the right ear’s threshold (20 dB SL). In both of these cases signal crossover
resulted in cross-hearing. However, the tone in (c) reaches the right ear at only 5 dB HL, which is 5
dB below threshold and is thus inaudible. Here, there is crossover because the signal is present in the NTE but
there is no cross-hearing because it is below threshold.
Assuming the bone-conduction threshold remained at 10 dB HL, how would cross-hearing be affected if the
IA was changed from 50 dB to another value, such as 40 dB or 60 dB? Some time with paper and pencil will
reveal that the cross-hearing situation would change considerably.
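To make the arithmetic concrete, the following short Python sketch (the function names and structure are illustrative, not taken from the text) computes the level reaching the non-test cochlea as the presented level minus IA and compares it with the NTE bone-conduction threshold, reproducing the three cases above and the paper-and-pencil exercise with IA values of 40 and 60 dB.

    # Minimal sketch of the crossover/cross-hearing arithmetic described above.
    def crossover_level(presented_hl_db, ia_db):
        """Level (dB HL) of the test signal reaching the non-test cochlea."""
        return presented_hl_db - ia_db

    def describe(presented_hl_db, ia_db, nte_bc_threshold_db):
        level_at_nte = crossover_level(presented_hl_db, ia_db)
        sl = level_at_nte - nte_bc_threshold_db   # level re: NTE threshold
        return level_at_nte, sl, sl >= 0          # audible (cross-heard) if at/above threshold

    # Worked example from the text: IA = 50 dB, right (nontest) bone-conduction threshold = 10 dB HL.
    for presented in (60, 80, 55):
        level, sl, heard = describe(presented, ia_db=50, nte_bc_threshold_db=10)
        print(f"{presented} dB HL -> {level} dB HL at NTE ({sl:+d} dB re threshold): "
              f"{'cross-hearing' if heard else 'crossover only'}")

    # The paper-and-pencil exercise: repeat with IA = 40 dB and 60 dB.
    for ia in (40, 60):
        level, sl, heard = describe(60, ia_db=ia, nte_bc_threshold_db=10)
        print(f"IA = {ia} dB: 60 dB HL -> {level} dB HL at NTE, "
              f"{'cross-hearing' if heard else 'no cross-hearing'}")

Running the sketch shows that with IA = 40 dB the 60 dB HL tone is cross-heard at a comfortable sensation level, whereas with IA = 60 dB it still crosses over but remains below the right ear's threshold, so no cross-hearing occurs.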
Cross-Hearing for Bone-Conduction
The right and left bone-conduction thresholds are the same in Fig. 9.1b even though the right ear is normal and
the left one is deaf. The implication is that the bone-conduction signal presented to the left side of the head is
being received by the right ear. This should come as no surprise, since we found in Chapter 5 that a bone-
conduction vibrator stimulates both cochleae about equally. From the cross-hearing standpoint, we may say that
there is no interaural attenuation (IA = 0 dB) for bone-conduction. Thus, the right and left bone-conduction
signals result in the same thresholds because they are both stimulating the same (right) ear.

Fig. 9.2 (a–c) Three crossover conditions (see text).


Overcoming Cross-Hearing with Masking Noise
The above example demonstrates that there are times when the NTE is (or at least may be) responding to the
signals intended for the TE. How can we stop the NTE from hearing the tones being presented to the TE? First,
consider an analogy from vision testing. Looking at an eye chart with two eyes is akin to the cross-hearing
issue. We all know from common experience that to test one eye at a time the optometrist simply blindfolds the
nontest eye. In other words, one eye is tested while the other eye is masked. In effect, we do the same thing in
audiology, except that the auditory “blindfold” is a noise that is directed into the NTE. The noise in the NTE
stops it from hearing the sounds being presented to the TE. Just as the nontest eye is masked by the blindfold, so
is the nontest ear masked by the noise.
Returning to our example, Fig. 9.1c shows the results obtained when the air- and bone-conduction thresholds
of the left (test) ear are retested with appropriate masking noise in the right (nontest) ear. The thresholds here
are shown by different symbols than the ones in frames (a) and (b), to distinguish them as masked results.
Because the left ear in this example is completely deaf, the masked thresholds have downward-pointing arrows
indicating no response at the maximum limits of the audiometer. Notice that the masked results in frame (c)
are at the same hearing levels as the ones in frame (a). The important difference is that the unmasked
thresholds in frame (a) could never have actually occurred because of cross-hearing. Note the dramatic
difference between the unmasked results in frame (b) and the patient’s real hearing status, revealed by the
masked thresholds in frame (c).
We see that when cross-hearing occurs it is necessary to retest the TE while directing a masking noise into
the NTE. The purpose of the masking noise is to prevent the NTE from hearing the tone (or other signal)
intended for the TE. Thus, the issue of whether cross-hearing might be occurring is tantamount to the question,
is masking (of the NTE) necessary?
Principal Mechanism of Crossover
Signal crossover (and therefore cross-hearing) for bone-conduction signals obviously occurs via a bone-
conduction route, as depicted in Fig. 9.3a. It occurs because a bone-conduction signal is transmitted to both
cochleae.
Because crossover for air-conduction requires a reasonably substantial signal to be produced by the earphone
(recall that interaural attenuation was ~ 50 dB in the prior example), common sense seems to suggest that air-
conduction signals might reach the opposite ear by an air-conduction route. This might occur by sound escaping
through the earphone cushion on the test side, traveling around the head, and then penetrating the earphone
cushion on the non-test side. Alternatively, earphone vibration on the test side might be transmitted via the
headset to the earphone on the nontest side. In either of these two scenarios, the signal from the test side would
enter the ear canal of the NTE, that is, as an air-conduction signal. As compelling as these explanations may
seem, they are not correct. It has been shown repeatedly that the actual crossover route for air-
conduction signals occurs principally by bone-conduction to the cochlea of the opposite ear (Sparrevohn 1946;
Zwislocki 1953; Studebaker 1962), as depicted in Fig. 9.3b.

Fig. 9.3 Signal crossover and cross-hearing occur via the bone-conduction route to the opposite cochlea, as
indicated by the arrows for both (a) bone-conduction and (b) air-conduction.
Interaural Attenuation for Air-Conduction
Cross-hearing of a test signal renders a test invalid. We must therefore identify cross-hearing whenever it occurs
so that we can mask the NTE. The cost of failing to do so is so great that we want to employ masking every
time that cross-hearing is even possible. Once we have obtained the unmasked audiogram, we are left with the
following question: Is the air-conduction signal being presented to the TE great enough to cross the head and
reach the bone-conduction threshold of the NTE? In other words, is this difference greater than the value of
interaural attenuation? The corollary problem is to determine the IA value.
Interaural attenuation for air-conduction using supra-aural earphones typical of the type used in audiological
practice has been studied using a variety of approaches (Littler, Knight, & Strange 1952; Zwislocki 1953;
Liden 1954; Liden, Nilsson, & Anderson 1959a; Chaiklin 1967; Coles & Priede 1970; Snyder 1973; Smith &
Markides 1981; Sklare & Denenberg 1987). Fig. 9.4 shows the mean IA values found in four of these studies,
as well as the maximum and minimum amounts of IA obtained across all four studies. Average IA values are ~
50 to 65 dB, and there is a general tendency for IA to become larger with frequency. The range of IA values is
very wide, and the means are much larger than the minimum IA values. Consequently, we cannot rely on
average IA values as a red flag for cross-hearing in clinical practice, because cross-hearing would be missed in patients whose IA falls on the lower side of the range. For this reason it is common practice to
use minimum IA values to identify possible cross-hearing, that is, to decide when masking may be needed. As
anticipated from the figure, the minimum IA value typically suggested to rule out crossover for clinical
purposes is 40 dB (Studebaker 1967; Martin 1974, 1980).
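As a hedged illustration of how this minimum-IA criterion might be applied when reviewing an unmasked audiogram, the sketch below (the function name, arguments, and 40 dB default are assumptions drawn from the discussion above, not a prescribed clinical procedure) flags an air-conduction threshold for retesting with masking whenever the presented level minus the minimum IA could reach the bone-conduction threshold of the NTE.

    # Minimal sketch applying the minimum-IA criterion for air-conduction masking.
    # 40 dB is the supra-aural minimum IA suggested in the text; a larger value
    # would be appropriate for properly seated insert earphones.
    def ac_masking_needed(ac_threshold_te_db, bc_threshold_nte_db, min_ia_db=40):
        """True if the TE air-conduction signal could reach the NTE cochlea,
        i.e., if the AC threshold of the TE minus the minimum IA is at or
        above the bone-conduction threshold of the NTE."""
        return (ac_threshold_te_db - min_ia_db) >= bc_threshold_nte_db

    # Example based on Fig. 9.1b: left AC response at 60 dB HL, right BC threshold 10 dB HL.
    print(ac_masking_needed(60, 10))                 # True  -> retest left AC with masking
    print(ac_masking_needed(45, 10))                 # False -> 45 - 40 = 5 dB, below 10 dB HL
    print(ac_masking_needed(95, 10, min_ia_db=75))   # illustrative insert-earphone value at low frequencies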
Interaural Attenuation for Insert Earphones
The IA values just described are obtained using typical supra-aural audiometric earphones, such as Telephonics
TDH-49 and related receivers. In contrast, insert earphones such as Etymotic ER-3A and EARtone 3A receivers
provide much greater amounts of IA (Killion, Wilber, & Gudmundsen 1985; Sklare & Denenberg 1987). This
occurs because the amount of IA is inversely related to the contact area between the earphone and the head
(Zwislocki 1953), and the contact area between the head and earphone is much less for insert receivers than it is
for supra-aural earphones. Fig. 9.5 shows some of the results obtained by Sklare and Denenberg (1987), who
compared the IA produced by TDH-49 (supra-aural) and ER-3A (insert) earphones on the same subjects. They
found that mean IA values were from 81 to 94+ dB up to 1000 Hz and 71 to 77 dB at higher frequencies, for
insert receivers.
As already explained, we are most interested in the minimum IA values, which are shown by the bottoms of
the error lines in the graph. Sklare and Denenberg found that insert receivers produced minimum IA values of
75 to 85 dB at frequencies up to 1000 Hz, and 50 to 65 dB above 1000 Hz. This is substantially greater than the
minimum IA values found for the TDH-49 earphone, which ranged from 45 to 60 dB.

Fig. 9.4 Interaural attenuation values for supra-aural earphones from four representative studies. Lines with
symbols are means for each study. The “minimum” and “maximum” lines show the smallest and largest IA
values across all four studies.

Fig. 9.5 Interaural attenuation for TDH-49 (supra-aural) versus ER-3A (insert) earphones. Bars show means and
error lines show ranges. Some actual values were higher than shown. (This occurred because some individual
IA values were higher than the limits of the equipment.) (Based on the data of Sklare and Denenberg [1987]).
It should be noted that the IA values just described were obtained using insert receivers that were inserted to
the proper depth into the ear canal. Insert receivers produce much less IA when their insertion into the ear canal
is shallow compared with deep (Killion et al 1985).
Interaural Attenuation for Bone-Conduction
It is commonly held that interaural attenuation is 0 dB for all bone-conduction signals, but this concept requires
qualification. There is essentially no IA for bone-conduction signals presented by a bone-conduction vibrator
using frontal placement (Studebaker 1967). However, IA for the more commonly used mastoid placement of the
bone-conduction oscillator depends on the frequency being tested, and is also variable among patients
(Studebaker 1964, 1967). Interaural attenuation values for bone-conduction signals presented at the mastoid are
~ 0 dB at 250 Hz and rise to ~ 15 dB at 4000 Hz (Studebaker 1967). The author’s experience agrees with
others’ clinical observations that IA for bone-conduction varies among patients from roughly 0 to 15 dB at 2000
and 4000 Hz (Silman & Silverman 1991).
Clinical Masking
Recall that masking per se means to render a tone (or other signal) inaudible due to the presence of a noise in
the same ear as the tone. Thus, masking the right ear means that a noise is put into the right ear, so that a tone
cannot be heard in the right ear. Clinical masking is an application of the masking phenomenon used to alleviate
cross-hearing. In clinical masking we put noise into the nontest ear because we want to assess the hearing of the
test ear. In other words, the masking noise goes into the NTE, and the test signal goes into the TE. Also, the
noise is delivered to the NTE by air-conduction, regardless of whether the TE is being tested by air- or bone-
conduction. These rules apply in all but the most unusual circumstances. The kinds of masking noises used with
various test signals are covered in a later section. In the meantime, it is assumed that the appropriate masking
noise is always being used.
The meaning is clear when an audiologist says that she will “retest the left bone-conduction threshold with
masking noise in the right ear.” However, masking terminology is usually more telegraphic. As such, it suffers
from ambiguity and can be confusing to the uninitiated. It is therefore worthwhile to familiarize oneself with
typical masking phrases and what these really mean. Unmasked air-conduction (or just unmasked air) refers to
an air-conduction threshold that was obtained without any masking noise. Similarly, unmasked bone-
conduction (or unmasked bone) means a bone-conduction threshold obtained without any masking noise. For
example, unmasked right bone means the bone-conduction threshold of the right ear that was obtained without
any masking noise.
Masked air-conduction (or masked air) refers to an air-conduction threshold (in the TE) that was obtained
with masking noise in the opposite ear. Masked bone-conduction (masked bone) denotes a bone-conduction
threshold obtained with masking noise in the NTE. Thus, masked right air is referring to the air-conduction
threshold of the right ear that was obtained while masking noise was being presented to the left (nontest) ear. By
the same token, masked left bone means the bone-conduction threshold of the left ear that was obtained with
masking noise in the right ear.
The process of masking for air-conduction (masking for air) means to put masking noise into the NTE while
testing the TE by air-conduction. Likewise, the operation of masking for bone-conduction (masking for bone)
means to put masking noise into the NTE while testing the TE by bone-conduction.
Instructions for Testing with Masking
The first step in clinical masking is to explain to the patient what is about to happen and what she is supposed to
do. The very idea of being tested with “noise in your ears” can be confusing to some patients, especially when
they are being evaluated for the first time. The author has found that most patients readily accept the situation
when they are told that putting masking noise in the opposite ear is the same as an optometrist covering one eye
while testing the other.
Noises Used for Clinical Masking
What kind of noise should be used to mask the non-test ear? The answer to this question depends on the signal
being masked. If the signal being masked has a wide spectrum, such as speech or clicks, then the masker must
also have a wide spectrum. (The student might wish to refer back to Chapter 1 to review the relevant physical
concepts.) For example, masking for speech tests commonly uses white noise (actually broadband noise), pink
noise, speech-shaped noise, or multitalker babble. Speech-shaped noise has a spectrum that approximates that of
the long-term spectrum of speech. Multitalker babble is made by recording the voices of many people who are
talking simultaneously, resulting in an unintelligible babble.

Complex noises (e.g., sawtooth noise) composed of a low fundamental frequency along with many
harmonics were also used in the past. These noises were poor and unreliable maskers, but one should be aware
of them if only for historical perspective.
Pure tones can also be masked by wide-band noises, but this is not desirable. Recall from Chapter 3 that if we
are trying to mask a given pure tone, only a rather limited band of frequencies in a wide-band noise actually
contributes to masking that tone. This is the critical band (ratio). The parts of a wide-band noise that are higher
and lower than the critical band do not help mask the tone, but they do make the noise sound louder. Thus,
wide-band noise is a poor choice for masking pure tones because it is both inefficient and unnecessarily loud.
It would therefore seem that the optimal masking noises for pure tones would be critical bands. In practice,
however, audiometers actually provide masking noise bandwidths that are wider than critical bands. This type
of masking noise is called narrow-band noise (NBN). Audiometric NBNs may approximate bandwidths that
are one-third octaves, one-half octaves, or other widths, and also vary widely in how sharply intensity falls
outside the pass band (i.e., the rejection rate or steepness of the filter skirts). If an NBN is centered around 1000
Hz, then we can call this a 1000 Hz NBN; if it is centered around 2000 Hz, then it is a 2000 Hz NBN, and so
forth. Table 9.1 summarizes the bandwidths for narrow-band masking noises specified by the ANSI S3.6-2010
standard.
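As a rough illustration of what "narrow-band" means here, the sketch below computes nominal band edges for a noise centered on the test frequency: a band that is n octaves wide spans fc · 2^(−n/2) to fc · 2^(n/2). These generic values are only an assumption for illustration; they are not the exact ANSI S3.6-2010 bandwidths summarized in Table 9.1.

    # Illustrative sketch: nominal band edges for a narrow-band masking noise (NBN)
    # centered on the test frequency. Not the exact ANSI S3.6-2010 values.
    def nbn_band_edges(center_hz, octave_fraction=1/3):
        """Nominal lower and upper edges of a band octave_fraction octaves wide."""
        half = octave_fraction / 2
        return center_hz * 2 ** (-half), center_hz * 2 ** half

    for fc in (500, 1000, 2000, 4000):
        lo, hi = nbn_band_edges(fc)          # one-third-octave band around the test tone
        print(f"{fc} Hz NBN: ~{lo:.0f}-{hi:.0f} Hz (one-third octave)")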
When to Mask for Bone-Conduction
It might seem odd to discuss the bone-conduction masking rule before the one for air-conduction (AC) because
this is the reverse of the order used to obtain unmasked thresholds. However, masked thresholds are tested in
the opposite order, bone-conduction (BC) before AC. This is done because the rule for determining when
masking is needed for air-conduction depends upon knowing the true bone-conduction thresholds. This means
that if masking is needed for BC, it must be done first.
Bone-conduction testing presents us with a peculiar dilemma if we take it for granted that we always need to
know which ear is actually responding to a signal. This is so because there is little if any IA for BC, so we
rarely know for sure which cochlea is actually responding to a signal, no matter where the vibrator is placed.
(Although mastoid placement is assumed throughout this book unless specifically indicated, it should be noted
that the bone oscillator and both earphones are usually in place from the outset when forehead placement is
used.)

This situation might seem to imply that masking should always be used whenever bone-conduction is tested.
This approach was recommended by ANSI (2004) on the grounds that bone-conduction calibration is based on
data that were obtained with masking the opposite ear.
However, this approach is not encouraged because it has several serious problems in addition to being
unnecessarily conservative at the cost of wasted effort (Studebaker 1964, 1967). When bone-conduction
thresholds are always tested with masking, the opposite ear will always be occluded with an earphone (both ears
would probably be occluded with forehead placement). Thus, one cannot know when or where an occlusion
effect occurs, or how large it is. But you need to know the size of the occlusion effect in the first place to
calculate how much noise is needed for bone-conduction masking. In addition, always having the headset in
place denies the clinician the ability to cross-check for bone-conduction oscillator placement errors, which
cause falsely elevated bone-conduction thresholds. Also, placement problems can be clouded by an occlusion
effect and/or by unwittingly attributing a higher threshold to the masking. The headset itself only exacerbates
vibrator placement problems.
Another questionable technique relies on the Weber test to determine which ear is hearing a bone-conduction
signal. These results are not sufficiently accurate or reliable for this purpose. Even its proponents admit that it is
best to disregard unlikely Weber results (Studebaker 1967).
Because a given unmasked bone-conduction threshold could as likely be coming from either ear, a practical
approach to deciding when to mask for bone-conduction is based on whether knowing which cochlea is actually
responding affects how the audiogram is interpreted. In other words, when does it make a difference whether a
given bone-conduction threshold is coming from one cochlea or the other?
A bone-conduction threshold should be retested with masking in the NTE whenever there is an air-bone gap (ABG) within the test ear that is greater than 10 dB, that is, 15 dB or more, which may be written as:
ABG > 10 dB.
Because testing is done in 5 dB steps, this rule also can be stated this way: A bone-conduction threshold
should be retested with masking in the NTE whenever the air-bone gap (ABG) within the test ear is 15 dB or
more, or
ABG ≥ 15 dB
This principle is shown schematically in Fig. 9.6a. This rule is consistent with the one recommended by
Yacullo (1996, 2009), but differs from a stricter approach that calls for masking whenever the ABG is ≥ 10 dB
(Studebaker 1964; ASHA 2005). The underlying concept for suggesting the less stringent masking criterion is as follows: The variability of a clinical threshold is usually taken to be ± 5 dB. Applying this principle to both the air- and bone-conduction thresholds for the same frequency allows them to be as much as 10 dB apart.
Thus, for practical purposes, an ABG ≤ 10 dB is too small to be clinically relevant.
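A minimal sketch of this bone-conduction masking rule, assuming 5 dB test steps and using illustrative function and variable names, is given below.

    # Minimal sketch of the bone-conduction masking rule described above
    # (ABG > 10 dB, i.e., >= 15 dB when testing in 5 dB steps).
    def bc_masking_needed(ac_threshold_te_db, bc_threshold_te_db):
        """True if the unmasked air-bone gap within the test ear is 15 dB or more."""
        air_bone_gap = ac_threshold_te_db - bc_threshold_te_db
        return air_bone_gap >= 15

    print(bc_masking_needed(40, 20))   # ABG = 20 dB -> retest bone-conduction with masking
    print(bc_masking_needed(40, 30))   # ABG = 10 dB -> no masking needed under this rule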
Q.2 Discuss different properties of sound in relation to children’s classroom experiences. Support your
answer with examples.
Sound is created when something vibrates and sends waves of energy (vibration) into our ears. The
vibrations travel through the air or another medium (solid, liquid or gas) to the ear. The stronger the vibrations,
the louder the sound. Sounds are fainter the further you get from the sound source.
Sound changes depending on how fast or slow an object vibrates to make sound waves. Pitch is the quality of
a sound (high or low) and depends on the speed of the vibrations. Different materials produce different
pitches; if an object vibrates quickly we hear a high-pitched sound, and if an object vibrates slowly we hear a
low-pitched sound. Sounds are usually a mixture of lots of different kinds of sound waves.
This topic is often introduced by asking the children to close their eyes and listen to the sounds they can hear in the local environment, or by playing a sound-matching game in which they listen to a range of sounds and identify what is making each one. A short video clip might be used as a lesson starter.
• Teachers may use a slinky, tuning forks, ripples on a pond and science video clips to introduce the concept of sound waves/vibrations traveling through air and other materials to the ear.
• Teachers will discuss sound safety and why people working with loud noises wear ear defenders.
• Children will explore pitch and loudness using a range of musical instruments from around the world, for example drums, recorders, guitars. Children will investigate how to increase the pitch by changing the tightness of a drum skin or the length of a string on a string instrument.
• Children may carry out investigations to find sound-insulating materials, for example finding the best material to make ear muffs or defenders, and learn that these work because sound does not travel through some materials as well as it does through others.
• Children may carry out investigations to explore the distance sound will travel.

Spoken communication is uniquely human. If the sense of hearing is damaged or absent, individuals with the
loss are denied the opportunity to sample an important feature of their environment, the sounds emitted by
nature and by humans themselves. People who are deaf or hard-of-hearing will have diminished enjoyment for
music or the sound of a babbling brook. We recognize that some deaf and hard-of-hearing children are born to
deaf parents who communicate through American Sign Language. Even without hearing, these children have full access to the language of their home environment and that of the deaf community. However, the majority of
deaf and hard-of-hearing children are born to hearing parents. For these families, having a child with hearing
loss may be a devastating situation. The loss or reduction of the sense of hearing impairs children's ability to
hear speech and consequently to learn the intricacies of the spoken language of their environment. Hearing loss
impairs their ability to produce and monitor their own speech and to learn the rules that govern the use of
speech sounds (phonemes) in their native spoken language if they are born to hearing parents. Consequently, if appropriate early intervention and environmental compensation do not occur within the first 6-12 months, hearing loss or deafness, even if mild, can be devastating to the development of spoken communication with hearing family and peers, to the development of sophisticated language use, and to many aspects of educational development.
Hearing loss can affect the development of children's ability to engage in age-appropriate activities, their
functional speech communication skills, and their language skills. Before we consider the effects of hearing loss
on this development, we will review briefly the extensive literature on the development of speech and language
in children with normal hearing. Although the ages at which certain development milestones occur may vary,
the sequence in which they occur is usually constant (Menyuk, 1972).
Speech Skills
Infants begin to differentiate among various sound intensities almost immediately after birth and, by 1 week of
age, can make gross distinctions between tones. By 6 weeks of age, infants pay more attention to speech than to
other sounds, discriminate between voiced and unvoiced speech sounds, and prefer female to male voices
(Nober and Nober, 1977).
Infants begin to vocalize at birth, and those with normal hearing proceed through the stages of pleasure sounds,
vocal play, and babbling until the first meaningful words begin to occur at or soon after 1 year of age (Bangs,
1968; Menyuk, 1972; Quigley and Paul, 1984; Stark, 1983). Speech-like stress patterns begin to emerge during
the babbling stages (Stark, 1983), along with pitch and intonational contours (Bangs, 1968; Quigley and Paul,
1984; Stark, 1983).
According to Templin (1957), most children (75 percent) can produce all the vowel sounds and diphthongs by 3
years of age; by 7 years of age, 75 percent of children are able to produce all the phonemes, with the exception
of “r.” Consonant blends are usually mastered by 8 years of age, and overall speech production ability is
generally adult-like by that time (Menyuk, 1972; Quigley and Paul, 1984).

Language Skills
Language studies have described vocabulary and grammatical development of children with normal hearing.
Studies of grammatical development have focused on both word structure (e.g., prefixes and suffixes), termed
“morphology,” and the rules for arranging words into sentences, termed “syntax.” Vocabulary development up
to young adulthood is estimated at roughly 1,000 word families per year, with vocabulary size estimated at
approximately 4,000-5,000 word families for 5-year-olds and 20,000 word families for 20-year-olds (see
Schmitt, 2000, for discussion). A word family is defined as a word plus its derived and inflectional forms. Most
morphological and syntactic skills are fully developed by the age of 5 years, and grammatical skills are fully
developed by age 8 (Nober and Nober, 1977). By age 10 to 12, most children with normal hearing have reached
linguistic maturity (Quigley and Paul, 1984). In summary, by age 4½ years, children with normal hearing are
producing complex sentences. Although a majority of the speech sounds in English are mastered by age 4, and
most of the grammatical categories by age 5, it is not until age 8 that a normally hearing child has fully
mastered grammar and phonology and has an extensive vocabulary (Nober and Nober, 1977).
Children with Hearing Loss
A review of speech and language development in children with hearing loss is complicated by the heterogeneity
of childhood hearing loss, such as differences in age at onset and in degree of loss; we review these
complicating factors separately following a more general overview. Mental and physical incapacities (mental
retardation, cerebral palsy, etc.) may also coexist with hearing loss. Approximately 25-33 percent of children
with hearing loss have multiple potentially disabling conditions (Holden-Pitt and Diaz, 1998; McCracken, 1994;
Moeller, Coufal, and Hixson, 1990). In addition, independent learning disabilities and language disabilities due
to cognitive or linguistic disorders not directly associated with hearing loss may coexist (Mauk and Mauk,
1992; Sikora and Plapinger, 1994; Wolgemuth, Kamhi, and Lee, 1998). For example, Holden-Pitt and Diaz
(1998) reported the following incidences of additional impairments in a group of children with some degree of
hearing loss: The coexistence of other disabilities with hearing impairment may impact the way in which
sensory aids are fitted or the benefit that children receive from them (Tharpe, Fino-Szumski, and Bess, 2001). A
recent technical report from the American Speech-Language-Hearing Association stated that pediatric cochlear
implant recipients with multiple impairments often demonstrate delayed or reduced communication gains
compared with their peers with hearing loss alone (American Speech-Language-Hearing Association, 2004).
In this chapter, we focus on speech and language development in children with prelingual onset of hearing loss
(before 2 years of age) without comorbidity. However, it should be kept in mind that the presence of multiple
handicapping conditions may place a child at greater risk for the development of communication or emotional
disorders (Cantwell, as summarized by Prizant et al., 1990). In addition, these children may require adaptations
to standard testing routines to accommodate their individual capacities.
Natural acquisition of speech and spoken language is not often seen in individuals with profound hearing loss
unless appropriate intervention is initiated early. One of the primary goals in fitting deaf or hard-of-hearing
children with auditory prostheses (hearing aid or cochlear implant) is to improve the ease and the extent to
which they can access and acquire speech and spoken language. It should be kept in mind that the children
under discussion typically are not born to deaf parents; those children may acquire American Sign Language as
their native language.
Q.3 How does knowledge of speech perception theories help the speech therapist in the speech development of
hearing-impaired children? Support your answer in view of Bamford's theory.
One view of speech perception is that acoustic signals are transformed into representations for pattern matching
to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming
relatively stable linguistic categories are characterized by neural representations related to auditory properties of
speech that can be compared to speech input. This kind of pattern matching can be termed a passive process
which implies rigidity of processing with few demands on cognitive processing. An alternative view is that
speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided.
Note that this does not mean consciously guided but that information-contingent changes in early auditory
encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity,
and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have
begun to incorporate some active processing, they seldom treat early speech encoding as plastic and
attentionally guided. Recent research has suggested that speech perception is the product of both feedforward
and feedback interactions between a number of brain regions that include descending projections perhaps as far
downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints
of context dynamically determine cognitive resources recruited during perception including focused attention,
learning, and working memory. Theories of speech perception need to go beyond the current corticocentric
approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may
provide new insights into ways in which hearing disorders and loss may be treated either through augmentation
or therapy.
In order to achieve flexibility and generativity, spoken language understanding depends on active cognitive
processing (Nusbaum and Schwab, 1986; Nusbaum and Magnuson, 1997). Active cognitive processing is
contrasted with passive processing in terms of the control processes that organize the nature and sequence of
cognitive operations (Nusbaum and Schwab, 1986). A passive process is one in which inputs map directly to
outputs with no hypothesis testing or information-contingent operations. Automatized cognitive systems
(Shiffrin and Schneider, 1977) behave as though passive, in that stimuli are mandatorily mapped onto responses
without demand on cognitive resources. However, it is important to note that cognitive automatization does not have strong implications for the nature of the mediating control system; various different mechanisms have been proposed to account for automatic processing (e.g., Logan, 1988). By comparison, active cognitive systems have a control structure that permits “information contingent processing,” or the ability to
change the sequence or nature of processing in the context of new information or uncertainty. In principle,
active systems can generate hypotheses to be tested as new information arrives or is derived (Nusbaum and
Schwab, 1986) and thus provide substantial cognitive flexibility to respond to novel situations and demands.
Active and Passive Processes
The distinction between active and passive processes comes from control theory and reflects the degree to
which a sequence of operations, in this case neural population responses, is contingent on processing outcomes
(see Nusbaum and Schwab, 1986). A passive process is an open loop sequence of transformations that are fixed,
such that there is an invariant mapping from input to output (MacKay, 1951, 1956). Figure 1A illustrates a
passive process in which a pattern of inputs (e.g., basilar membrane responses) is transmitted directly over the
eighth nerve to the next population of neurons (e.g., in the auditory brainstem) and upward to cortex. This is the
fundamental assumption of a number of theories of auditory processing in which a fixed cascade of neural
population responses are transmitted from one part of the brain to the other (e.g., Barlow, 1961). This type of
system operates the way reflexes are assumed to operate in which neural responses are transmitted and
presumably transformed but in a fixed and immutable way (outside the context of longer term reshaping of
responses). Considered in this way, such passive processing networks should process in a time frame that is
simply the sum of the neural response times, and should not be influenced by processing outside this network,
functioning something like a module (Fodor, 1983). In this respect then, such passive networks should operate
“automatically” and not place any demands on cognitive resources. Some purely auditory theories seem to have
this kind of organization (e.g., Fant, 1962; Diehl et al., 2004) and some more classical neural models
(e.g., Broca, 1865; Wernicke, 1874/1977; Lichtheim, 1885; Geschwind, 1970) appear to be organized this way.
In these cases, auditory processes project to perceptual interpretations with no clearly specified role for
feedback to modify or guide processing.
By contrast, active processes are variable in nature, as network processing is adjusted by an error-correcting
mechanism or feedback loop. As such, outcomes may differ in different contexts. These feedback loops provide
information to correct or modify processing in real time, rather than retrospectively. Nusbaum and Schwab
(1986) describe two different ways an active, feedback-based system may be achieved. In one form, as
illustrated in Figure 1B, expectations (derived from context) provide a hypothesis about a stimulus pattern that
is being processed. In this case, sensory patterns (e.g., basilar membrane responses) are transmitted in much the
same way as in a passive process (e.g., to the auditory brainstem). However, descending projections may
modify the nature of neural population responses in various ways as a consequence of neural responses in
cortical systems. For example, top-down effects of knowledge or expectations have been shown to alter low
level processing in the auditory brainstem (e.g., Galbraith and Arroyo, 1993) or in the cochlea (e.g., Giard et al.,
1994). Active systems may occur in another form, as illustrated in Figure 1C. In this case, there may be a strong
bottom-up processing path as in a passive system, but feedback signals from higher cortical levels can change
processing in real time at lower levels (e.g., brainstem). An example of this would be the kind of observation
made by Spinelli and Pribram (1966) in showing that electrical stimulation of the inferotemporal cortex
changed the receptive field structure for lateral geniculate neurons or Moran and Desimone’s
(1985) demonstration that spatial attentional cueing changes effective receptive fields in striate and extrastriate
cortex. In either case, active processing places demands on the system’s limited cognitive resources in order to
achieve cognitive and perceptual flexibility. In this sense, active and passive processes differ in the cognitive
and perceptual demands they place on the system.
Although the distinction between active and passive processes seems sufficiently simple, examination of
computational models of spoken word recognition makes the distinctions less clear. For a very simple example
of this potential issue consider the original Cohort theory (Marslen-Wilson and Welsh, 1978). Activation of a
set of lexical candidates was presumed to occur automatically from the initial sounds in a word. This can be
designated as a passive process since there is a direct invariant mapping from initial sounds to activation of a
lexical candidate set, i.e., a cohort of words. Each subsequent sound in the input then deactivates members of
this candidate set giving the appearance of a recurrent hypothesis testing mechanism in which the sequence of
input sounds deactivates cohort members. One might consider this an active system overall with a passive first
stage since the initial cohort set constitutes a set of lexical hypotheses that are tested by the use of context.
However, it is important to note that the original Cohort theory did not include any active processing at the
phonemic level, as hypothesis testing is carried out in the context of word recognition. Similarly, the
architecture of the Distributed Cohort Model (Gaskell and Marslen-Wilson, 1997) asserts that activation of
phonetic features is accomplished by a passive system whereas context interacts (through a hidden layer) with
the mapping of phonetic features onto higher order linguistic units (phonemes and words) representing an
interaction of context with passively derived phonetic features. In neither case is the activation of the features or
sound input to linguistic categorization treated as hypothesis testing in the context of other sounds or linguistic
information. Thus, while the Cohort models can be thought of as an active system for the recognition of words
(and sometimes phonemes), they treat phonetic features as passively derived and not influenced by context or
expectations.
This is often the case in a number of word recognition models. The Shortlist models (Shortlist: Norris, 1994;
Shortlist B: Norris and McQueen, 2008) assume that phoneme perception is a largely passive process (at least, this can be inferred from the lack of any specification of an alternative). While Shortlist B uses phoneme
confusion data (probability functions as input) and could in principle adjust the confusion data based on
experience (through hypothesis testing and feedback), the nature of the derivation of the phoneme confusions is
not specified; in essence assuming the problem of phoneme perception is solved. This appears to be common to
models (e.g., NAM, Luce and Pisoni, 1998) in which the primary goal is to account for word perception rather
than phoneme perception. Similarly, the second Trace model (McClelland and Elman, 1986) assumed phoneme
perception was passively achieved albeit with competition (not feedback to the input level). It is interesting that
the first Trace model (Elman and McClelland, 1986) did allow for feedback from phonemes to adjust activation
patterns from acoustic-phonetic input, thus providing an active mechanism. However, this was not carried over
into the revised version. The first model had been developed to account for some aspects of phoneme perception unaccounted for in the second model. It is interesting to note that the Hebb-Trace model (Mirman et al., 2006a), while seeking to account for aspects of lexical influence on phoneme perception and speaker generalization, did not incorporate active processing of the input patterns; only the classification of those inputs was actively governed.
This can be understood in the context of the schema diagrammed in Figure 1. Any process that maps inputs onto
representations in an invariant manner or that would be classified as a finite-state deterministic system can be
considered passive. A process that changes the classification of inputs contingent on context or goals or
hypotheses can be considered an active system. Although word recognition models may treat the recognition of
words or even phonemes as an active process, this active processing is not typically extended down to lower
levels of auditory processing. These systems tend to operate as though there is a fixed set of input features (e.g.,
phonetic features) and the classification of such features takes place in a passive, automatized fashion.
By contrast, Elman and McClelland (1986) did describe a version of Trace in which patterns of phoneme
activation actively change processing at the feature input level. Similarly, McClelland et al. (2006) described a
version of their model in which lexical information can modify input patterns at the subphonemic level. Both of
these models represent active systems for speech processing at the sublexical level. However, it is important to
point out that such theoretical propositions remain controversial. McQueen et al. (2006) have argued that there
are no data to argue for lexical influences over sublexical processing, although Mirman et al. (2006b) have
countered this with empirical arguments. However, the question of whether there are top-down effects on
speech perception is not the same as asking if there are active processes governing speech perception. Top-
down effects assume higher level knowledge constrains interpretations, but as indicated in Figure 1C, there can
be bottom-up active processing whereby antecedent auditory context constrains subsequent perception. This
could be carried out in a number of ways. As an example, Ladefoged and Broadbent (1957) demonstrated that
hearing a context sentence produced by one vocal tract could shift the perception of subsequent isolated vowels
such that they would be consistent with the vowel space of the putative speaker. Some have accounted for this
result by asserting there is an automatic auditory tuning process that shifts perception of the subsequent vowels
(Huang and Holt, 2012; Laing et al., 2012). While the behavioral data could possibly be accounted for by such a
simple passive mechanism, it might also be the case that the auditory pattern input produces constraints on the
possible vowel space or auditory mappings that might be expected. In this sense, the question of whether early
auditory processing of speech is an active or passive process is still a point of open investigation and discussion.
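The architectural point can be illustrated with a toy sketch, loosely inspired by the Ladefoged and Broadbent result described above. All formant values and the normalization rule below are invented for the example; this is not a model of speech perception, only a contrast between an open-loop (passive) mapping and a context-contingent (active) one.

    # Toy contrast between a passive (fixed) and an active (context-adjusted) classifier.
    REFERENCE_MEAN_F1_HZ = 500.0      # assumed "average talker" used to set the fixed criterion
    FIXED_F1_BOUNDARY_HZ = 500.0      # passive criterion: below -> "bit", at/above -> "bet"

    def passive_classify(f1_hz):
        """Open-loop mapping: the same input always yields the same label."""
        return "bit" if f1_hz < FIXED_F1_BOUNDARY_HZ else "bet"

    def active_classify(f1_hz, context_f1_values_hz):
        """Context-contingent mapping: the criterion is rescaled toward the putative
        talker's vowel space, estimated from the preceding carrier sentence."""
        talker_mean_f1 = sum(context_f1_values_hz) / len(context_f1_values_hz)
        adjusted_boundary = FIXED_F1_BOUNDARY_HZ * (talker_mean_f1 / REFERENCE_MEAN_F1_HZ)
        return "bit" if f1_hz < adjusted_boundary else "bet"

    test_f1 = 520.0  # the same physical token in every case
    print(passive_classify(test_f1))                        # "bet" regardless of context
    print(active_classify(test_f1, [380.0, 450.0, 520.0]))  # low-F1 talker  -> "bet"
    print(active_classify(test_f1, [480.0, 560.0, 640.0]))  # high-F1 talker -> "bit"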
It is important to make three additional points in order to clarify the distinction between active and passive
processes. First, a Bayesian mechanism is not on its own merits necessarily active or passive. Bayes rule
describes the way different statistics can be used to estimate the probability of a diagnosis or classification of an
event or input. But this is essentially a computation theoretic description much in the same way Fourier’s
theorem is independent of any implementation of the theorem to actually decompose a signal into its spectrum
(cf. Marr, 1982). The calculation and derivation of relevant statistics for a Bayesian inference can be carried out
passively or actively. Second, the presence of learning within a system does not on its own merits confer active
processing status on a system. Learning can occur by a number of algorithms (e.g., Hebbian learning) that can
be implemented passively. However, the extent to which a system's inputs are plastic during processing does suggest whether an active system is at work. Finally, it is important to point out that active processing describes
the architecture of a system (the ability to modify processing on the fly based on the processing itself) but not
the behavior at any particular point in time. Given a fixed context and inputs, any active system can and likely
would mimic passive behavior. The detection of an active process therefore depends on testing behavior under
contextual variability or resource limitations to observe changes in processing as a consequence of variation in
the hypothesized alternatives for interpretation (e.g., slower responses, higher error rate or confusions, increase
in working memory load).
Understanding speech perception as an active process suggests that learning or plasticity is not simply a higher-
level process grafted on top of word recognition. Rather the kinds of mechanisms involved in shifting attention
to relevant acoustic cues for phoneme perception (e.g., Francis et al., 2000, 2007) are needed for tuning speech
perception to the specific vocal characteristics of a new speaker or to cope with distortion of speech or noise in
the environment. Given that such plasticity is linked to attention and working memory, we argue that speech
perception is inherently a cognitive process, even in terms of the involvement of sensory encoding. This has
implications for remediation of hearing loss either with augmentative aids or therapy. First, understanding the
cognitive abilities (e.g., working memory capacity, attention control etc.) may provide guidance on how to
design a training program by providing different kinds of sensory cues that are correlated or reducing the
cognitive demands of training. Second, increasing sensory variability within the limits of individual tolerance
should be part of a therapeutic program. Third, understanding participants' sleep practices (using sleep logs), their drug and alcohol consumption, and their exercise is important, because these factors affect the consolidation of learning. If speech
perception is continuously plastic but there are limitations based on prior experiences and cognitive capacities,
this shapes the basic nature of remediation of hearing loss in a number of different ways.
Finally, we would note that there is a dissociation among the three classes of models that are relevant to
understanding speech perception as an active process. Although cognitive models of spoken word processing
(e.g., Cohort, TRACE, and Shortlist) have been developed to include some plasticity and to account for
different patterns of the influence of lexical knowledge, even the most recent versions (e.g., Distributed Cohort,
Hebb-TRACE, and Shortlist B) do not specifically account for active processing of auditory input. It is true that
some models have attempted to account for active processing below the level of phonemes (e.g., TRACE
I: Elman and McClelland, 1986; McClelland et al., 2006), but these models have not been related or compared systematically to the kinds of models emerging from neuroscience research. For example, Friederici (2012), Rauschecker and Scott (2009), and Hickok and Poeppel (2007) have all proposed neurally plausible
models largely around the idea of dorsal and ventral processing streams. Although these models differ in details,
in principle the models proposed by Friederici (2012) and Rauschecker and Scott (2009) have more extensive
feedback mechanisms to support active processing of sensory input. These models are constructed in a
neuroanatomical vernacular rather than the cognitive vernacular (even the Hebb-TRACE is still largely a
cognitive model) of the others. But both sets of models are notable for two important omissions.
First, while the cognitive models mention learning and even model it, and the neural models refer to some
aspects of learning, these models do not relate to the two-process learning models (e.g., complementary learning
systems (CLS; McClelland et al., 1995; Ashby and Maddox, 2005; Ashby et al., 2007)). Although CLS focuses
on episodic memory and Ashby et al. (2007) focus on category learning, two process models involving either
hippocampus, basal ganglia, or cerebellum as a fast associator and cortico-cortical connections as a slower more
robust learning system, have garnered substantial interest and research support. Yet learning in the models of
speech recognition has yet to seriously address the neural bases of learning and memory except descriptively.
Q.4 Hearing aids have an impact on the lives of hearing-impaired children. In your view, how can this
technology help a teacher of the deaf in the educational management of hearing-impaired children?
The ability of an individual to carry out auditory tasks in the real world is influenced not only by his or her
hearing abilities, but also by a multitude of situational factors, such as background noise, competing signals,
room acoustics, and familiarity with the situation. Such factors are important regardless of whether one has a
hearing loss, but the effects are magnified when hearing is impaired. For example, when an individual with
normal hearing engages in conversation in a quiet, well-lit setting, visual information from the speaker’s face,
along with situational cues and linguistic context, can make communication quite effortless. In contrast, in a
noisy environment, with poor lighting and limited visual cues, it may be much more difficult to carry on a
conversation or to give and receive information. A person with hearing loss may be able to function very well in
the former situation but may not be able to communicate at all in the latter.
The majority of those with hearing loss acquire it later in life, after the acquisition of spoken
language. The prevalence is particularly high among those who are over 65 years of age and among those who
have been exposed to noise. Because hearing loss tends to disrupt interpersonal communication and to interfere
with perception of meaningful environmental sounds, some individuals experience significant levels of distress
as a result of their hearing problems. For example, some express embarrassment and self-criticism when they
have difficulty understanding others or when they make perceptual errors. Others have difficulty accepting their
hearing loss and are unwilling to admit their hearing problems to others. Anger and frustration can occur when
communication problems arise, and many individuals experience discouragement, guilt, and stress related to
their hearing loss. These negative reactions are also associated with reports of negative attitudes and
uncooperative behaviors of others (Demorest and Erdman, 1989).
Interestingly, the association between degree of hearing loss and psychosocial adjustment to hearing loss per se
is not strong (Erdman and Demorest, 1998). Individuals with virtually identical audiograms and clinical test
results may differ greatly in their self-reported adjustment problems. This finding is not unique to the impact of
hearing loss on psychosocial adjustment; low (negative) correlations between severity of impairment and degree
of psychosocial adjustment have been found repeatedly in the disability literature for a wide variety of health-
related problems.
Given the high variability in how individuals adjust to their hearing problems, it is not surprising that hearing
loss does not seem to affect basic personality structure (Thomas, 1984). Although many adults are resilient,
acquired hearing difficulties are nevertheless responsible for a high level of general psychological distress for a
significant number of people due in part to isolation, loneliness, and withdrawal (Meadow-Orlans, 1985). This
distress, which may be manifested in heightened anxiety, depression, sleep disturbance, and the like, is observed
not only among those who seek audiological evaluation, but also among those reluctant to acknowledge a
hearing problem (Hallberg and Barrenas, 1995; Hetu, Riverin, Getty, Lalande, and St-Cyr, 1990; Hetu, Riverin,
Lalande, Getty, and St-Cyr, 1988) and among those who have already acquired hearing aids (Thomas, 1984,
1988). This psychological distress can significantly impact the family or significant others as well as the
individual (Schein, Bottum, Lawler, Madory, and Wantuch, 2001).
As with psychosocial adjustment, studies to date have consistently demonstrated
that there is no overall association between hearing loss and psychopathology. Rosen (1979) has confirmed this
for individuals with acquired hearing loss, and Pollard (1994) has confirmed it from an analysis of public
mental health records on deaf and hard-of-hearing individuals in the Rochester, New York, vicinity. Despite this
lack of association, it is important to acknowledge that psychological distress can be a factor in adjustment
difficulties.
Knutson et al. (1998) have investigated whether the use of cochlear implants can affect the social adjustment
of those with acquired hearing loss. In a study of psychological change over 54 months of cochlear implant use
by 37 postlingually deafened adults, the researchers used standard psychological measures of affect, social
function, and personality prior to implantation, and then at regularly scheduled intervals after implantation, to
assess the impact of audiological benefit. There was evidence of significant improvement on measures of
loneliness, social anxiety, paranoia, social introversion, and distress. To a lesser extent, improvement was also
noted for depression. Improvement of marital distress and assertiveness took comparatively longer to emerge.
One caveat is that because of the complexities of individual life issues and personality attributes, it is not
possible to attribute the improvement in psychological measures solely to the influence of audiological benefits.
How well the improvement noted on self-report measures translates into actual social and job situations has not
been determined.
Untreated hearing loss causes delays in the development of speech and language, and those delays then lead to
learning problems, often resulting in poor school performance.
Unfortunately, since poor academic performance is often accompanied by inattention and sometimes poor
behavior, children with hearing loss are often misidentified as having learning or attention disorders such as
ADD or ADHD.
According to the American Speech-Language Hearing Association (ASHA), children who have mild to
moderate hearing loss but do not get help are very likely to be behind their hearing peers by anywhere from one
to four grade levels.
And for those with more severe hearing loss, intervention services are even more crucial; those who do not
receive intervention usually do not progress beyond the third-grade level.
Frustration and confusion can also play a big part in poor academic performance. Though he might have
perfectly normal speech, a child with only mild hearing loss can still have trouble hearing a teacher from a
distance or amid background noise. Imagine the difficulty and confusion of not being able to hear the high-
frequency consonants that impart meaning in the English language (ch, f, k, p, s, sh, t and th) and you can begin
to understand some of the academic struggles a child with hearing loss faces on a daily basis. "Chick" and
"thick" may sound identical to a child with hearing loss, for example. 
In addition to academic struggles in school, children with hearing loss can also experience trouble socially.
Communication is vital to social interactions and healthy peer relationships; without the ability to communicate
effectively they often experience feelings of isolation and unhappiness.
If a child with hearing loss is excluded from social interactions or is unwilling to participate in group activities
due to fear of embarrassment, the result is that she can become socially withdrawn, leading to further
unhappiness. Children with hearing loss are also slower to mature socially, which hinders peer relationships.
Teachers are in a unique position to help students by arming themselves with knowledge of how a student
with a hearing loss receives and understands information, as well as comprehensive knowledge of an individual
student’s capabilities and level of comprehension. Since early intervention is key, signs teachers can watch for
in the classroom include:
• Inattentiveness
• Inappropriate responses to questions
• Daydreaming
• Trouble following directions
• Speech problems
A child who is struggling in school, especially if she has a family history of hearing loss or has had recurring
ear infections, should be seen by a hearing care professional for an evaluation.
Depending on the results, a proper course of intervention can then be recommended. Intervention is crucial
because a child who is supported both at school and at home has the best chance of success, academic and
otherwise.
If you believe your child is suffering from hearing loss, take her to a pediatrician or your local hearing
healthcare professional today.
Q.5 Define auditory training. How can storytelling and dramatic participation be used to promote
auditory discrimination and spontaneous vocalization?
Auditory training is an intervention method used in rehabilitative audiology that aims to help individuals with
hearing loss use their residual hearing maximally. It emphasizes the development of listening skills to improve
the recognition and interpretation of speech sounds despite limited hearing ability.
Storytelling is one of the simplest and perhaps most compelling forms of dramatic and imaginative activity. A
good place to start is by telling stories to your pupils and encouraging them to share stories with one another.
All of us can become engaging storytellers with a little practice. There may also be members of staff who are
particularly skilled at telling stories, or you could invite a professional storyteller (such as Hugh Lupton) into
the school. Listen to each other, watch videos of storytelling, and encourage the children to
identify techniques they could use in their own stories.
Awareness regarding personal hygiene helps people to have a full and healthy life in personal and social
contexts, and following personal hygiene instructions can help one to maintain a suitable physical, mental, and
social health level and to better accomplish the necessary tasks in one's family and society. Poor health among
school children results from a lack of awareness of the health benefits of personal hygiene; personal hygiene
education and increasing health knowledge are among the most effective methods of preventing or reducing many
health problems. Personal hygiene principles provide a suitable framework that can be used to maintain personal
health throughout one's life, and teaching these principles to children at a suitable age helps to strengthen
them in their minds.
Among the various parts of society, children are one of the most important groups for improving a society's
health situation because of their role in learning and passing on personal hygiene principles. Teaching
hygienic behaviors to children and improving their awareness of personal hygiene plays an important
role in preventing various diseases during their lives. An important factor to consider in health education is the
demographic characteristics of the target audience, such as gender, age, education, social class, economic
background, occupation, health, and housing situation. For example, children between the ages of 6
and 9 prefer learning through experience, and therefore books can be suitable tools for their education. To this
end, the educator needs to use previous experience and personal judgment to select a method which is suitable
to the characteristics of the target audience. In general, there are two types of education methods: formal and
informal. Informal education is usually carried out at home or in the community by parents and other
acquaintances, while formal education is the duty of the education system, including preschools, elementary
schools, high schools, colleges, and universities, and is carried out by teachers and educators.
Storytelling and creative drama are two informal education methods that can indirectly increase children's
knowledge and are thus useful for teaching personal hygiene. Storytelling involves the live recitation and
performance of stories in poetry or prose for listeners. The stories used in this method can be conversations,
songs, rhymes, or stories presented with or without music or other supporting tools. Creative drama, on the other
hand, is a method in which the teacher recites a poem, shows a picture, or plays a piece of music for the
students, analyzes it with them, and together they create scenes, scenarios, characters, and conversations related
to the initial material. Creative drama is an organized experience in which children recreate a problem or a piece
of children's literature with the help of their teacher and then analyze and discuss the play afterward. This type
of play needs no script, décor, makeup, or audience; the audience is the players themselves. Generally, the
equipment needed for creative drama is limited: only a qualified supervisor and enough space for the play to take
place are required.
Several methods of increasing the awareness of children and adults through education have been investigated
in Iran and other countries. These include school-based methods, methods depending on parents'
cooperation, and other traditional or indirect methods. The results of several studies show educational programs
to be effective in improving children's awareness of personal hygiene. Furthermore, some studies have shown
that storytelling and creative drama are effective tools for training children.
Unlike vision, human auditory sensitivity is adult-like within a few days of birth (Adelman, Levi, Linder, and
Sohmer, 1990; Klein, 1984; Sininger, Abdala, and Cone-Wesson, 1997). Consequently, the degree and
configuration of hearing loss are judged by the same standards for newborns as for adults.
The basic hearing evaluation for persons of any age is the pure-tone audiogram. Thresholds are also measured
using speech stimuli. Establishing thresholds for tonal and speech stimuli by air and bone conduction using
standard adult procedures is possible with children who have a developmental age of 4-5 years. Prior to that
age, procedures must be modified to meet developmental demands. For all pediatric assessments, multiple-
procedure test batteries are recommended to ensure the consistency of results.
Pure-tone or frequency-specific threshold tests in infants and children are classified either as physiological tests,
in which a response is determined by some objectively measured change in physiological status, or behavioral
tests, in which an overt response is elicited from children in response to sound and their responses are judged by
an audiologist. Physiological tests do not actually measure perception of sound but can generally predict hearing
thresholds or the range of hearing with a great deal of precision. The most valuable of these tests for threshold
prediction for infants less than 6 months old is the auditory brainstem response (ABR). A promising but as yet less proven technique for
threshold prediction in these very young children is the auditory steady-state response (ASSR). Other
physiological measures that correlate with hearing levels and support the test battery include tympanometry,
the acoustic middle ear muscle reflex, and otoacoustic emissions (OAEs).
However, during the 0- to 6-month age period, it is possible to obtain unconditioned responses to sound, such as
a change in sucking behavior, startle reflex, or eye widening. This test paradigm is known as behavioral
observation audiometry (BOA). These responses will be suprathreshold and cannot rule out mild or moderate
hearing loss. BOA is nonetheless a valuable part of the test battery for infants under age 6 months to
substantiate overall impressions.
Children with normal vision at the developmental level of typical 6-month-olds naturally turn their heads to find
the source of an interesting sound. Visual reinforcement audiometry (VRA) takes advantage of that fact by reinforcing head turns with a pleasant
visual stimulus, usually an animated toy that is lit to become visible for a short time following a head turn that is
time-locked to the presentation of an auditory stimulus. Tones and speech can be used. The test must be
administered quickly after appropriate conditioning to maintain the child's interest. A variety of visual
reinforcers can be used to elicit head turns in response to near-threshold level stimuli. VRA can be administered
using insert earphones for an ear-specific response or with a bone-conduction vibrator. If a child will not
tolerate earphones, the stimuli can be presented through a speaker into the sound field of a sound-treated
chamber. This procedure limits the conclusions of the tests to hearing in the better ear and cannot determine a
unilateral hearing loss. Generally, normally hearing 6-month-old infants will respond to stimuli of 20 dB HL or
better (Widen and O'Grady, 2002).
VRA may no longer hold the interest of children who have reached the developmental status of a 2-year-old. In
that case, the children's interest can usually be maintained by involving them in a play activity. Play audiometry
involves making a game of hearing sounds. Children respond to the sound presentation, for example, by
dropping a block into a bucket or stacking a ring on a peg. Devices are available that dispense a tangible reward,
such as a piece of candy or a token, when an appropriate response to sound is given. This is known as tangible
reinforcement operant conditioning audiometry (TROCA). As long as the child's interest can be maintained, these techniques
will yield accurate audiometric threshold evaluations.