Psycholinguistics Handout 2020
Psycholinguistics is a young and rapidly growing field, and there are now many
psycholinguistic subspecializations. Here, the field is introduced by outlining the domain of
psycholinguistic inquiry, its most frequent areas of discussion, and the basic issues of
concern to psycholinguistics.
Linguistics is the discipline that describes the structure of language, including its
grammar, sound system, and vocabulary. The field of psycholinguistics is concerned with
discovering the psychological processes by which humans acquire and use language.
Conventionally, psycholinguistics addresses three major concerns (Gleason & Ratner, 1998:
3):
1. Language Acquisition: How people acquire and learn language.
2. Language Comprehension: How people understand spoken and written language.
3. Language Production: How people produce language.
The relationship between language and thought has been a subject of considerable
interest over the years. Arguments on the subject were presented as early as the Ancient
Greek period. The 1940s saw the rise in popularity of the Sapir-Whorf hypothesis, which
suggested that speakers of different languages think differently. Later, scientists criticized
the theory, with most adopting the view that language and thought are universal. Particular
attention has been directed at establishing whether two people who speak different
languages also think differently. Recent research such as Leva (2011), however, suggests
that the language people use may affect the way they think, such that language conditions
an individual's perception of issues and actions. This empirical evidence suggests that
language shapes thinking, calling into question previously held theories of language
universalism. The debate on the issue, however, continues. An understanding of this
relationship has considerable implications for politics, education, marketing, and other
areas that involve human interaction. This paper aims to evaluate the relationship between
language and thought. To achieve this, it evaluates five major questions: whether language
dictates thinking, whether language organizes thinking, whether people speaking different
languages also think differently, whether multilingual individuals have broader thinking
than monolinguals, and whether thought can exist without language. The paper is
organized into two major sections: an evaluation of the various existing theories and
concepts, and a subsequent section that specifically addresses the research questions
specified above.
a. Whorfianism
The Whorfian hypothesis was originally based on the work of the German educator Wilhelm
von Humboldt, who argued in the 19th century that "Language is the formative organ of
thought. Intellectual activity, entirely mental, entirely internal, and to some extent passing
without trace, becomes through sound, externalized in speech and perceptible to the senses.
Thought and language are therefore one and inseparable from each other" (as quoted in
Skotko, 1997). The Whorfian hypothesis is one of the theories of linguistic relativism,
which argues that languages differ in their basic approaches to reality. The theory was
advanced in the 1940s by the American linguists Edward Sapir and Benjamin Whorf, whose
study of Native American tribes, particularly the Hopi, suggested that people using different
languages think differently (Leva, 2011). The Whorfian hypothesis advances that language
is not just a part of culture that is affected by the people who carry it; it also, in return,
affects culture and thought. Whorf's theory was based on his belief that how people
perceive the world is affected by the structure of the language they use. Whorf argues that
where a language has no word for a concept, the people using that language cannot
understand the concept. Whorf argues that "the world is represented in a kaleidoscopic flux
of impressions which has to be organized largely by linguistic systems in our minds" (as
cited in Andrew & Keil, 2001, p. 444). This hypothesis has two major aspects: linguistic
relativism and linguistic determinism. Linguistic relativism holds that structural
differences between languages are paralleled by non-linguistic cognitive differences that
manifest in the thinking of the speakers. Linguistic determinism holds that the structure of
a language, such as its vocabulary, grammar, and other aspects, strongly influences or
determines the way its native speakers perceive or reason about the world (Andrew & Keil,
2001, p. 444). The theory puts weight on the unconscious influence that language has on
habitual thought (Skotko, 1997), highlighting that language comes first and influences
thought.
Recent decades have witnessed research and demonstrations indicating that language
affects cognition. A major contributor to this approach is Lev Vygotsky (2009), a Russian
psychologist whose theory related thinking and speaking. He addressed the
developmentally changing relationship between language and thought, considering the two
as dynamically related. He argued that there is an emergence of a vague thought, which is
then completed in language. He considered it an unfolding process that goes back and
forth from thought to word and from word to thought. He argued that thought is originally
non-verbal and language non-intellectual, and that the two only meet at around age two,
when thought turns verbal and speech turns rational. Thinking and cognition result from
the specific culture in which an individual grows up; thus language has a major influence
on a child as it acquires thought through words. His work has had substantial influence on
the study of mind and language. Cognitive science perceives the relationship between
language and thought differently from Whorfianism, adopting the approach that thought
does not equal language and that, rather than thought and perception being determined by
language, language is influenced by thought, as expressed by George Lakoff: "Language is
based on cognition" (Flohr, n.d.). This approach perceives cognition and language as
existing side by side. It argues that humans' interaction with each other generates thoughts.
To interact, however, language is a vital element; thus the theory suggests that language
and thought are integrated and cannot be perceived separately. This is demonstrated by
studies showing that changing how people talk changes how they think: learning new color
words enhances a person's ability to discriminate colors, and learning new ways of talking
about time imparts a new way of thinking about it (Leva, 2011).
The Whorfian theory was subjected to various criticisms from psychology. The first, as
argued by Steven Pinker, Wason, and Johnson-Laird, is the lack of evidence that a language
induces a particular way of thinking about the world in its speakers (Skotko, 1997; Leva,
2011). By the 1970s, the Sapir-Whorf hypothesis had lost favor with most scientists, with
most adopting theories that considered language and thought to be universal (Leva, 2011).
The critics agreed that language expresses thought but rejected the idea that language can
influence the content of thought. The theory has also been criticized for its extreme view
that people who use a language cannot understand a concept if it is lacking in their
language. Despite the criticism of Whorf's linguistic determinism, recent research indicates
that people who speak different languages really do think differently and that language
does influence an individual's perception of reality. Leva (2011) argues that language
shapes basic aspects of thought, and Guy Deutscher observes that language influences an
individual's mind not through what it allows the individual to think but rather through what
it obliges the individual to think (Jones, 2010). In reality, speakers of a language can
understand a concept even if it is not encoded in their language: for example, the German
word Schadenfreude has no equivalent in English, yet English speakers understand its
meaning, rejoicing at the bad luck of others. Another example is Mandarin Chinese
speakers who, although their language does not have words marking present, past, and
future tense, nevertheless understand these concepts. Language influences and reinforces
our thought processes. Lev Vygotsky argued that thought is not only expressed in words but
comes into existence through them (as cited in Perkins, 1997). In a related view, Perkins
(1997) points out that "Although thinking involves much more than we can say, we would
have far less access to that 'more' without the language of thinking" (p. 368).
Research by Stephen Levinson shows, for example, that people who speak languages that
rely on absolute directions perform better at keeping track of their locations, even on
unfamiliar ground, than local people in the same places who do not speak such languages
(Leva, 2011). How an individual perceives such aspects as time and space is affected by
language. For example, most European languages express time as horizontal, whereas
Mandarin Chinese expresses it as vertical. Similarly, English expresses the duration of time
in terms of length, for example "a short meeting," whereas Spanish uses the concept of size,
for example "little" (Jones, 2010). Other such aspects include action orientation or
conditional references, which provide anecdotal hints of possible effects. An example is the
difference in expressing cause and effect exhibited when a video is shown to English and
Japanese speakers. English speakers are likely to express it as "she broke the glass," with
reference to the person who broke the glass, irrespective of whether it was accidental,
whereas Japanese speakers express it as "the glass broke itself," with less emphasis on the
doer than if the action were intentional (Jones, 2010).
These differences in language also affect how people construe what happened and affect
eyewitness memory (Leva, 2011). In the above example, English speakers asked to recall
the event tend to remember the accident as more agentive and thus identify the doer more
easily than their Japanese counterparts. Language influences not only memory, but also the
degree of ease in learning new things (Leva, 2011). Children who speak a language that
marks base-10 structure more clearly than English, for example, grasp the base-10 insight
sooner. The number of syllables in number words also affects such things as remembering
a phone number. People rely on language even in doing small things, and the categories
and distinctions in languages considerably influence an individual's mental life. As
expressed by Leva (2011), "What researchers have been calling 'thinking' this whole time
actually appears to be a collection of both linguistic and nonlinguistic processes. As a
result, there may not be a lot of adult human thinking where language does not play a role."
3. Language Acquisition
At birth, the infant vocal tract is in some ways more like that of an ape than that of
an adult human. In particular, the tip of the velum reaches or overlaps with the tip of the
epiglottis. As the infant grows, the tract gradually reshapes itself into the adult pattern.
During the first two months of life, infant vocalizations are mainly expressions of
discomfort (crying and fussing), along with sounds produced as a by-product of reflexive
or vegetative actions such as coughing, sucking, swallowing and burping. There are some
nonreflexive, nondistress sounds produced with a lowered velum and a closed or nearly
closed mouth, giving the impression of a syllabic nasal or a nasalized vowel. During the
period from about 2-4 months, infants begin making "comfort sounds", typically in
response to pleasurable interaction with a caregiver. The earliest comfort sounds may be
grunts or sighs, with later versions being more vowel-like "coos". The vocal tract is held in
a fixed position. Initially comfort sounds are brief and produced in isolation, but later
appear in series separated by glottal stops. Laughter appears around 4 months. During the
period from 4-7 months, infants typically engage in "vocal play", manipulating pitch (to
produce "squeals" and "growls"), loudness (producing "yells"), and also manipulating tract
closures to produce friction noises, nasal murmurs, "raspberries" and "snorts".
Both vocal play and babbling are produced more often in interactions with caregivers, but
infants will also produce them when they are alone.
No other animal does anything like babbling. It has often been hypothesized that vocal
play and babbling have the function of "practicing" speech-like gestures, helping the infant
to gain control of the motor systems involved, and to learn the acoustical consequences of
different gestures (www.ling.upenn.edu).
At about ten months, infants start to utter recognizable words. Some word-like
vocalizations that do not correlate well with words in the local language may consistently
be used by particular infants to express particular emotional states.
Young children often use words in ways that are too narrow or too broad: "bottle" used
only for plastic bottles; "teddy" used only for a particular bear; "dog" used for lambs, cats,
and cows as well as dogs; "kick" used for pushing and for wing-flapping as well as for
kicking. These underextensions and overextensions develop and change over time in an
individual child's usage (www.ling.upenn.edu).
Clever experiments have shown that most infants can give evidence (for instance, by gaze
direction) of understanding some words at the age of 4-9 months, often even before
babbling begins. In fact, the development of phonological abilities begins even earlier.
Newborns can distinguish speech from non-speech, and can also distinguish among speech
sounds (e.g. [t] vs. [d] or [t] vs. [k]); within a couple of months of birth, infants can
distinguish speech in their native language from speech in other languages.
Early linguistic interaction with mothers, fathers and other caregivers is almost certainly
important in establishing and consolidating these early abilities, long before the child is
giving any indication of language abilities.
Children's earliest word uses sometimes coincide with adult usage but may also depart
from it in quite striking ways. Both 19th- and 20th-century diarists, for example, noted
numerous occasions where children overextended their words and used them to refer
to things that would not be covered by the adult word. For example, a two-year-old might
overextend the word "dog" to refer to cats, sheep, horses, and a variety of other four-
legged mammals. Why do children do this? One possibility is that they do not yet
distinguish among the mammal types they are referring to in this way. Another possibility is
communicative: they may well know that their word is not the right one, but they don't
have, or can't readily access, the right word, so they make do with a term close by
(Clark, 2003: 88).
In the beginning, infants add active vocabulary somewhat gradually. Two studies provide
measures of active vocabulary development: the Nelson study was based on diaries kept
by mothers of all of their children's utterances, while the Fenson study was based on asking
mothers to check words on a list to indicate which they think their child produces.
There is often a spurt of vocabulary acquisition during the second year. Early words
are acquired at a rate of 1-3 per week (as measured by production diaries); in many cases
the rate may suddenly increase to 8-10 new words per week, after 40 or so words have
been learned. However, some children show a more steady rate of acquisition during these
early stages. The rate of vocabulary acquisition definitely does accelerate in the third year
and beyond: a plausible estimate would be an average of 10 words a day during pre-school
and elementary school years.
During the second year, word combinations begin to appear. Novel combinations (where
we can be sure that the result is not being treated as a single word) appear sporadically as
early as 14 months. At 18 months, 11% of parents say that their child is often combining
words, and 46% say that (s)he is sometimes combining words. By 25 months, almost all
children are sometimes combining words, but about 20% are still not doing so "often."
Typical early combinations include:
1. Doggy bark
2. Ken water (for "Ken is drinking water")
3. Hit doggy
Some combinations with certain closed-class morphemes begin to occur as well: "my
turn", "in there", etc. However, these are the closed-class words such as pronouns and
prepositions that have semantic content in their own right that is not too different from that
of open-class words. The more purely grammatical morphemes -- verbal inflections and
verbal auxiliaries, nominal determiners, complementizers etc. -- are typically absent.
Since the earliest multi-unit utterances are almost always two morphemes long -- two
being the first number after one! -- this period is sometimes called the "two-word stage".
Quite soon, however, children begin sometimes producing utterances with more than two
elements, and it is not clear that the period in which most utterances have either one or two
lexical elements should really be treated as a separate stage.
In the early multi-word stage, children who are asked to repeat sentences may simply leave
out the determiners, modals and verbal auxiliaries, verbal inflections, etc., and often
pronouns as well. The same pattern can be seen in their own spontaneous utterances.
At about the age of two, children first begin to use grammatical elements. In English, this
includes finite auxiliaries ("is", "was"), verbal tense and agreement affixes ("-ed" and '-s'),
nominative pronouns ("I", "she"), complementizers ("that", "where"), and determiners
("the", "a"). The process is usually a somewhat gradual one, in which the more telegraphic
patterns alternate with adult or adult-like forms, sometimes in adjacent utterances.
Over a year to a year and a half, sentences get longer, grammatical elements are less often
omitted and less often inserted incorrectly, and multiple-clause sentences become
more common (www.ling.upenn.edu).
5. Language Comprehension
Deriving meaning from spoken language involves much more than knowing the meaning
of words and understanding what is intended when those words are put together in a
certain way. The following categories of capacity, knowledge, skill, and dispositions are all
brought to bear in fully comprehending what another person says.
b. Hearing and Auditory Processing. Understanding a spoken utterance assumes that the
listener's hearing is adequate and that the spoken sounds are correctly perceived as
phonemes of English (or whatever language is spoken). Phonemes are the smallest units of
spoken language that make a difference to meaning, corresponding roughly to the letters
in a word (e.g., the sounds that "t," "a," and "n" make in the word "tan"). Auditory processing
of language also includes the ability to integrate the separate sounds of a word into the
perception of a meaningful word and of sequences of meaningful words.
c. Word Knowledge and World Knowledge. Word knowledge includes knowing the
meaning of words (e.g., understanding them when they are spoken), including the multiple
meanings of ambiguous words. Knowing the meaning of a word is more than knowing
what (if anything) that word refers to. Rather, it is possession of a large set of meaning
associations that comprise the word's full meaning. For example, knowing the meaning of
the word "horse" includes knowing that horses are animals, that they engage in specific
types of activities, that they have many uses, that they have specific parts, that they have a
certain size, shape, and other attributes, that they are characteristically found in specific
places, and the like. Understanding spoken language requires an adequate vocabulary,
which is a critical component of the semantics of a language. Word meanings may be
concrete (e.g., "ball" refers to round objects that bounce) or abstract (e.g., "justice" refers
to fairness in the pursuit or distribution of various types of goods and services). World
knowledge includes understanding the realities in the world (objects and their attributes,
actions and their attributes, people, relationships, and the like) that words refer to and
describe. For example, if a student has no knowledge of computers, then it is impossible to
fully understand the word "computer."
d. Knowledge of Word Organization. Syntax (or grammar) refers to the rules that govern
the organization of words in a sentence or utterance. Comprehending an utterance requires
an ability to decipher the meaning implicit in the organization of words. For example,
"Tom fed the dog" and "The dog fed Tom" have different meanings despite containing
exactly the same words. Morphology (a component of grammar) refers to rules that govern
meaning contained in the structure of the words themselves. Changes within words (e.g.,
adding an "s" to "dog" to get "dogs," or adding an "ed" to "kick" to get "kicked") affect
meaning. Comprehending an utterance requires an ability to decipher the meaning
associated with such modifications of the words.
e. Discourse
Just as there are rules that govern how speakers put words together in a sentence to
communicate their intended meaning, there are also rules that govern how sentences (or
thoughts) are organized to effectively tell stories, describe objects and people, give
directions, explain complex concepts or events, influence people's beliefs and actions, and
the like. These are called rules of discourse. Effective comprehension of extended
language (e.g., listening to a story or a lecture) assumes that the listener has some idea of
what to listen for and in what order that information might come.
f. Pragmatics
Pragmatics refers to the rules governing the use of language in context (including social
context) for purposes of sending and receiving varied types of messages, maintaining a
flow of conversation, and adhering to social rules that apply to specific contexts of
interaction. On the comprehension side of communication, the first of these three types of
rules is most critical. For example, comprehending the sentence "I will do it" requires
deciding whether the speaker intends to make a promise, a prediction, or a threat. Similarly,
"We'd love to have you over for dinner" could be an invitation, a statement of an abstract
desire, or an empty social nicety. Or "Johnny, I see you've been working hard at cleaning
your room" could be a description of hard work or a mother's ironic criticism of Johnny for
not working on his room. In each case, correct interpretation of the utterance requires
consideration of context information, knowledge of the speaker, understanding of events
that preceded the interaction, and general social knowledge.
g. Indirect Meanings include metaphor (e.g., "He's a real spitfire"), sarcasm and irony
(e.g., "You look terrific" said to a person who appears to be very sick), idioms or other
figures of speech (e.g., "People who live in glass houses shouldn't throw stones"),
hyperbole (e.g., "The story I wrote is about a million pages long!"), and personification
(e.g., "Careful! Not studying for a test can jump up and bite you!"). Comprehending
indirect meanings often requires abstract thinking and consideration of context cues.
Students with brain injury often have significant difficulty deciphering the meaning of such
indirect communication unless the specific use of words was familiar before the injury.
Understanding new metaphors, figures of speech and the like makes significant demands
on cognitive processing (e.g., working memory, reasoning), discussed next.
The perception of spoken words would seem to be an extremely difficult task. Speech
is distributed in time, a fleeting signal that has few reliable cues to the boundaries between
segments and words. The paucity of cues leads to what is called the segmentation problem,
or the problem of how listeners hear a sequence of discrete units even though the acoustic
signal itself is continuous. Other features of speech could cause difficulty for listeners as
well. Certain phonemes are omitted in conversational speech, others change their
pronunciations depending on the surrounding sounds (e.g., /n/ may be pronounced as [m]
in "lean bacon"), and many words have everyday pronunciations (e.g., "going to" frequently
becomes "gonna"). Despite these potential problems, we usually seem to perceive speech
automatically and with little effort. Whether we do so using procedures that are unique to
speech and that form a specialized speech module (Liberman & Mattingly, 1985), or
whether we do so using more general capabilities, it is clear that humans are well adapted
for the perception of speech.
Listeners attempt to map the acoustic signal onto a representation in the mental
lexicon beginning almost as soon as the signal starts to arrive. The cohort model, first
proposed by Marslen-Wilson and Welsh (1978), illustrates how this may occur. According
to this theory, the first few phonemes of a spoken word activate a set or cohort of word
candidates that are consistent with that input. These candidates compete with one another
for activation. As more acoustic
input is analyzed, candidates that are no longer consistent with the input drop out of the set.
This process continues until only one word candidate matches the input; the best fitting
word may be chosen if no single candidate is a clear winner. Supporting this view, listeners
sometimes glance first at a picture of a candy when instructed to pick up the candle
(Allopenna, Magnuson, & Tanenhaus, 1998). This result suggests that a set of words
beginning with /kæn/ is briefly activated. Listeners may glance at a picture of a handle, too,
suggesting that the cohort of word candidates also includes words that rhyme with the
target. Indeed, later versions of the cohort theory (Marslen-Wilson, 1987; 1990) have
relaxed the insistence on perfectly matching input from the very first phoneme of a word.
Other models (McClelland & Elman, 1986; Norris, 1994) also advocate continuous
mapping between spoken input and lexical representations, with the initial portion of the
spoken word exerting a strong but not exclusive influence on the set of candidates. The
cohort model and the model of McClelland and Elman (1986) are examples of interactive
models, those in which higher processing levels have a direct, top-down influence on
lower levels. In particular, lexical knowledge can affect the perception of phonemes. A
number of researchers have found evidence for interactivity in the form of lexical effects
on the perception of sublexical units. Wurm and Samuel (1997), for example, reported that
listeners' knowledge of words can lead to the inhibition of certain phonemes. Samuel
(1997) found additional evidence of interactivity by studying the phenomenon of phonemic
restoration. This refers to the fact that listeners continue to "hear" phonemes that have
been removed from the speech signal and replaced by noise. Samuel discovered that the
restored phonemes produced by lexical activation led to reliable shifts in how listeners
labeled ambiguous phonemes.
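To make the winnowing process described by the cohort model more concrete, the sketch
below implements only the strict prefix-matching idea of the original formulation (not the
relaxed, rhyme-sensitive later versions); the toy lexicon, the phoneme notation, and the
function names are illustrative assumptions, not taken from the studies cited above.

```python
# A minimal sketch of the candidate-winnowing idea behind the cohort model.
# The lexicon and the phoneme notation are toy assumptions for illustration.

# Toy lexicon: words represented as tuples of phoneme symbols.
LEXICON = {
    "candle": ("k", "ae", "n", "d", "ah", "l"),
    "candy":  ("k", "ae", "n", "d", "iy"),
    "handle": ("h", "ae", "n", "d", "ah", "l"),
    "cat":    ("k", "ae", "t"),
}

def cohort(phonemes_heard):
    """Return the words still consistent with the phonemes heard so far."""
    n = len(phonemes_heard)
    return {
        word for word, phones in LEXICON.items()
        if phones[:n] == tuple(phonemes_heard)
    }

def recognize(phoneme_stream):
    """Winnow the cohort phoneme by phoneme until one candidate remains."""
    heard = []
    for phoneme in phoneme_stream:
        heard.append(phoneme)
        candidates = cohort(heard)
        print(f"after {heard}: {sorted(candidates)}")
        if len(candidates) == 1:
            return candidates.pop()
    return None  # input exhausted before a unique winner emerged

if __name__ == "__main__":
    # /k ae n d/ keeps both "candy" and "candle" in the cohort; the next
    # phoneme disambiguates, echoing the brief looks to the candy picture
    # reported by Allopenna et al. (1998).
    print(recognize(["k", "ae", "n", "d", "ah", "l"]))
```

Running the sketch shows the cohort shrinking phoneme by phoneme, with "candy" and
"candle" both surviving through the first four phonemes.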
Modular models, which do not allow top-down perceptual effects, have had varying
success in accounting for some of the findings just described. The race model of Cutler and
Norris (1979; see also Norris, McQueen, & Cutler, 2000) is one example of such a model.
The model has two routes that race each other -- a pre-lexical route, which computes
phonological information from the acoustic signal, and a lexical route, in which the
phonological information associated with a word becomes available when the word itself is
accessed. When word-level information appears to affect a lower-level process, it is
assumed that the lexical route won the race. Importantly, though, knowledge about words
never influences perception at the lower (phonemic) level. There is currently much
discussion about whether all of the experimental findings suggesting top-down effects can
be explained in these terms or whether interactivity is necessary (see Norris et al., 2000,
and the associated commentary).
Although it is a matter of debate whether higher-level linguistic knowledge affects
the initial stages of speech perception, it is clear that our knowledge of language and its
patterns facilitates perception in some ways. For example, listeners use phonotactic
information, such as the fact that initial /tl/ is illegal in English, to help identify phonemes
and word boundaries (Halle, Segui, Frauenfelder, & Meunier, 1998). As another example,
listeners use their knowledge that English words are often stressed on the first syllable to
help parse the speech signal into words (Norris, McQueen, & Cutler, 1995). These types of
knowledge help us solve the segmentation problem in a language that we know, even
though we perceive an unknown language as an undifferentiated string.
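As a rough illustration of the segmentation problem itself (not of any particular model
discussed above), the sketch below tries to carve an unspaced string, standing in for the
continuous speech signal, into words using only a stored word list; the tiny vocabulary and
the exhaustive search are illustrative assumptions.

```python
# A rough illustration of the segmentation problem: a continuous signal
# (here, a string with no spaces) must be carved into words using stored
# lexical knowledge. The word list and the strategy are toy assumptions.

KNOWN_WORDS = {"the", "cat", "cats", "at", "sat", "on", "a", "mat"}

def segmentations(signal, prefix=()):
    """Yield every way of splitting `signal` into known words."""
    if not signal:
        yield list(prefix)
        return
    for end in range(1, len(signal) + 1):
        candidate = signal[:end]
        if candidate in KNOWN_WORDS:
            yield from segmentations(signal[end:], prefix + (candidate,))

if __name__ == "__main__":
    # Without word boundaries in the input, lexical knowledge alone still
    # leaves ambiguity ("cat sat" vs. "cats at"), which is one reason cues
    # such as phonotactics and word stress are useful to listeners.
    for parse in segmentations("thecatsatonamat"):
        print(parse)
```

Even with full lexical knowledge, the toy input has two possible parses, which is one way of
seeing why additional cues help solve the segmentation problem.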
6. Language Production
Language production refers to the process involved in creating and expressing meaning
through language. According to Levelt (1989), language production contains four
successive stages: (1) conceptualization, (2) formulation, (3) articulation, and (4) self-
monitoring (Scovel, 1998: 27).
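Purely as a schematic picture of what "four successive stages" means, the sketch below
chains toy functions named after Levelt's stages; the function bodies and names are
placeholder assumptions, not an implementation of the model Levelt or Scovel describe.

```python
# A schematic sketch of a four-stage production pipeline named after
# Levelt's stages. The bodies are toy placeholders for illustration only.

def conceptualize(intention: str) -> str:
    # Decide what to express (the preverbal "message").
    return f"MESSAGE({intention})"

def formulate(message: str) -> str:
    # Map the message onto words and grammatical structure.
    return f"SENTENCE[{message}]"

def articulate(sentence: str) -> str:
    # Turn the linguistic plan into overt (here, printable) output.
    return f"SPEECH<{sentence}>"

def self_monitor(output: str) -> bool:
    # Inspect one's own output; real speakers repair errors they detect.
    return "ERROR" not in output

def produce(intention: str) -> str:
    # The stages run in succession; monitoring checks the result
    # (an actual repair loop is not modeled here).
    speech = articulate(formulate(conceptualize(intention)))
    self_monitor(speech)
    return speech

if __name__ == "__main__":
    print(produce("greet the listener"))
```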
-Speech errors
The scientific study of speech errors, commonly called "slips of the tongue" or
"tongue-slips," can provide useful clues to the processes of language production: they
can tell us where a speaker stops to think.
____________________________________________________________
Type              Example
____________________________________________________________
(1) Shift          "That's so she'll be ready in case she decide to hits it." (decides to hit it)
(2) Exchange       "Fancy getting your model resnosed." (getting your nose remodeled)
(3) Anticipation   "Bake my bike." (take my bike)
(4) Perseveration  "He pulled a pantrum." (tantrum)
(5) Addition       "I didn't explain this clarefully enough." (carefully enough)
(6) Deletion       "I'll just get up and mutter intelligibly." (unintelligibly)
(7) Substitution   "At low speeds it's too light." (heavy)
(8) Blend          "That child is looking to be spaddled." (spanked/paddled)
____________________________________________________________
Explanations of errors
(1) In Shifts, one speech segment disappears from its appropriate place and appears
somewhere else.
(2) Exchanges are, in fact, double shifts, in which two linguistic units exchange
places.
(3) Anticipations occur when a later segment takes the place of an earlier one. They
are different from shifts in that the segment that intrudes on another also remains in
its correct place and thus is used twice.
(4) Perseverations appear when an earlier segment replaces a later item.
(5) Additions add linguistic material.
(6) Deletions leave something out.
(7) Substitutions occur when one segment is replaced by an intruder. These are
different from the previously described slips in that the source of the intrusion may
not be in the sentence.
(8) Blends apparently occur when more than one word is being considered and the
two intended items fuse or blend into a single item.
Memory is the process by which information is encoded, stored, and retrieved. Encoding
allows information from the outside world to reach our senses in the form of chemical and
physical stimuli; in this first stage, the information must be changed so that it can enter the
encoding process. Storage is the second memory stage or process: it entails maintaining
information over periods of time. Finally, the third process is the retrieval of the
information that we have stored; we must locate it and return it to our consciousness. Some
retrieval attempts may be effortless because of the type of information.
From an information-processing perspective, then, there are three main stages in the
formation and retrieval of memory: encoding, storage, and retrieval.
Language and the brain is the area of psycholinguistic inquiry that covers the domains in
which language is acquired, processed, and produced through the brain's activity.
Many people assume the physical basis of language lies in the lips, the tongue, or
the ear. But deaf and mute people can also possess language fully. People who have no
capacity to use their vocal cords may still be able to comprehend language and use its
written forms. And human sign language, which is based on visible gesture rather than the
creation of sound waves, is an infinitely creative system just like spoken forms of
language. But the basis of sign language is not in the hand, just as spoken language is not
based in the lips or tongue. There are many examples of aphasics who lose both the ability
to write and the ability to express themselves using sign language, yet they never lose
manual dexterity in other tasks, such as sipping with a straw or tying their shoes.
Language is brain stuff--not tongue, lip, ear, or hand stuff. The language organ is
the mind. More specifically, the language faculty seems to be located in certain areas of the
left hemispheric cortex in most healthy adults. A special branch of linguistics, called
neurolinguistics, studies the physical structure of the brain as it relates to language
production and comprehension.
1). In all humans, the right hemisphere controls the left side of the body; the left
hemisphere controls the right side of the body. This arrangement--called contralateral
neural control--is not limited to humans but is also present in all vertebrates--fish, frogs,
lizards, birds and mammals. In invertebrates such as worms, on the other hand, the right
hemisphere controls the right side and the left hemisphere controls the left side. The
contralateral arrangement of neural control thus might be due to an ancient evolutionary
change which occurred in the earliest vertebrates over half a billion years ago. The earliest
vertebrates must have undergone a 180° turn of the brain stem on the spinal cord so that the
pathways from brain to body side became crossed. The probability that such a primordial
twist did occur is also borne out by the fact that invertebrates have their main nerve
pathways on their bellies and their circulatory organs on their backs, while all vertebrates
have their heart in front and their spinal cord in back--just as one would expect if the 180°
twist of the brain stem vis-a-vis the body did take place.
2). Another crucial feature of brain physiology is that each hemisphere has somewhat
unique functions (unlike other paired organs such as the lungs, kidneys, breasts or testicles
which have identical functions). In other words, hemisphere function is asymmetrical.
This is most strikingly the case in humans, where the right hemisphere, in addition to
controlling the left side of the body, also controls spatial acuity, while the left hemisphere,
in addition to controlling the right side of the body, controls abstract reasoning and
physical tasks which require a step-by-step progression. It is important to note that in
adults, the left hemisphere also controls language; even in most left-handed patients,
lateralization of language skills in the left hemisphere is completed by the age of puberty.
Now, why should specialized human skills such as language and abstract reasoning have
developed in the left hemisphere instead of the right? Why didn't these skills develop
equally in both hemispheres? The answer seems to combine the principle of functional
economy with increased specialization. In nature, specialization for particular tasks often
leads to physical asymmetry of the body (witness the lobster's claws), where limbs or other
parts of the body differentiate to perform a larger variety of tasks with greater sophistication
(the same might be said to have happened in human society with the rise of different trades
and the division of labor).
Because of this specialization, one hemisphere--in most individuals, for some reason,
it is the right hemisphere--came to control matters relating to 3D spatial acuity, the
awareness of position in space in all directions simultaneously. Thus, in modern humans,
artistic ability tends to be centered in various areas of the right hemisphere.
The left hemisphere, on the other hand, came to control patterns that progress step-by-
step in a single dimension, such as our sense of time progression, or the logical steps
required in performing feats of manual dexterity such as the process of fashioning a stone
axe. This connects with right-handedness. Most humans are born with a lopsided
preference for performing skills of manual dexterity with the right hand--the hand
controlled by the left hemisphere. The left hand holds an object in space while the right
hand manipulates that object to perform tasks which require a step-by-step progression.
Obviously, this is a better arrangement than if both hands were equally clumsy at
performing complex, multi-step tasks, or if both sides of the brain were equally mediocre
at thinking abstractly or at processing information about one's three-dimensional
surroundings. So, human hemispheric asymmetry seems to have developed to serve very
practical purposes.
How do we know that the left hemisphere controls language in most adults? There is a
great deal of physical evidence for the left hemisphere as the language center in the
majority of healthy adults.
1) Tests have demonstrated increased neural activity in parts of the left hemisphere
when subjects are using language (PET scans--Positron Emission Tomography--in which
the patient is injected with a mildly radioactive substance, which is absorbed more quickly
by the more active areas of the brain). The same types of tests have demonstrated that
artistic endeavor normally draws more heavily on the neurons of the right hemispheric
cortex.
2) In instances when the corpus callosum is severed by deliberate surgery to ease
epileptic seizures, the subject cannot verbalize about objects visible only in the left field of
vision or held in the left hand. (Remember that in some individuals there seems to be
language only in the right brain; in a few individuals, there seems to be a separate language
center in each hemisphere.)
3) Another clue has to do with the evidence from studies of brain damage. A person
with a stroke in the right hemisphere loses control over parts of the left side of the body and
sometimes also suffers a diminution of artistic abilities. But language skills are not
impaired: even if the left side of the mouth is crippled, the brain can handle language as
before. A person with a stroke in the left hemisphere loses control of the right side of the
body; also, 70% of adult patients with damage to the left hemisphere will experience at
least some language loss, which is not due only to the lack of control of the muscles on the
right side of the mouth--communication of any sort is disrupted in a variety of ways that
are not connected with the voluntary muscles of the vocal apparatus. The cognitive loss of
language is called aphasia; only 1% of adults with damage to the right hemisphere
experience any permanent language loss (www.pandora.cii.wwu.edu).
Dyslexia is a very broad term defining a learning disability that impairs a person's
fluency or comprehension accuracy in being able to read, and which can manifest
itself as a difficulty with phonological awareness, phonological decoding,
orthographic coding, auditory short-term memory, or rapid naming. Dyslexia is
separate and distinct from reading difficulties resulting from other causes, such as a
non-neurological deficiency with vision or hearing, or from poor or inadequate
reading instruction. It is believed that dyslexia can affect between 5 and 10 percent
of a given population although there have been no studies to indicate an accurate
percentage. There are three proposed cognitive subtypes of dyslexia: auditory,
visual and attentional. Reading disability, or dyslexia, is the most common
learning disability, although in the research literature it is considered to be a receptive
language-based learning disability. Researchers at MIT found that people with
dyslexia exhibited impaired voice-recognition abilities. Accomplished adult
dyslexics may be able to read with good comprehension, but they tend to read more
slowly than non-dyslexics and may perform more poorly at nonsense word reading
(a measure of phonological awareness) and spelling. Dyslexia is not an intellectual
disability, since dyslexia and IQ are not interrelated, as a result of cognition
developing independently (Ferrer et al., 2010: 93).
Reflection