Psycholinguistics Handout 2020

Introduction to Psycholinguistics

Compiled by Najla Farhani, M.Hum

1. Definition and Scope of Psycholinguistics


1.1. Introduction
Psychologists have long been interested in language, but psycholinguistics as a
field of study did not emerge until the 1960s. It was motivated by Chomsky ’s work in
linguistics, and by his claim that the special properties of language require special
mechanisms to handle it (e.g., Chomsky, 1959). The special feature of language on which
Chomsky focused was its productivity. Possessed of a grammar, or syntax, humans can
produce and understand novel sentences that carry novel messages. We do this in a way
that is exquisitely sensitive to the structure of the language. For example, we interpret The
umpire helped the child to third base and The umpire helped the child on third base as
conveying distinct messages, although the sentences differ in just one small word. We
know that He showed her baby the pictures and He showed her the baby pictures describe
quite different events, even though the difference in word order is slight. We can even
make some sense of Colorless green ideas sleep furiously (Chomsky, 1971), which is
semantically anomalous but syntactically well formed.
Early psycholinguists described our comprehension and production of language in
terms of the rules that were postulated by linguists (Fodor, Bever, & Garrett, 1974). The
connections between psychology and linguistics were particularly close in the area of
syntax, with psycholinguists testing the psychological reality of various proposed linguistic
rules. As the field of psycholinguistics developed, it became clear that theories of sentence
comprehension and production cannot be based in any simple way on linguistic theories;
psycholinguistic theories must consider the properties of the human mind as well as the
structure of the language.
Psycholinguistics has thus become its own area of inquiry, informed by but not
totally dependent on linguistics. Although Chomsky and the early psycholinguists focused
on the creative side of language, language also has its rote side. For example, we store a
great deal of information about the properties of words in our mental lexicon, and we
retrieve this information when we understand or produce language. On some views,
different kinds of mechanisms are responsible for the creative and the habitual aspects of
language. For example, we may use morpheme-based rules to decompose a complex word
like rewritable the first few times we encounter it, but after several exposures we may
begin to store and access the word as a unit (Caramazza, Laudanna, & Romani, 1988;
Schreuder & Baayen, 1995). Dual-route views of this kind have been proposed in several
areas of psycholinguistics. According to such models, frequency of exposure determines
our ability to recall stored instances but not our ability to apply rules. Another idea is that a
single set of mechanisms can handle both the creative side and the rote side of language.
Connectionist theories (see Rumelhart & McClelland, 1986) take this view. Such theories
claim, for instance, that readers use the same system of links between spelling units and
sound units to generate the pronunciations of novel written words like tove and to access
the pronunciations of familiar words, be they words that follow typical spelling-to-sound
correspondences, like stove, or words that are exceptions to these patterns, like love (e.g.,
Plaut, McClelland, Seidenberg, & Patterson, 1996; Seidenberg & McClelland, 1989). In
this view, similarity and frequency both play important roles in processing, with novel
items being processed based on their similarity to known ones. The patterns are statistical
and probabilistic rather than all-or-none.
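To make the contrast with rule-based accounts more concrete, the following is a minimal, hypothetical sketch of processing by similarity and frequency: a novel word such as tove is matched against known words, and the best-supported neighbour lends its pronunciation pattern. The toy lexicon, the frequency values, and the similarity measure are assumptions for illustration only, not an actual connectionist model.

```python
# A minimal, hypothetical sketch of the single-mechanism idea: a novel word is
# pronounced by analogy to known words, with spelling similarity and frequency
# both contributing. The toy lexicon, the frequencies, and the similarity
# measure are illustrative assumptions, not a real connectionist network.

from difflib import SequenceMatcher

# spelling -> (rough pronunciation, relative frequency)
LEXICON = {
    "stove": ("/stouv/", 40),
    "cove":  ("/kouv/",  10),
    "love":  ("/luhv/", 900),   # exception to the -ove pattern
    "move":  ("/moov/", 700),   # another exception
}

def best_analogy(novel_word):
    """Score each known word by spelling similarity weighted by frequency and
    return the neighbour whose pronunciation pattern would be generalized."""
    def support(entry):
        word, (_, freq) = entry
        return SequenceMatcher(None, novel_word, word).ratio() * freq
    word, (pron, _) = max(LEXICON.items(), key=support)
    return word, pron

print(best_analogy("tove"))  # the winner depends on the similarity-frequency trade-off
```

In this sketch a very frequent exception like love can outweigh a more similar regular neighbour like stove, which is one crude way of picturing how similarity and frequency trade off in such models.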
Early psycholinguists, following Chomsky, tended to see language as an
autonomous system, insulated from other cognitive systems. In this modular view (see J.A.
Fodor, 1983), the initial stages of word and sentence comprehension are not influenced by
higher levels of knowledge. Information about context and about real-world constraints
comes into play only after the first steps of linguistic processing have taken place, giving
such models a serial quality. On an interactive view, in contrast, knowledge about
linguistic context and about the world plays an immediate role in the comprehension of
words and sentences. In this view, many types of information are used in parallel, with the
different sources of information working cooperatively or competitively to yield an
interpretation. Such ideas are often expressed in connectionist terms. Modular and
interactive views may also be distinguished in discussions of language production, where
one issue is whether there is a syntactic component that operates independently of
conceptual and phonological factors.
Another tension in current-day psycholinguistics concerns the proper role of
linguistics in the field. Work on syntactic processing, especially in the early days of
psycholinguistics, was very much influenced by developments in linguistics. Links
between linguistics and psycholinguistics have been less close in other areas, but they do
exist. For instance, work on phonological processing has been influenced by linguistic
accounts of prosody (the melody, rhythm, and stress pattern of spoken language) and of the
internal structure of syllables, and some work on word recognition and language
production has been influenced by linguistic analyses of morphology (the study of
morphemes and their combination). Although most psycholinguists believe that linguistics
provides an essential foundation for their field, some advocates of interactive approaches
have moved away from a reliance on linguistic rules and principles and toward a view of
language in terms of probabilistic patterns.

1.2. Definitions of Psycholinguistics

Psycholinguistics is a young and rapidly growing field, and there are now many psycholinguistic subspecializations. This section introduces the field by outlining the domain of psycholinguistic inquiry, the most frequently discussed areas, and the basic issues of concern to psycholinguistics.

William O'Grady et al. (2001) state that, in general, psycholinguistic studies have revealed that many of the concepts employed in the analysis of sound structure, word structure, and sentence structure also play a role in language processing. However, an account of language processing also requires that we understand how these linguistic concepts interact with other aspects of human processing to enable language production and comprehension.
John Field (2003), in turn, states that psycholinguistics draws on ideas and knowledge from a number of associated areas, such as phonetics, semantics and pure linguistics. There is a constant exchange of information between psycholinguists and those working in neurolinguistics, who study how language is represented in the brain. There are also close links with studies in artificial intelligence.
Moreover, Friedmann Pulvermüller (2009) states that psycholinguistics has classically focused on button-press tasks and reaction time experiments from which cognitive processes are inferred. The advent of neuroimaging opened new research perspectives for the psycholinguist, as it became possible to look at the neuronal mass activity that underlies language processing. Studies of brain correlates of psycholinguistic processes can complement behavioral results and, in some cases, can lead to direct information about the basis of psycholinguistic processes.
From all the definitions above, it can be concluded that psycholinguistics is the study of the psychological and neurobiological factors that enable humans to acquire, use, comprehend and produce language. Initial forays into psycholinguistics were largely philosophical ventures, due mainly to a lack of cohesive data on how the human brain functioned. Psycholinguistics covers the cognitive processes that make it possible to generate a grammatical and meaningful sentence out of vocabulary and grammatical structures, as well as the processes that make it possible to understand utterances, words, text, and so on. In short, psycholinguists seek to understand how language is done.

1.3. The Domain of Psycholinguistic Inquiry

Linguistics is the discipline that describes the structure of language, including its grammar, sound system, and vocabulary. The field of psycholinguistics is concerned with discovering the psychological processes by which humans acquire and use language. Conventionally, psycholinguistics addresses three major concerns (Gleason & Ratner, 1998: 3):
1. Language Acquisition: How people acquire and learn language.
2. Language Comprehension: How people understand spoken and written language.
3. Language Production: How people produce language.

2. Language and Thought

The relationship between language and thought has been a subject of considerable interest over the years. Arguments on the subject date back as far as Ancient Greece. The 1940s saw the rise and popularity of the Sapir-Whorf hypothesis, which suggested that speakers of different languages think differently. Later on, scientists raised criticism of the theory, with most adopting the view that language and thought are universal. Particular concern has been directed at establishing whether two people who speak differently also think differently. Recent research such as Leva (2011), however, suggests that the language people use may affect the way they think, such that language conditions an individual's perception of issues and actions. This empirical evidence suggests that language shapes thinking, putting to task the previously held theories of language universalism. The debate on the issue, however, continues. An understanding of this relationship has considerable implications for politics, education, marketing and other areas that involve human interaction. This paper aims to evaluate the relationship between language and thought. To achieve this, it evaluates five major aspects: whether language dictates thinking, whether language organizes thinking, whether people speaking different languages also think differently, whether multilingual individuals have broader thinking than monolinguals, and whether thought can exist without language. The paper is organized in two major sections: an evaluation of the various existing theories and concepts, and a subsequent section that specifically addresses the research questions specified above.

2.1. Main Theories and Concepts

a. Whorfianism

The Whorfian hypothesis was originally based on the works of German educator Wilhelm
von Humboldt in the 19th century who argued that; "Language is the formative organ of
thought. Intellectual activity, entirely mental, entirely internal, and to some extent passing
without trace, becomes through sound, externalized in speech and perceptible to the senses.
Thought and language are therefore one and inseparable from each other." (as quoted in
Skotko, 1997). The Whorfian hypothesis is one of the theories of linguistic relativism,
which argues that languages differ in basic approaches to reality. The theory was advanced in the 1940s by the American linguists Edward Sapir and Benjamin Whorf, whose studies of Native American tribes, particularly the Hopi, suggested that people using different languages think differently (Leva, 2011). The Whorfian hypothesis advances that language is just a part of culture, affected by the people who carry it, but that it also, in return, affects that culture and thought. Whorf's theory was based on his belief that how people perceive the world is affected by the structure of the language they use. Whorf argues that
where a language has no word for a concept, then the people using the language cannot
understand that concept. Whorf argues, “The world is represented in a kaleidoscopic flux
of impressions which has to be organized largely by linguistic systems in our minds” (as
cited in Andrew & Keil, 2001, p. 444). This hypothesis has two major aspects: linguistic relativism and linguistic determinism. Linguistic relativism holds that structural differences between languages are paralleled by non-linguistic cognitive differences that manifest in the thinking of the speakers. Linguistic determinism holds that the structure of a language, such as its vocabulary, grammar and other aspects, strongly influences or determines the way its native speakers perceive or reason about the world (Andrew & Keil, 2001, p. 444). The theory puts weight on the unconscious influence
that language has on habitual thoughts (Skotko, 1997) highlighting that language comes
first, and influences thought.

b. Cognition and Language

Recent decades have witnessed research and demonstrations indicating that language
affects cognition. A major contributor to this approach is Lev Vygotsky (2009), a Russian psychologist whose theory related thinking and speaking. He touched on the developmentally changing relationship between language and thought, considering the two as dynamically related. He argued that there is an emergence of vague thought, which is then completed in language, and he considered this an unfolding process that goes back and forth "from thought to word and word to thought." He argued that thought is originally non-verbal and language is non-intellectual, and that the two only meet at around age two, when thought turns verbal and speech turns rational. The concepts of thinking and cognition result from the specific culture in which an individual grows up; thus language has a major influence on a child as it acquires thought through words. His works have had substantial influence on the study of mind and language. Cognitive science perceives the relationship between language and thought differently from Whorfianism, adopting the approach that thought does not equal language: thought and perception are influenced by language, but language is in turn influenced by thought, as expressed by George Lakoff: "Language is based on cognition" (Flohr, n.d.). This approach perceives cognition and language as existing side by side. It argues that interaction between humans generates thoughts. To interact, however, language is a vital element; thus the theory suggests that language and thought are integrated and can therefore not be
perceived separately. This is demonstrated by studies showing that changing how people
talk changes how they think, learning new color words enhances a person’s ability to
discriminate color, and learning new ways of talking about time imparts a new way of
thinking about it (Leva, 2011).

c. Language Dictates Thinking

The Whorfian theory was subjected to various criticisms from psychology. First, as argued by Steven Pinker, Wason, and Johnson-Laird, there is a lack of evidence that a language influences its speakers toward a particular way of thinking about the world (Skotko, 1997; Leva, 2011). By the 1970s, the Sapir-Whorf hypothesis had lost favor with most scientists,
with most adopting theories that considered language and thought as universal (Leva,
2011). The critics agreed that language expresses thought but criticized the idea that
language can influence content and thought. The theory has been criticized for its extremist
view that people who use a language cannot understand a concept if it is lacking in their
language. Despite criticism on Whorf’s linguistic deterministic theory, recent research
indicates that people who speak different languages really do think differently and that
language does influence an individual's perception of reality. Leva (2011) argues that language shapes basic aspects of thought, and Guy Deutscher observes that language influences an individual's mind not through what it allows the individual to think but rather through what it obliges the individual to think (Jones, 2010). In a real sense, speakers of a language can understand a concept even if their language has no word for it. For example, although the German word "Schadenfreude" has no equivalent in English, English speakers understand its meaning, namely taking pleasure in the misfortune of others. Another example is Mandarin Chinese speakers who, although their language has no tense markers for present, past, and future, nevertheless understand these concepts. Language influences and
enforces our thought process. Lev Vygotsky argued that thought is not only expressed in
words but comes into existence through them (as cited in Perkins, 1997). In a related view
Perkins (1997) points out that, “Although thinking involves much more than we can say,
we would have far less access to that "more" without the language of thinking ” (368).
Research by Stephen Levinson shows, for example, that people who speak languages that rely on absolute directions perform better at keeping track of their location, even on unfamiliar ground, than local people placed in the same locations who speak different languages (Leva, 2011). How an individual perceives such aspects as time and space is affected by language. For example, most European languages express time as horizontal, whereas Mandarin Chinese expresses it as vertical. Similarly, English expresses duration in terms of length, for example a "short" meeting, whereas Spanish uses the concept of size, for example a "little" meeting (Jones, 2010). Such other aspects
include action orientation or conditional references that depict anecdotal hints of possible
effects. An example is the difference in how cause and effect are expressed when a video of an event is shown to English and Japanese speakers. English speakers are likely to express it as "she broke the glass," referring to the person who broke the glass irrespective of whether it was accidental, whereas Japanese speakers are more likely to express it as "the glass broke itself," with less emphasis on the doer when the action was unintentional (Jones, 2010). These differences in language also affect how people construe what happened and affect eyewitness memory (Leva, 2011). In the example above, English speakers asked to recall the event tend to remember the accident more agentively and thus identify the doer more easily than their Japanese counterparts. Language influences not only memory, but also the degree of ease in learning new things (Leva, 2011). Children who speak a language that marks base-10 structure more transparently than English, for example, grasp the base-10 insight sooner. The number of syllables in number words also affects such things as remembering a phone number. People rely on language even in doing small things, and the categories and distinctions in languages considerably influence an individual's mental life. As expressed by Leva (2011), "What
researchers have been calling "thinking" this whole time actually appears to be a collection
of both linguistic and nonlinguistic processes. As a result, there may not be a lot of adult
human thinking where language does not play a role”.

3. Language Acquisition

Language acquisition is the process by which humans acquire the capacity to


perceive and comprehend language, as well as to produce and use words to communicate.
The capacity to successfully use language requires one to acquire a range of tools including
syntax, phonetics, and an extensive vocabulary. This language might be vocalized as with
speech or manual as in sign.
Language acquisition usually refers to first language acquisition, which studies
infants' acquisition of their native language. This is distinguished from second language
acquisition, which deals with the acquisition (in both children and adults) of additional
languages.
Possessing a language is the quintessentially human trait: all normal humans speak,
no nonhuman animal does. Language is the main vehicle by which we know about other
people's thoughts, and the two must be intimately related. Every time we speak we are
revealing something about language, so the facts of language structure are easy to come
by; these data hint at a system of extraordinary complexity. Nonetheless, learning a first
language is something every child does successfully in a matter of a few years and without
the need for formal lessons. With language so close to the core of what it means to be
human, it is not surprising that children's acquisition of language has received so much
attention. Anyone with strong views about the human mind would like to show that
children's first few steps are steps in the right direction.

4. Stages of Language Acquisition in Children


In nearly all cases, children's language development follows a predictable sequence.
However, there is a great deal of variation in the age at which children reach a given
milestone. Furthermore, each child's development is usually characterized by gradual
acquisition of particular abilities: thus "correct" use of English verbal inflection will
emerge over a period of a year or more, starting from a stage where verbal inflections are
always left out, and ending in a stage where they are nearly always used correctly.
There are also many different ways to characterize the developmental sequence. On
the production side, one way to name the stages is as follows, focusing primarily on the
unfolding of lexical and syntactic knowledge (www.ling.upenn.edu):

 Babbling (6-8 months): repetitive CV patterns.
 One-word stage (9-18 months), better called the one-morpheme or one-unit stage, also known as the holophrastic stage: single open-class words or word stems.
 Two-word stage (18-24 months): "mini-sentences" with simple semantic relations.
 Telegraphic or early multiword stage (24-30 months), better called the multi-morpheme stage: "telegraphic" sentence structures built from lexical rather than functional or grammatical morphemes.
 Later multiword stage (30+ months): grammatical or functional structures emerge.

- Vocalizations in the first year of life

At birth, the infant vocal tract is in some ways more like that of an ape than that of
an adult human. In particular, the tip of the velum reaches or overlaps with the tip of the
epiglottis. As the infant grows, the tract gradually reshapes itself into the adult pattern.
During the first two months of life, infant vocalizations are mainly expressions of
discomfort (crying and fussing), along with sounds produced as a by-product of reflexive
or vegetative actions such as coughing, sucking, swallowing and burping. There are some
nonreflexive, nondistress sounds produced with a lowered velum and a closed or nearly
closed mouth, giving the impression of a syllabic nasal or a nasalized vowel. During the
period from about 2-4 months, infants begin making "comfort sounds", typically in
response to pleasurable interaction with a caregiver. The earliest comfort sounds may be
grunts or sighs, with later versions being more vowel-like "coos". The vocal tract is held in
a fixed position. Initially comfort sounds are brief and produced in isolation, but later
appear in series separated by glottal stops. Laughter appears around 4 months. During the
period from 4-7 months, infants typically engage in "vocal play", manipulating pitch (to
produce "squeals" and "growls"), loudness (producing "yells"), and also manipulating tract
closures to produce friction noises, nasal murmurs, "raspberries" and "snorts".

At about seven months, "canonical babbling" appears: infants start to make


extended sounds that are chopped up rhythmically by oral articulations into syllable-like
sequences, opening and closing their jaws, lips and tongue. The sounds produced are heard as stop-like and glide-like. Fricatives, affricates and liquids are more rarely
heard, and clusters are even rarer. Vowels tend to be low and open, at least in the
beginning. Repeated sequences are often produced, such as [bababa] or [nanana], as well
as "variegated" sequences in which the characteristics of the consonant-like articulations
are varied. The variegated sequences are initially rare and become more common later on.

Both vocal play and babbling are produced more often in interactions with caregivers, but
infants will also produce them when they are alone.

No other animal does anything like babbling. It has often been hypothesized that vocal
play and babbling have the function of "practicing" speech-like gestures, helping the infant
to gain control of the motor systems involved, and to learn the acoustical consequences of
different gestures (www.ling.upenn.edu).

- One word (holophrastic) stage

At about ten months, infants start to utter recognizable words. Some word-like vocalizations that do not correlate well with words in the local language may consistently be used by particular infants to express particular emotional states: one infant is reported to have used one such vocalization to express pleasure, and another is said to have used a different one to express "distress or discomfort". For the most part, recognizable words are used in a
context that seems to involve naming: "duck" while the child hits a toy duck off the edge of
the bath; "sweep" while the child sweeps with a broom; "car" while the child looks out of
the living room window at cars moving on the street below; "papa" when the child hears
the doorbell.

Young children often use words in ways that are too narrow or too broad: "bottle" used
only for plastic bottles; "teddy" used only for a particular bear; "dog" used for lambs, cats,
and cows as well as dogs; "kick" used for pushing and for wing-flapping as well as for
kicking. These underextensions and overextensions develop and change over time in an
individual child's usage (www.ling.upenn.edu).

- Perception vs. production

Clever experiments have shown that most infants can give evidence (for instance, by gaze
direction) of understanding some words at the age of 4-9 months, often even before
babbling begins. In fact, the development of phonological abilities begins even earlier.
Newborns can distinguish speech from non-speech, and can also distinguish among speech
sounds (e.g. [t] vs. [d] or [t] vs. [k]); within a couple of months of birth, infants can
distinguish speech in their native language from speech in other languages.
Early linguistic interaction with mothers, fathers and other caregivers is almost certainly
important in establishing and consolidating these early abilities, long before the child is
giving any indication of language abilities.

- Early Word Extension

Children’s earliest word uses sometimes coincide with adult usage but may also depart from it in quite striking ways. Both 19th- and 20th-century diarists, for example, noted numerous occasions where children overextended their words and used them to refer to things that would not be covered by the adult word. For example, a two-year-old might overextend the word “dog” to refer to cats, sheep, horses and a variety of other four-legged mammals. Why do children do this? One possibility is that they do not yet distinguish among the mammal types they are referring to in this way. Another possibility is communicative: they may well know that their word is not the right one, but they do not have, or cannot readily access, the right word, so they make do with a term close by (Clark, 2003: 88).

- Rate of vocabulary development

In the beginning, infants add active vocabulary somewhat gradually. Here are measures of
active vocabulary development in two studies. The Nelson study was based on diaries kept
by mothers of all of their children's utterances, while the Fenson study is based on asking
mothers to check words on a list to indicate which they think their child produces.

Milestone                   Nelson 1973 (18 children)    Fenson 1993 (1,789 children)
10 words                    15 months (range 13-19)      13 months (range 8-16)
50 words                    20 months (range 14-24)      17 months (range 10-24)
Vocabulary at 24 months     186 words (range 28-436)     310 words (range 41-668)

There is often a spurt of vocabulary acquisition during the second year. Early words
are acquired at a rate of 1-3 per week (as measured by production diaries); in many cases
the rate may suddenly increase to 8-10 new words per week, after 40 or so words have
been learned. However, some children show a more steady rate of acquisition during these
early stages. The rate of vocabulary acquisition definitely does accelerate in the third year
and beyond: a plausible estimate would be an average of 10 words a day during pre-school
and elementary school years.
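As a rough back-of-the-envelope illustration of these rates, the short sketch below adds up the figures quoted above; the starting vocabulary and the age range are assumptions made purely for illustration.

```python
# A rough back-of-the-envelope sketch of cumulative vocabulary growth using the
# rates quoted above. The starting figure and the age range are assumptions.

words_at_24_months = 300      # roughly, from the Nelson/Fenson figures above
preschool_years = 4           # assumed: from age 2 to about age 6
words_per_day = 10            # the "plausible estimate" quoted above

words_at_school_entry = words_at_24_months + words_per_day * 365 * preschool_years
print(words_at_school_entry)  # 14900, i.e. on the order of 15,000 words
```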

- Combining words: the emergence of syntax (two words)

During the second year, word combinations begin to appear. Novel combinations (where
we can be sure that the result is not being treated as a single word) appear sporadically as
early as 14 months. At 18 months, 11% of parents say that their child is often combining
words, and 46% say that (s)he is sometimes combining words. By 25 months, almost all
children are sometimes combining words, but about 20% are still not doing so "often."

- Early multi-unit utterances (telegraphic)

In some cases, early multiple-unit utterances can be seen as concatenations of individual


naming actions that might just as well have occurred alone: "mommy" and "hat" might be
combined as "mommy hat"; "shirt" and "wet" might be combined as "shirt wet". However,
these combinations tend to occur in an order that is appropriate for the language being
learned:

1. Doggy bark
2. Ken water (for "Ken is drinking water")
3. Hit doggy

Some combinations with certain closed-class morphemes begin to occur as well: "my
turn", "in there", etc. However, these are the closed-class words such as pronouns and
prepositions that have semantic content in their own right that is not too different from that
of open-class words. The more purely grammatical morphemes -- verbal inflections and
verbal auxiliaries, nominal determiners, complementizers etc. -- are typically absent.

Since the earliest multi-unit utterances are almost always two morphemes long -- two
being the first number after one! -- this period is sometimes called the "two-word stage".
Quite soon, however, children begin sometimes producing utterances with more than two
elements, and it is not clear that the period in which most utterances have either one or two
lexical elements should really be treated as a separate stage.

In the early multi-word stage, children who are asked to repeat sentences may simply leave
out the determiners, modals and verbal auxiliaries, verbal inflections, etc., and often
pronouns as well. The same pattern can be seen in their own spontaneous utterances:

1. "I can see a cow" repeated as "See cow" (Eve at 25 months)


2. "The doggy will bite" repeated as "Doggy bite" (Adam at 28 months)
3. Kathryn no like celery (Kathryn at 22 months)
4. Baby doll ride truck (Allison at 22 months)
5. Pig say oink (Claire at 25 months)
6. Want lady get chocolate (Daniel at 23 months)
7. "Where does Daddy go?" repeated as "Daddy go?" (Daniel at 23 months)
8. "Car going?" to mean "Where is the car going?" (Jem at 21 months)

The pattern of leaving out most grammatical/functional morphemes is called "telegraphic",


and so people also sometimes refer to the early multi-word stage as the "telegraphic stage".

- Acquisition of grammatical elements and the corresponding structures

At about the age of two, children first begin to use grammatical elements. In English, this
includes finite auxiliaries ("is", "was"), verbal tense and agreement affixes ("-ed" and '-s'),
nominative pronouns ("I", "she"), complementizers ("that", "where"), and determiners
("the", "a"). The process is usually a somewhat gradual one, in which the more telegraphic
patterns alternate with adult or adult-like forms, sometimes in adjacent utterances:

1. She's gone. Her gone school. (Domenico at 24 months)


2. He's kicking a beach ball. Her climbing up the ladder there. (Jem at 24 months).
3. I teasing Mummy. I'm teasing Mummy. (Holly at 24 months)
4. I having this. I'm having 'nana. (Olivia at 27 months).
5. I'm having this little one. Me'll have that. (Betty at 30 months).
6. Mummy haven't finished yet, has she? (Olivia at 36 months).

Over a year to a year and a half, sentences get longer, grammatical elements are less often
omitted and less often inserted incorrectly, and multiple-clause sentences become
commoner (www.ling.upenn.edu).

5. Language Comprehension

5.1. Spoken Words Comprehension

Deriving meaning from spoken language involves much more than knowing the meaning
of words and understanding what is intended when those words are put together in a
certain way. The following categories of capacity, knowledge, skill, and dispositions are all
brought to bear in fully comprehending what another person says.

a. Communication Awareness: Communication awareness includes knowing (1) that spoken language has meaning and purpose, (2) that spoken words, the organization of the
words, their intonation, loudness, and stress patterns, gestures, facial expression,
proximity, and posture all contribute to meaning, (3) that context factors need to be taken
into consideration in interpreting what people mean to communicate, (4) that it is easy to
misinterpret another’s communication, and (5) that it often requires effort to correctly
interpret another person’s intended meaning and that correct interpretation is worth the
effort!

b. Hearing and Auditory Processing: Understanding a spoken utterance assumes that the
listener’s hearing is adequate and that the spoken sounds are correctly perceived as
phonemes of English (or whatever language is spoken). Phonemes are the smallest units of
spoken language that make a difference to meaning – corresponding roughly to the letters
in a word (e.g., the sounds that ‘t’, ‘a’, and ‘n’ make in the word ‘tan’). Auditory processing
of language also includes the ability to integrate the separate sounds of a word into the
perception of a meaningful word and of sequences of meaningful words.

c. Word Knowledge and World Knowledge: Word knowledge includes knowing the
meaning of words (e.g., understanding them when they are spoken), including multiple
meanings of ambiguous words. Knowing the meaning of a word is more than knowing
what (if anything) that word refers to. Rather it is possession of a large set of meaning
associations that comprise the word’s full meaning. For example knowing the meaning of
the word “horse” includes knowing that horses are animals, that they engage in specific
types of activities, that they have many uses, that they have specific parts, that they have a
certain size, shape, and other attributes, that they are characteristically found in specific
places, and the like. Understanding spoken language requires an adequate vocabulary,
which is a critical component of the semantics of a language. Word meanings may be
concrete (e.g., “ball” refers to round objects that bounce) or abstract (e.g., “justice” refers
to fairness in the pursuit or distribution of various types of goods and services). World
knowledge includes understanding the realities in the world – objects and their attributes,
actions and their attributes, people, relationships, and the like – that words refer to and
describe. For example, if a student has no knowledge of computers, then it is impossible to
fully understand the word ‘computer’.

d. Knowledge of Word Organization: Syntax (or grammar) refers to the rules that govern
the organization of words in a sentence or utterance. Comprehending an utterance requires
an ability to decipher the meaning implicit in the organization of words. For example,
“Tom fed the dog” and “The dog fed Tom” have different meanings despite containing
exactly the same words. Morphology (a component of grammar) refers to rules that govern
meaning contained in the structure of the words themselves. Changes within words (e.g.,
adding an ‘s’ to ‘dog’ to get ‘dogs’, or adding an ‘ed’ to ‘kick’ to get ‘kicked’) affect
meaning. Comprehending an utterance requires an ability to decipher the meaning
associated with such modifications of the words.

e. Discourse

Just as there are rules that govern how speakers put words together in a sentence to
communicate their intended meaning, there are also rules that govern how sentences (or
thoughts) are organized to effectively tell stories, describe objects and people, give
directions, explain complex concepts or events, influence people’s beliefs and actions, and
the like. These are called rules of discourse. Effective comprehension of extended
language (e.g., listening to a story or a lecture) assumes that the listener has some idea of
what to listen for and in what order that information might come.

f. Social Knowledge and Pragmatics

Pragmatics refers to the rules governing the use of language in context (including social
context) for purposes of sending and receiving varied types of messages, maintaining a
flow of conversation, and adhering to social rules that apply to specific contexts of
interaction. On the comprehension side of communication, the first of these three types of
rules is most critical. For example, comprehending the sentence, “I will do it” requires
deciding whether the speaker intends to make a promise, a prediction, or a threat. Similarly
“We’d love to have you over for dinner” could be an invitation, a statement of an abstract
desire, or an empty social nicety. Or “Johnny, I see you’ve been working hard at cleaning
your room” could be a description of hard work or a mother’s ironic criticism of Johnny for
not working on his room. In each case, correct interpretation of the utterance requires
consideration of context information, knowledge of the speaker, understanding of events
that preceded the interaction, and general social knowledge.

g. Indirect Meanings include metaphor (e.g., “He’s a real spitfire”), sarcasm and irony
(e.g., “You look terrific” said to a person who appears to be very sick), idioms or other
figures of speech (e.g., “People who live in glass houses shouldn’t throw stones”),
hyperbole (e.g., “The story I wrote is about a million pages long!”), and personification
(e.g., “Careful! Not studying for a test can jump up and bite you!”). Comprehending
indirect meanings often requires abstract thinking and consideration of context cues.
Students with brain injury often have significant difficulty deciphering the meaning of such
indirect communication unless the specific use of words was familiar before the injury.
Understanding new metaphors, figures of speech and the like makes significant demands
on cognitive processing (e.g., working memory, reasoning), discussed next.

Cognitive Functions that Support Language Comprehension

 Attention: Comprehending spoken language requires the ability to focus attention


simultaneously on the speaker’s words and nonverbal behavior (e.g., gesture, facial
expression, body posture), to maintain that focus over time, to focus simultaneously
on one's own response, and to flexibly shift attentional focus as topics change.
 Working Memory: Comprehending spoken language requires the ability to hold
several pieces of information in mind at the same time, possibly including the
words that the speaker just uttered, previous turns in the conversation, other
information about the speaker, the topic, and the context, and the like.
 Speed of Processing: Because the units of spoken language arrive in rapid
succession, comprehension requires the ability to process information quickly.
 Organization: Comprehending spoken language requires that the listener put
together (i.e., organize) the various comments that the speaker makes, together with
the listener’s own comments, background information, and the like. This assumes
considerable organizational skill.
 Reasoning: Comprehending a speaker’s intended meaning is often a reasoning
process. For example, if a speaker says, “I’m really busy today” and later in the
conversation says, “I can’t come over to your house after school today,” the listener
should be able to reason that the speaker is not being rude in rejecting an invitation,
but rather is unable to come over because of his busy schedule.
 Abstract thinking ability: Comprehending abstract language, metaphors, figures
of speech, and the like often requires a reasonable level of abstract thinking ability.
(See Indirect Meanings, above.)
 Perspective Taking: Comprehending the intent underlying a speaker’s message
critically relies on the ability to take that person’s perspective. For example, when a
speaker says, “Don’t worry; it’s not a problem,” he just might intend to
communicate that it is a huge problem! Correctly interpreting this message requires
“mind reading” – getting inside the speaker’s frame of reference and understanding
the issues and the words from that person’s perspective.
 Comprehension Monitoring and Strategic Behavior: Effective comprehension
of spoken language requires routine monitoring of comprehension, detection of
possible comprehension failures, a desire to fix breakdowns, and a strategic ability
to repair the breakdown, for example by saying things like, “I’m not sure I
understand what you mean; could you explain?”

The perception of spoken words would seem to be an extremely difficult task. Speech
is distributed in time, a fleeting signal that has few reliable cues to the boundaries between
segments and words. The paucity of cues leads to what is called the segmentation problem,
or the problem of how listeners hear a sequence of discrete units even though the acoustic
signal itself is continuous. Other features of speech could cause difficulty for listeners as
well. Certain phonemes are omitted in conversational speech, others change their
pronunciations depending on the surrounding sounds (e.g., /n/ may be pronounced as [m]
in lean bacon), and many words have “everyday” pronunciations (e.g., going to frequently
becomes gonna). Despite these potential problems, we usually seem to perceive speech
automatically and with little effort. Whether we do so using procedures that are unique to
speech and that form a specialized speech module (Liberman & Mattingly, 1985), or
whether we do so using more general capabilities, it is clear that humans are well adapted
for the perception of speech.
Listeners attempt to map the acoustic signal onto a representation in the mental
lexicon beginning almost as the signal starts to arrive. The cohort model, first proposed by
Marslen-Wilson and Welsh (1978), illustrates how this may occur. According to this
theory, the first few phonemes of a spoken word activate a set or cohort of word candidates that are consistent with that input. These candidates compete with one another for activation. As more acoustic input is analyzed, candidates that are no longer consistent with the input drop out of the set. This process continues until only one word candidate matches the input; the best fitting word may be chosen if no single candidate is a clear winner. Supporting this view, listeners sometimes glance first at a picture of a candy when instructed to “pick up the candle” (Allopenna, Magnuson, & Tanenhaus, 1998). This result suggests that a set of words beginning with /kæn/ is briefly activated. Listeners may glance at a picture of a handle, too,
suggesting that the cohort of word candidates also includes words that rhyme with the
target. Indeed, later versions of the cohort theory (Marslen-Wilson, 1987; 1990) have
relaxed the insistence on perfectly matching input from the very first phoneme of a word.
Other models (McClelland & Elman, 1986; Norris, 1994) also advocate continuous
mapping between spoken input and lexical representations, with the initial portion of the
spoken word exerting a strong but not exclusive influence on the set of candidates. The
cohort model and the model of McClelland and Elman (1986) are examples of interactive
models, those in which higher processing levels have a direct, “top-down” influence on
lower levels. In particular, lexical knowledge can affect the perception of phonemes. A
number of researchers have found evidence for interactivity in the form of lexical effects
on the perception of sublexical units. Wurm and Samuel (1997), for example, reported that
listeners’ knowledge of words can lead to the inhibition of certain phonemes. Samuel
(1997) found additional evidence of interactivity by studying the phenomenon of phonemic
restoration. This refers to the fact that listeners continue to “hear” phonemes that have
been removed from the speech signal and replaced by noise. Samuel discovered that the
restored phonemes produced by lexical activation lead to reliable shifts in how listeners
labeled ambiguous phonemes.
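The narrowing of the cohort can be pictured with a small sketch. The toy lexicon, the use of letters in place of phonemes, and the strict prefix matching are simplifying assumptions; as noted above, later versions of the theory relax the requirement of a perfect match from the first phoneme onward.

```python
# A minimal sketch of the cohort idea: as the input arrives, the candidate set
# is narrowed to the words still consistent with what has been heard so far.
# The toy lexicon, the use of letters in place of phonemes, and the strict
# prefix matching are simplifying assumptions.

LEXICON = ["candle", "candy", "canned", "cattle", "handle"]

def cohort(input_so_far, lexicon=LEXICON):
    """Return the words still consistent with the input heard so far."""
    return [w for w in lexicon if w.startswith(input_so_far)]

for prefix in ["c", "can", "cand", "candl"]:
    print(prefix, "->", cohort(prefix))
# c     -> ['candle', 'candy', 'canned', 'cattle']
# can   -> ['candle', 'candy', 'canned']
# cand  -> ['candle', 'candy']     (the "pick up the candle" vs. candy competition)
# candl -> ['candle']              (recognition point: a single candidate remains)
```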
Modular models, which do not allow top-down perceptual effects, have had varying
success in accounting for some of the findings just described. The race model of Cutler and
Norris (1979; see also Norris, McQueen, & Cutler, 2000) is one example of such a model.
The model has two routes that race each other -- a pre-lexical route, which computes
phonological information from the acoustic signal, and a lexical route, in which the
phonological information associated with a word becomes available when the word itself is
accessed. When word-level information appears to affect a lower-level process, it is
assumed that the lexical route won the race. Importantly, though, knowledge about words
never influences perception at the lower (phonemic) level. There is currently much
discussion about whether all of the experimental findings suggesting top-down effects can
be explained in these terms or whether interactivity is necessary (see Norris et al., 2000,
and the associated commentary).
Although it is a matter of debate whether higher-level linguistic knowledge affects
the initial stages of speech perception, it is clear that our knowledge of language and its
patterns facilitates perception in some ways. For example, listeners use phonotactic
information such as the fact that initial /tl/ is illegal in English to help identify phonemes
and word boundaries (Halle, Segui, Frauenfelder, & Meunier, 1998). As another example,
listeners use their knowledge that English words are often stressed on the first syllable to
help parse the speech signal into words (Norris, McQueen, & Cutler, 1995). These types of
knowledge help us solve the segmentation problem in a language that we know, even
though we perceive an unknown language as an undifferentiated string.
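A stress-based segmentation heuristic of the kind just described can be sketched as follows; the syllabified input and the stress marks are assumptions made purely for illustration.

```python
# A minimal sketch of a stress-based segmentation heuristic: posit a word
# boundary before each strong (stressed) syllable. The syllabified input and
# the stress marks are assumptions for illustration.

def segment(syllables, stressed):
    """Group a stream of syllables into word-sized chunks, starting a new
    chunk at every stressed syllable."""
    words, current = [], []
    for syllable, strong in zip(syllables, stressed):
        if strong and current:
            words.append(current)
            current = []
        current.append(syllable)
    words.append(current)
    return words

print(segment(["lis", "ten", "ers", "seg", "ment", "speech"],
              [True, False, False, True, False, True]))
# [['lis', 'ten', 'ers'], ['seg', 'ment'], ['speech']]
```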

5.2. Printed Word Recognition


Speech is as old as our species and is found in all human civilizations; reading and
writing are newer and less widespread. These facts lead us to expect that readers would use
the visual representations that are provided by print to recover the phonological and
linguistic structure of the message. Supporting this view, readers often access phonology
even when they are reading silently and even when reliance on phonology would tend to
hurt their performance.
In one study, people were asked to quickly decide whether a word belonged to a
specified category (Van Orden, 1987). They were more likely to misclassify a homophone
like meet as a food than to misclassify a control item like melt as a food. In other studies,
readers were asked to quickly decide whether a printed sentence makes sense. Readers
with normal hearing were found to have more trouble with sentences such as He doesn’t
like to eat meet than with sentences such as He doesn’t like to eat melt. Those who were
born deaf, in contrast, did not show a difference between the two sentence types (Treiman
& Hirsh-Pasek, 1983).
The English writing system, in addition to representing the sound segments of a
word, contains clues to the word’s stress pattern and morphological structure. Consistent
with the view that print serves as a map of linguistic structure, readers take advantage of
these clues as well. For example, skilled readers appear to have learned that a word that has
more letters than strictly necessary in its second syllable (e.g., -ette rather than -et) is likely
to be an exception to the generalization that English words are typically stressed on the
first syllable. In a lexical decision task, where participants must quickly decide whether a
letter string is a real word, they perform better with words such as cassette, whose stressed
second syllable is spelled with -ette, than with words such as palette, which has final -ette
but first-syllable stress (Kelly, Morris, & Verrekia, 1998). Skilled readers also use the
clues to morphological structure that are embedded in English orthography. For example,
they know that the prefix re- can stand before free morphemes such as print and do,
yielding the two-morpheme words reprint and redo. Encountering vive in a lexical decision
task, participants may wrongly judge it to be a word because of their familiarity with
revive (Taft & Forster, 1975).
Although there is good evidence that phonology and other aspects of linguistic
structure are retrieved in reading (see Frost, 1998 for a review), there are a number of
questions about how linguistic structure is derived from print. One idea, which is embodied
in dual-route theories such as that of Coltheart, Rastle, Perry, Langdon, and Ziegler (2001),
is that two different processes are available for converting orthographic representations to
phonological representations. A lexical route is used to look up the phonological forms of
known words in the mental lexicon; this procedure yields correct pronunciations for
exception words such as love. A nonlexical route accounts for the productivity of reading:
It generates pronunciations for novel letter strings (e.g., tove) as well as for regular words
(e.g., stove) on the basis of smaller units. This latter route gives incorrect pronunciations
for exception words, so that these words may be pronounced slowly or erroneously (e.g.,
love said as /lov/) in speeded word naming tasks. In contrast, connectionist theories claim
that a single set of connections from orthography to phonology can account for
performance on both regular words and exception words.
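The division of labour between the two routes can be pictured with a short sketch. The tiny lexicon and the single -ove rule are illustrative assumptions, not the actual rule set of Coltheart et al.'s model.

```python
# A minimal sketch of the dual-route idea. The tiny lexicon and the single
# spelling-to-sound rule are illustrative assumptions.

LEXICON = {"love": "/luhv/", "stove": "/stouv/"}   # lexical route: stored forms

def nonlexical_route(letters):
    """Assemble a pronunciation from a spelling-to-sound rule. This route
    regularizes exception words (it would produce /louv/ for "love")."""
    if letters.endswith("ove"):                    # toy rule: -ove -> "ouv"
        return "/" + letters[:-3] + "ouv/"
    return None

def read_aloud(word):
    # In this sketch the lexical route simply wins whenever the word is known;
    # the nonlexical route handles novel strings.
    return LEXICON.get(word) or nonlexical_route(word)

print(read_aloud("stove"))  # /stouv/ : both routes agree
print(read_aloud("tove"))   # /touv/  : novel word, handled only by the nonlexical route
print(read_aloud("love"))   # /luhv/  : stored form overrides the regularized /louv/
```

In speeded naming, it is the conflict between these two outputs for an exception word that is taken to produce the slow or regularized responses mentioned above.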
Because spoken words are spread out in time, as discussed earlier, spoken word
recognition is generally considered a sequential process. With many printed words, though,
the eye takes in all of the letters during a single fixation (Rayner & Pollatsek, 1989). The
connectionist models of reading cited earlier maintain that all phonemes of a word are
activated in parallel. Current dual-route theories, in contrast, claim that the assembly
process operates in a serial fashion such that the phonological forms of the leftmost
elements are delivered before those for the succeeding elements (Coltheart et al., 2001).
Still another view (Berent & Perfetti, 1995) is that consonants, whatever their position, are
translated into phonological form before vowels. These issues are the subject of current
research and debate (see Lee, Rayner, & Pollatsek, 2001; Lukatela & Turvey, 2000; Rastle
& Coltheart, 1999; Zorzi, 2000). Progress in determining how linguistic representations are
derived from print will be made as researchers move beyond the short, monosyllabic words
that have been the focus of much current research and modeling. In addition, experimental
techniques that involve the brief presentation of stimuli and the tracking of eye movements
are contributing useful information. These methods supplement the naming tasks and
lexical decision tasks that are used in much of the research on single word reading.
Although many questions remain to be answered, it is clear that the visual
representations provided by print rapidly make contact with the representations stored in
the mental lexicon. Once this contact has been made, it matters little whether the initial
input was by eye or by ear. The principles and processing procedures are much the same
(Treiman et al., 2003: 527-548).

6. Language Production

Language production refers to the processes involved in creating and expressing meaning through language. According to Levelt (1989), language production comprises four successive stages: (1) conceptualization, (2) formulation, (3) articulation, and (4) self-monitoring (Scovel, 1998: 27).

 First, we must conceptualize what we wish to communicate;


 Second, we formulate this thought into a linguistic plan;
 Third, we execute the plan through the muscles in the speech system; Articulation
of speech sounds is the third and a very important stage of production. Once we
have organized our thoughts into a linguistic plan, this information must be sent
from the brain to the muscles in the speech system so that they can then execute the
required movements and produce the desired sounds. We depend on vocal organs
to produce speech sounds so as to express ourselves. In the production of speech
sounds, the lungs, larynx and lips may work at the same time and thus form co-
articulation. The process of speech production is so complicated that it is still a
mystery in psycholinguistics though psycholinguists have done some research with
high-tech instruments and have known much about speech articulation.
 Finally, we monitor our speech, assessing whether it is what we intended to say and whether we said it the way we intended to. Self-monitoring is the last stage of speech production. To err is human: everyone makes mistakes in conversation or in writing, so speakers repeatedly self-correct as they converse. (The four stages are sketched as a simple pipeline after this list.)
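A minimal sketch of the four stages as a pipeline of functions follows; the message representation, the "linguistic plan", and the monitoring check are toy placeholders, assumed purely for illustration.

```python
# A minimal sketch of the four successive stages of production as a pipeline.
# The message, the "linguistic plan", and the monitoring check are toy
# placeholders for illustration only.

def conceptualize(intention):
    return {"message": intention}                       # decide what to communicate

def formulate(message):
    return {"plan": "words + syntax for: " + message["message"]}   # build a linguistic plan

def articulate(plan):
    return "[spoken] " + plan["plan"]                   # send the plan to the speech muscles

def monitor(utterance, intention):
    said_what_we_meant = intention in utterance         # crude self-monitoring check
    return utterance if said_what_we_meant else "self-correction needed"

intention = "offer the listener some coffee"
utterance = articulate(formulate(conceptualize(intention)))
print(monitor(utterance, intention))
```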

- Speech errors

 Speech errors are made by speakers unintentionally.


 They are very common and occur in everyday speaking.
 In formulating speech, we are often influenced by the sound system of the language: for example, big and fat produced as pig fat, or fill the pool produced as fool the pill.

 The scientific study of speech errors, commonly called slips of the tongue or
tongue-slips, can provide useful clues to the processes of language production: they
can tell us where a speaker stops to think.

Examples of the eight types of errors

 (1) Shift: "That's so she'll be ready in case she decide to hits it." (decides to hit it)
 (2) Exchange: "Fancy getting your model resnosed." (getting your nose remodeled)
 (3) Anticipation: "Bake my bike." (take my bike)
 (4) Perseveration: "He pulled a pantrum." (tantrum)
 (5) Addition: "I didn't explain this clarefully enough." (carefully enough)
 (6) Deletion: "I'll just get up and mutter intelligibly." (unintelligibly)
 (7) Substitution: "At low speeds it's too light." (heavy)
 (8) Blend: "That child is looking to be spaddled." (spanked/paddled)

Explanations of errors

 (1) In shifts, one speech segment disappears from its appropriate place and appears
somewhere else.
 (2) Exchanges are, in fact, double shifts, in which two linguistic units exchange
places.
 (3) Anticipations occur when a later segment takes the place of an earlier one. They
are different from shifts in that the segment that intrudes on another also remains in
its correct place and thus is used twice.
 (4) Perseverations appear when an earlier segment replaces a later item.
 (5) Additions add linguistic material.
 (6) Deletions leave something out.
 (7) Substitutions occur when one segment is replaced by an intruder. These are
different from the previously described slips in that the source of the intrusion may
not be in the sentence.
 (8) Blends apparently occur when more than one word is being considered and the
two intended items “fuse” or blend into a single item.
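
Speech-error research typically works from collections of such slips. Purely as an
illustration of how the eight-way classification above might be recorded, here is a small
Python sketch; the field names and the tallying step are invented for this handout and are
not taken from any actual corpus.

# Hypothetical miniature speech-error record using the eight categories above.
from collections import Counter

errors = [
    {"type": "shift",         "error": "she decide to hits it",       "target": "she decides to hit it"},
    {"type": "exchange",      "error": "getting your model resnosed", "target": "getting your nose remodeled"},
    {"type": "anticipation",  "error": "bake my bike",                "target": "take my bike"},
    {"type": "perseveration", "error": "he pulled a pantrum",         "target": "he pulled a tantrum"},
    {"type": "addition",      "error": "explain this clarefully",     "target": "explain this carefully"},
    {"type": "deletion",      "error": "mutter intelligibly",         "target": "mutter unintelligibly"},
    {"type": "substitution",  "error": "it's too light",              "target": "it's too heavy"},
    {"type": "blend",         "error": "to be spaddled",              "target": "to be spanked/paddled"},
]

# A corpus study would typically begin by tallying how often each type occurs.
print(Counter(e["type"] for e in errors))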

- The basis for such errors

 A long-standing hypothesis concerning the basis for such errors is Freud's view
that errors occur because we have more than a single plan for production, and that one
such plan competes with and dominates the other.

- Generation of sentences in spoken language production
We will now consider how speakers generate longer utterances, such as
descriptions of scenes or events. The first step is again conceptual preparation – deciding
what to say. Evidently, conceptual preparation is more complex for longer than for shorter
utterances. To make a complicated theoretical argument or to describe a series of events,
the speaker needs a global plan (see Levelt, 1989). Each part of the global plan must be
further elaborated, perhaps via intermediate stages, until a representational level is reached
that consists of lexical concepts.
This representation, the message, forms the input to linguistic planning. Utterances
comprising several sentences are rarely laid out entirely before linguistic planning begins.
Instead, all current theories of sentence generation assume that speakers prepare utterances
incrementally. That is, they initiate linguistic planning as soon as they have selected the
first few lexical concepts and prepare the rest later, either while they are speaking or
between parts of the utterance. Speakers can probably choose conceptual planning units of
various sizes, but the typical unit for many situations appears to correspond roughly to a
clause (Bock & Cutting, 1992).
When speakers plan sentences, they retrieve words. However, because sentences
are not simply sets of words but have syntactic structure, speakers must apply syntactic
knowledge to generate sentences. Following Garrett (1975), models of sentence production
generally assume that two distinct sets of processes are involved in generating syntactic
structure (Bock & Levelt, 1994; Levelt, 1989). The first set, often called functional
planning processes, assigns grammatical functions, such as subject, verb, or direct object,
to lemmas. These processes rely primarily on information from the message level and the
syntactic properties of the retrieved lemmas. The second set of processes, often called
positional encoding, uses the retrieved lemmas and the functions they have been assigned
to generate syntactic structures that capture the dependencies among constituents and their
order.
As we have noted, grammatical encoding begins with the assignment of lemmas to
grammatical functions. This mapping process is largely determined by conceptual
information. In studies of functional encoding, speakers are often asked to describe
pictures of scenes or events or to recall sentences from memory; the recall task involves
the reconstruction of the surface structure of the utterance on the basis of stored conceptual
information. Many such studies have focused on the question of which part of the
conceptual structure will be assigned the role of grammatical subject (e.g., McDonald,
Bock, & Kelly, 1993).
When the positional representation for an utterance fragment has been generated, the
corresponding phonological form can be built. For each word, phonological segments and,
where necessary, information about the word’s stress pattern are retrieved from the mental
lexicon. But the phonological form of a phrase is not just a concatenation of the forms of
words as pronounced in isolation. Instead, the stored word forms are combined into new
prosodic units (Nespor & Vogel, 1986). We have already discussed the syllable, a small
prosodic unit. The next larger unit is the phonological word. Phonological words often
correspond to lexical words. However, a morphologically complex word may comprise
several phonological words, and unstressed items such as conjunctions and pronouns
combine with preceding or following content words into single phonological words.
The next level in the prosodic hierarchy is the phonological phrase. Phonological
phrases often correspond to syntactic phrases, but long syntactic phrases may be divided
into several phonological phrases. Like the phonological word, the phonological phrase is a
domain of application for certain phonological rules. These include the rule of English that
changes the stress patterns of words to generate an alternating pattern and the rule that
lengthens the final syllable of the phrase. Finally, phonological phrases combine into
intonational phrases.
There are phonological rules governing how words are pronounced in different
environments. For these rules to apply, the individual segments must be available to the
processor. In connected speech, the decomposition of morphemes and the re-assembly into
phonological forms is not a vacuous process but yields phonological forms that differ from
those stored in the mental lexicon.
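
As a rough illustration of how unstressed function words can be grouped with a
neighbouring content word into a single phonological word, consider the sketch below. The
list of function words and the simple "attach to the following content word" rule are
assumptions made for this handout; real prosodification is considerably more subtle.

FUNCTION_WORDS = {"the", "a", "an", "and", "to", "of"}

def phonological_words(words):
    # group unstressed function words with the following content word
    groups, pending = [], []
    for w in words:
        if w.lower() in FUNCTION_WORDS:
            pending.append(w)                 # hold the unstressed item
        else:
            groups.append(" ".join(pending + [w]))
            pending = []
    if pending:                               # any leftovers lean on the last group
        if groups:
            groups[-1] += " " + " ".join(pending)
        else:
            groups.append(" ".join(pending))
    return groups

print(phonological_words("the umpire helped the child to third base".split()))
# -> ['the umpire', 'helped', 'the child', 'to third', 'base']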

-Written language production
Many of the steps in the production of written language are similar to those in the
production of spoken language. A major difference is that, once a lemma and its
morphological representation have been accessed, it is the orthographic rather than the
phonological form that must be retrieved and produced. Phonology plays an important role
in this process, just as it does in the process of deriving meaning from print in reading.
Support for this view comes from a study in which speakers of French were shown
drawings of such objects as a seal (phoque) and a pipe (pipe) and were asked to write their
names as quickly as they could (Bonin, Peereman, & Fayol, in press). The time needed to
initiate writing was longer for items such as phoque, for which the initial phoneme has an
unusual spelling (/f/ is usually spelled as f in French), than for items such as pipe, for
which the initial phoneme is spelled in the typical manner. Thus, even when a to-be-spelled
word is not presented orally, its phonological form appears to be involved in the selection
of the spelling.
A number of the same issues that were raised earlier about the derivation of
phonology from orthography in reading arise with respect to the derivation of orthography
from phonology in spelling. For instance, issues about grain size apply to spelling as well
as to reading. Kessler and Treiman (2001) have shown that the spelling of an English
segment becomes more predictable when neighboring segments are taken into account.
The largest effects involve the vowel nucleus and the coda, suggesting that rimes have a
special role in English spelling.
Feedback between production and comprehension is another issue that arises in
spelling as well as in reading: we may read a spelling back to check whether it is correct.
Writing differs from speaking in that writers often have more time available for
conceptual preparation and planning. They may have more need to do so as well, as the
intended reader of a written text is often distant in time and space from the writer.
Monitoring and revising, too, typically play a greater role in writing than in speaking. For
these reasons, much of the research on writing (see Kellogg, 1994; Levy & Ransdell,
1996) has concentrated on the preparation and revision processes rather than on the
sentence generation and lexical access processes that have been the focus of spoken
language production research (Treiman et al., 2003: 527-548).
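
The grain-size point about spelling can be illustrated with a toy entropy calculation: a
vowel spelling that looks unpredictable when counted in isolation becomes much more
predictable once the coda is known. The counts below are invented purely to show the
arithmetic; they are not real English statistics and are not taken from Kessler and Treiman
(2001).

import math
from collections import Counter

def entropy(counts):
    # Shannon entropy (in bits) of a distribution given as raw counts
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values()) + 0.0

# Hypothetical spelling counts for one vowel, split by coda consonant.
by_coda = {
    "l":  Counter({"ee": 9, "ea": 1}),   # mostly "ee" before this coda (toy numbers)
    "ch": Counter({"ea": 8}),            # always "ea" before this one (toy numbers)
}

overall = sum(by_coda.values(), Counter())
print("entropy ignoring the coda:", round(entropy(overall), 3), "bits")
for coda, dist in by_coda.items():
    print(f"entropy given coda /{coda}/:", round(entropy(dist), 3), "bits")

In this toy example the conditional distributions have lower entropy than the overall one,
which is the sense in which knowing the coda makes the vowel spelling more predictable.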

7. Language and Memory

Memory is the process by which information is encoded, stored, and retrieved. Encoding
converts information that reaches our senses from the outside world, in the form of
chemical and physical stimuli, into a representation that can be remembered; in this first
stage the information must be transformed so that it can enter the memory system. Storage,
the second stage, entails maintaining the information over periods of time. The third
process is retrieval of the information we have stored: we must locate it and return it to
consciousness. Some retrieval attempts are effortless because of the type of information
involved.

From an information-processing perspective, there are three main stages in the formation
and retrieval of memory (a minimal sketch follows the list):

 Encoding or registration: receiving, processing, and combining the received
information
 Storage: creation of a permanent record of the encoded information
 Retrieval, recall, or recollection: calling back the stored information in response to
some cue, for use in a process or activity
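
Purely as a didactic illustration of this three-stage description, the sketch below wraps
encoding, storage, and retrieval in a tiny Python class. The class, its method names, and
the cue-based lookup are inventions for this handout, not a psychological model of human
memory.

class MemoryStore:
    def __init__(self):
        self._records = {}                 # storage: a lasting record

    def encode(self, stimulus):
        # encoding: transform raw input into a storable representation
        return stimulus.strip().lower()

    def store(self, cue, stimulus):
        # storage: keep the encoded trace over time, indexed by a cue
        self._records[cue] = self.encode(stimulus)

    def retrieve(self, cue):
        # retrieval: bring the stored trace back in response to a cue
        return self._records.get(cue, "nothing retrieved for this cue")

memory = MemoryStore()
memory.store("capital of France", "  PARIS  ")
print(memory.retrieve("capital of France"))   # -> "paris"
print(memory.retrieve("capital of Peru"))     # -> "nothing retrieved for this cue"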

8. Language and Brain

The relationship between language and the brain is a core area of psycholinguistic
inquiry: it underlies all of the domains discussed so far, since language is acquired,
processed, and produced through the brain's activity.
Many people assume the physical basis of language lies in the lips, the tongue, or
the ear. But deaf and mute people can also possess language fully. People who have no
capacity to use their vocal cords may still be able to comprehend language and use its
written forms. And human sign language, which is based on visible gesture rather than the
creation of sound waves, is an infinitely creative system just like spoken forms of
language. But the basis of sign language is not in the hand, just as spoken language is not
based in the lips or tongue. There are many examples of aphasics who lose both the ability
to write and the ability to express themselves using sign language, yet never lose manual
dexterity in other tasks, such as sipping with a straw or tying their shoes.
Language is brain stuff--not tongue, lip, ear, or hand stuff. The language organ is
the mind. More specifically, the language faculty seems to be located in certain areas of the
left hemispheric cortex in most healthy adults. A special branch of linguistics, called
neurolinguistics, studies the physical structure of the brain as it relates to language
production and comprehension.

- Structure of the human brain.
The human brain displays a number of physiological and structural characteristics
that must be understood before beginning a discussion of the brain as language organ.
First, the cerebrum, consisting of a cortex (the outer layer) and a subcortex, is divided
into two hemispheres joined by a thick bundle of nerve fibers called the corpus callosum.
A few points must be made about the functioning of these two cerebral hemispheres.

1) In all humans, the right hemisphere controls the left side of the body and the left
hemisphere controls the right side of the body. This arrangement, called contralateral
neural control, is not limited to humans but is present in all vertebrates: fish, frogs,
lizards, birds, and mammals. In invertebrates such as worms, by contrast, the right
hemisphere controls the right side and the left hemisphere controls the left side. The
contralateral arrangement of neural control thus might be due to an ancient evolutionary
change which occurred in the earliest vertebrates over half a billion years ago. The earliest
vertebrates must have undergone a 180° turn of the brain stem on the spinal cord so that the
pathways from brain to body side became crossed. The probability that such a primordial
twist did occur is also borne out by the fact that invertebrates have their main nerve
pathways on their bellies and their circulatory organs on their backs, while all vertebrates
have their heart in front and their spinal cord in back, just as one would expect if the 180°
twist of the brain stem vis-à-vis the body did take place.

2). Another crucial feature of brain physiology is that each hemisphere has somewhat
unique functions (unlike other paired organs such as the lungs, kidneys, breasts or testicles
which have identical functions). In other words, hemisphere function is asymmetrical.
This is most strikingly the case in humans, where the right hemisphere in addition to
controlling the left side of the body, also controls spatial acuity, while the left hemisphere
in addition to controlling the right side of the body, controls abstract reasoning and
physical tasks which require a step-by-step progression. It is important to note that in
adults, the left hemisphere also controls language; even in most left-handed patients,
lateralization of language skills in the left hemisphere is completed by the age of puberty.
Now, why should specialized human skills such as language and abstract reasoning have
developed in the left hemisphere instead of the right? Why didn't these skills develop
equally in both hemispheres? The answer seems to combine the principle of functional
economy with increased specialization. In nature, specialization for particular tasks often
leads to physical asymmetry of the body (witness the lobster's claws), in which limbs or
other parts of the body differentiate to perform a larger variety of tasks with greater
sophistication (the same might be said of human society, with the rise of different trades
and the division of labor).
Because of this specialization, one hemisphere (in most individuals, for whatever reason,
the right one) came to control matters relating to three-dimensional spatial acuity, the
awareness of position in space in all directions simultaneously. Thus, in modern humans,
artistic ability tends to be centered in various areas of the right hemisphere.
The left hemisphere, on the other hand, came to control patterns that progress step by
step in a single dimension, such as our sense of time progression, or the logical steps
required in performing feats of manual dexterity such as fashioning a stone axe. This
connects with right-handedness. Most humans are born with a lopsided preference for
performing skills of manual dexterity with the right hand, the hand controlled by the left
hemisphere. The left hand holds an object in space while the right hand manipulates that
object to perform tasks which require a step-by-step progression.
Obviously, this is a better arrangement than if both hands were equally clumsy at
performing complex, multi-step tasks, or if both sides of the brain were equally mediocre
at thinking abstractly or at processing information about one's three-dimensional
surroundings. So, human hemispheric asymmetry seems to have developed to serve very
practical purposes.
How do we know that the left hemisphere controls language in most adults? There is a
great deal of physical evidence for the left hemisphere as the language center in the
majority of healthy adults.
1) Tests have demonstrated increased neural activity in parts of the left hemisphere
when subjects are using language (PET scans, or Positron Emission Tomography, in which
the patient is injected with a mildly radioactive substance that is absorbed more quickly by
the more active areas of the brain). The same type of test has demonstrated that artistic
endeavor normally draws more heavily on the neurons of the right hemispheric cortex.
2) In instances when the corpus callosum is severed by deliberate surgery to ease
epileptic seizures, the subject cannot verbalize about objects visible only in the left field of
vision or held in the left hand. (Remember that in some individuals there seems to be
language only in the right brain; in a few individuals, there seems to be a separate language
center in each hemisphere.)
3) Another clue comes from studies of brain damage. A person with a stroke in the
right hemisphere loses control over parts of the left side of the body and sometimes also
suffers a diminution of artistic abilities, but language skills are not impaired: even if the
left side of the mouth is crippled, the brain can handle language as before. A person with a
stroke in the left hemisphere loses control of the right side of the body; in addition, 70% of
adult patients with damage to the left hemisphere experience at least some language loss,
and this loss is not due only to the lack of control of the muscles on the right side of the
mouth, since communication of any sort is disrupted in a variety of ways that are not
connected with the voluntary muscles of the vocal apparatus. The cognitive loss of
language is called aphasia. Only 1% of adults with damage to the right hemisphere
experience any permanent language loss (www.pandora.cii.wwu.edu).

9. Language and Speech Disorders
Language and speech disorders are also areas of psycholinguistic study. They show
that language development and linguistic activity do not always proceed smoothly; they
can be beset by troubles and disorders.
Language disorders are usually considered distinct from speech disorders, even
though the terms are often used synonymously. Speech disorders refer to problems in
producing the sounds of speech or with the quality of voice, whereas language disorders
are usually an impairment in understanding words or in being able to use them, and need
not involve speech production at all.

a. Language Disorders or Impairments
Language disorders or language impairments are disorders that involve the
processing of linguistic information. Problems that may be experienced can involve
grammar (syntax and/or morphology), semantics (meaning), or other aspects of language.
These problems may be receptive (involving impaired language comprehension),
expressive (involving language production), or a combination of both. Examples include
specific language impairment and aphasia, among others. Language disorders can affect
both spoken and written language, and can also affect sign language; typically, all forms of
language will be impaired.
Some kinds of language disorders are:

 Broca's aphasia--emissive aphasia--agrammatic aphasia: difficulty in encoding, in


building up a context, difficulty in using the grammatical matrix of phrase
structure, difficulty in using the elements and patterns of language without concrete
meaning. Broca's area apparently houses the elements of language that have
function but no specific meaning--the syntactic rules and phonological patterns, as
well as the function words--that is, the grammatical glue which holds the context
together.
 Wernicke's aphasia--receptive aphasia--jargon aphasia: difficulty in decoding, in
breaking down a context into smaller units, as well as in selecting and using the
elements of language with concrete meaning. Wernicke's area apparently houses
the elements of language that have specific meaning--the content words, the
lexemes--that is, the storehouse of prefabricated, meaningful elements which a
speaker selects when filling in a context.

 Auditory Processing Disorder (APD), also known as Central Auditory Processing
Disorder (CAPD), is an umbrella term for a variety of disorders that affect the way
the brain processes auditory information. Individuals with APD usually have
normal structure and function of the outer, middle, and inner ear (peripheral
hearing). However, they cannot process the information they hear in the same way
as others do, which leads to difficulties in recognizing and interpreting sounds,
especially the sounds composing speech. It is thought that these difficulties arise
from dysfunction in the central nervous system, i.e., the brain (Dawes & Bishop,
2009: 440).

 Dyslexia is a very broad term defining a learning disability that impairs a person's
fluency or comprehension accuracy in being able to read, and which can manifest
itself as a difficulty with phonological awareness, phonological decoding,
orthographic coding, auditory short-term memory, or rapid naming. Dyslexia is
separate and distinct from reading difficulties resulting from other causes, such as a
non-neurological deficiency with vision or hearing, or from poor or inadequate
reading instruction. It is believed that dyslexia can affect between 5 and 10 percent
of a given population although there have been no studies to indicate an accurate
percentage. There are three proposed cognitive subtypes of dyslexia: auditory,
visual, and attentional. Reading disability, or dyslexia, is the most common
learning disability, although in the research literature it is considered a receptive,
language-based learning disability. Researchers at MIT found that people with
dyslexia exhibited impaired voice-recognition abilities. Accomplished adult
dyslexics may be able to read with good comprehension, but they tend to read more
slowly than non-dyslexics and may perform more poorly at nonsense-word reading
(a measure of phonological awareness) and spelling. Dyslexia is not an intellectual
disability: dyslexia and IQ are not interrelated, because the two develop
independently (Ferrer et al., 2010: 93).

 Pragmatic language impairment (PLI) is an impairment in understanding
pragmatic areas of language. This type of impairment was previously called
semantic-pragmatic disorder (SPD). Pragmatic language impairments are related to
autism and Asperger syndrome, but could also be related to other, non-autistic
disabilities such as ADHD and mental retardation. People with these impairments
have special challenges with the semantic aspect of language (the meaning of what
is being said) and the pragmatics of language (using language appropriately in
social situations). In 1983, Rapin and Allen suggested the term "semantic pragmatic
disorder" to describe the communicative behavior of children who presented traits
such as pathological talkativeness, deficient access to vocabulary and discourse
comprehension, atypical choice of terms and inappropriate conversational skills
(Rapin & Allen, 1983: 155). They referred to a group of children who presented
with mild autistic features and specific semantic pragmatic language problems.
More recently, the term "pragmatic language impairment" (PLI) has been proposed
(Bishop, 2000: 99).

b. Speech Disorders or Impediments

Speech disorders or speech impediments are a type of communication disorder in which
'normal' speech is disrupted. This can mean stuttering, lisps, etc. Someone who is unable to
speak due to a speech disorder is considered mute.
Classifying speech into normal and disordered is more problematic than it first seems.
By a strict classification, only 5% to 10% of the population has a completely normal
manner of speaking (with respect to all parameters) and healthy voice; all others suffer
from one disorder or another. Some of them are:
 Stuttering affects approximately 1% of the adult population
 Dysprosody is the rarest neurological speech disorder. It is characterized by
alterations in intensity, in the timing of utterance segments, and in rhythm, cadence,
and intonation of words. The changes to the duration, the fundamental frequency,
and the intensity of tonic and atonic syllables of the sentences spoken, deprive an
individual's particular speech of its characteristics. The cause of dysprosody is
usually associated with neurological pathologies such as strokes (cerebrovascular
accidents), traumatic brain injury, and brain tumors.
 Muteness is the complete inability to speak.
 Speech sound disorders involve difficulty in producing specific speech sounds
(most often certain consonants, such as /s/ or /r/), and are subdivided into
articulation disorders (also called phonetic disorders) and phonemic disorders.
Articulation disorders are characterized by difficulty learning to produce sounds
physically. Phonemic disorders are characterized by difficulty in learning the sound
distinctions of a language, so that one sound may be used in place of many.
However, it is not uncommon for a single person to have a mixed speech sound
disorder with both phonemic and phonetic components.
 Voice disorders are impairments, often physical, that involve the function of the
larynx or vocal resonance.
 Dysarthria is a weakness or paralysis of the speech muscles caused by damage to the
nerves and/or brain. Dysarthria is often caused by strokes, Parkinson's disease, ALS,
head or neck injuries, surgical accident, or cerebral palsy.
 Apraxia of speech may result from stroke or be developmental, and involves
inconsistent production of speech sounds and rearranging of sounds in a word
("potato" may become "topato" and next "totapo"). Production of words becomes
more difficult with effort, but common phrases may sometimes be spoken
spontaneously without effort. It is now considered unlikely that childhood apraxia
of speech and acquired apraxia of speech are the same thing, though they share
many characteristics (www.wikipedia.org/speechdisorders)

Reflection

After surveying these important areas of psycholinguistics, we can begin to
understand how humans acquire, process, and produce language. Language is so basic to
our existence that life without words is difficult to envision. Because speaking, listening,
reading, and writing are such fundamental aspects of our daily lives, they seem to be
ordinary skills. Executed easily and effortlessly, language use guides us through our day. It
facilitates our relationships with others and helps us to understand world events, the arts,
and the sciences. Psycholinguistics and psycholinguistic research, in this sense, let us
watch these complicated and extended language processes and activities in "slow motion".

References

- Bishop, D. V. M. (2000). Pragmatic language impairment: A correlate of SLI, a distinct
subgroup, or part of the autistic continuum? In D. V. M. Bishop & L. B. Leonard (Eds.),
Speech and language impairments in children: Causes, characteristics, intervention and
outcome (pp. 99-113). Hove, UK: Psychology Press.

- Chomsky, N. (1959). Review of Skinner's Verbal Behavior. Language, 35, 26-58.

- Clark, Eve V. (2003). First Language Acquisition. Cambridge University Press.

- Ferrer, E., Shaywitz, B. A., Holahan, J. M., Marchione, K., & Shaywitz, S. E. (January
2010). Uncoupling of reading and IQ over time: Empirical evidence for a definition of
dyslexia. Psychological Science, 21(1), 93-101.

- Field, John. (2003). Psycholinguistics: A Resource Book for Students. Routledge.

- Fodor, J. D., & Ferreira, F. (Eds.). (1998). Sentence Reanalysis. Dordrecht: Kluwer.

- Gaskell, Gareth (Ed.). (2009). The Oxford Handbook of Psycholinguistics. Oxford
University Press.

- Gleason, B. J., & Ratner, B. N. (1998). Psycholinguistics. Harcourt Brace College.

- O'Grady, William, et al. (2001). Contemporary Linguistics: An Introduction. Bedford/St.
Martin's.

- Rapin, I., & Allen, D. (1983). Developmental language disorders: Nosologic
considerations. In U. Kirk (Ed.), Neuropsychology of language, reading, and spelling
(pp. 155-184). Academic Press.

Website Materials

- http://www.ling.upenn.edu/courses/Fall_2003 (Retrieved March 30, 2012)

- http://pandora.cii.wwu.edu/vajda/ling201/test4materials/language_and_the_brain.htm
(Retrieved April 11, 2012)

- http://www.wikipedia.org (Retrieved April 8, 2012)
