Abstract
Imagine a child who has never seen or heard language. Would such a child be able to invent a
language? Despite what one might guess, the answer is "yes". This chapter describes children who
are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the
children have not been exposed to sign language, either by their hearing parents or their oral
schools. Nevertheless, the children use their hands to communicate – they gesture – and those
gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The
properties of language that we find in these gestures are just those properties that do not need to be
handed down from generation to generation, but can be reinvented by a child de novo. They are
the resilient properties of language, properties that all children, deaf or hearing, come to language-
learning ready to develop.
Ninety percent of deaf children are not born to deaf parents who could provide early access to sign
language. Rather, they are born to hearing parents who, quite naturally, expose their children
to speech. Unfortunately, it is extremely uncommon for deaf children with severe to
profound hearing losses to acquire spoken language without intensive and specialized
instruction. Even with instruction, their acquisition of speech is markedly delayed (Conrad
1979; Mayberry 1992).
The ten children my colleagues and I studied were severely to profoundly deaf (Goldin-
Meadow 2003a). Their hearing parents had decided to educate them in oral schools where
sign systems are neither taught nor encouraged. At the time of our observations, the children
ranged in age from 1;2 to 4;10 (years;months) and had made little progress in oral language,
occasionally producing single words but never combining those words into sentences. In
addition, they had not been exposed to a conventional sign system of any sort (e.g.,
American Sign Language or a manual code of English). The children thus knew neither sign
nor speech.
Under such inopportune circumstances, these deaf children might be expected to fail to
communicate, or perhaps to communicate only in non-symbolic ways. The impetus for
symbolic communication might require a language model, which all of these children
lacked. However, this turns out not to be the case. Many studies have shown that deaf
children will spontaneously use gestures – called “homesigns” – to communicate if they are
not exposed to a conventional sign language (Fant 1972; Lenneberg 1964; Moores 1974;
Tervoort 1961). Children who use gesture in this way are clearly communicating. But are
they communicating in a language-like way? The focus of my work has been to address this
question. I do so by identifying linguistic constructions that the deaf children use in their
gesture systems. These properties of language, which the children are able to fashion
without benefit of linguistic input, are what I call the “resilient” properties of language
(Goldin-Meadow 1982; 2003a).
Words
The deaf children’s gesture words have many properties that are found in the words of all
natural languages. The gestures are stable in form, although they needn’t be. It would be
easy for the children to make up a new gesture to fit every new situation (and, indeed, that
appears to be what hearing speakers do when they gesture along with their speech, cf.
McNeill 1992). But that’s not what the deaf children do. They develop a stable store of
forms that they use in a range of situations – they develop a lexicon, an essential component
of all languages (Goldin-Meadow, Butcher, Mylander and Dodge 1994).
Moreover, the gestures the children develop are composed of parts that form paradigms, or
systems of contrasts. When the children invent a gesture form, they do so with two goals in
mind – the form must not only capture the meaning they intend (a gesture-world relation),
but it must also contrast in a systematic way with other forms in their repertoire (a gesture-
gesture relation). In addition, the parts that form these paradigms are categorical. For
example, one child used a Fist handshape to represent grasping a balloon string, a drumstick,
and handlebars – grasping actions requiring considerable variety in diameter in the real
world. The child did not distinguish objects of varying diameters within the Fist category,
but did use his handshapes to distinguish objects with small diameters as a set from objects
with large diameters (e.g., a cup, a guitar neck, the length of a straw), which were
represented by a CLarge hand. The manual modality can easily support a system of analog
representation, with hands and motions reflecting precisely the positions and trajectories
used to act on objects in the real world. But the children don’t choose this route. They
develop categories of meanings that, although essentially iconic, have hints of arbitrariness
about them (the children don’t, for example, all have the same form-meaning pairings for
handshapes, Goldin-Meadow, Mylander and Butcher 1995; Goldin-Meadow, Mylander and
Franklin 2007).
Finally, the gestures the children develop are differentiated by grammatical function. Some
serve as nouns, some as verbs, some as adjectives. As in natural languages, when the same
gesture is used for more than one grammatical function, that gesture is marked
(morphologically and syntactically) according to the function it plays in the particular
sentence (Goldin-Meadow et al 1994). For example, if a child were to use a twisting gesture
in a verb role, that gesture would likely be produced near the jar to be twisted open (i.e., it
would be inflected), it would not be abbreviated, and it would be produced after a pointing
gesture at the jar. In contrast, if the child were to use the twisting gesture in a noun role, the
gesture would likely be produced in neutral position near the chest (i.e., it would not be
inflected), it would be abbreviated (produced with one twist rather than several), and it
would occur before the pointing gesture at the jar.
Sentences
The deaf children’s gesture sentences have a variety of sentential properties found in all
natural languages. Underlying each sentence is a predicate frame that determines how many
arguments can appear along with the verb in the surface structure of that sentence (Goldin-
Meadow 1985). For example, four slots underlie a gesture sentence about transferring an
object, one for the verb and three for the arguments (actor, patient, recipient). In contrast, three
slots underlie a gesture sentence about eating an object, one for the verb and two for the
arguments (actor, patient).
Moreover, the arguments of each sentence are marked according to the thematic role they
play. There are three types of markings that are resilient (Goldin-Meadow and Mylander
1984; Goldin-Meadow et al 1994):
1. Deletion – The children consistently produce and delete gestures for arguments as a
function of thematic role; for example, they are more likely to delete a gesture for
the object or person playing the role of transitive actor (soldier in “soldier beats
drum”) than they are to delete a gesture for an object or person playing the role of
intransitive actor (soldier in “soldier marches to wall”) or patient (drum in “soldier
beats drum”).
2. Word order – The children consistently order gestures for arguments as a function
of thematic role; for example, they place gestures for intransitive actors and
patients in the first position of their two-gesture sentences (soldier-march; drum-
beat).
3. Inflection – The children mark with inflections gestures for arguments as a function
of thematic role; for example, they displace a verb gesture in a sentence toward the
object that is playing the patient role in that sentence (the “beat” gesture would be
articulated near, but not on, a drum).
In addition, recursion, which gives natural languages their generative capacity, is a resilient
property of language. The children form complex gesture sentences out of simple ones
(Goldin-Meadow 1982). For example, one child pointed at me, produced a “wave” gesture,
pointed again at me, and then produced a “close” gesture to comment on the fact that I had
waved before closing the door – a complex sentence containing two propositions: “Susan
waves” (proposition 1) and “Susan closes door” (proposition 2). The children systematically
combine the predicate frames underlying each simple sentence, following principles of
sentential and phrasal conjunction. When there are semantic elements that appear in both
propositions of a complex sentence, the children have a systematic way of reducing
redundancy, as do all natural languages (Goldin-Meadow 1982; 1987).
Language use
The deaf children use their gestures for many of the central functions that all natural
languages serve. They use gesture to make requests, comments, and queries about things and
events that are happening in the situation – that is, to communicate about the here-and-now.
Importantly, however, they also use their gestures to communicate about the non-present –
displaced objects and events that take place in the past, the future, or in a hypothetical world
(Butcher, Mylander and Goldin-Meadow 1991; Morford and Goldin-Meadow 1997).
In addition to these rather obvious functions that language serves, the children use their
gestures to communicate with themselves – to self-talk (Goldin-Meadow 2003a). They also
use their gestures to refer to their own or to others’ gestures – for metalinguistic purposes
(Singleton, Morford and Goldin-Meadow 1993). And finally, the children use their gestures
to tell stories about themselves and others – to narrate (Phillips, Goldin-Meadow and Miller
2001). They tell stories about events they or others have experienced in the past, events they
hope will occur in the future, and events that are flights of imagination. For example, in
response to a picture of a car, one child produced a “break” gesture, an “away” gesture, a
pointing gesture at his father, and a “car-goes-onto-truck” gesture. He paused and produced a
“crash” gesture and repeated the “away” gesture. The child was telling us that his father’s
car had crashed, broken, and gone onto a tow truck. Note that, in addition to producing
gestures to describe the event itself, the child produced what we have called a narrative
marker – the “away” gesture, which marks a piece of gestural discourse as a narrative in the
same way that “once upon a time” is often used to signal a story in spoken discourse.
The deaf children we study are not exposed to a conventional sign language and thus cannot
be fashioning their gestures after such a system. They are, however, exposed to the gestures
that their hearing parents use when they speak. These gestures are likely to serve as relevant
input to the gesture systems that the deaf children construct. The question is what this input
looks like and how the children use it.
We first ask whether the gestures that the hearing parents use with their deaf children exhibit
the same structure as their children’s gestures. If so, these gestures could serve as a model
for the deaf children's system. If not, we have an opportunity to observe how the children
transform the input they do receive into a system of communication that has many of the
properties of language.
The hearing parents’ gestures are not structured like their deaf children’s
Hearing parents gesture when they talk to young children (Bekken 1989; Shatz 1982;
Iverson, Capirci, Longobardi and Caselli 1999) and the hearing parents of our deaf children
are no exception. The deaf children’s parents were committed to teaching them to talk and
therefore talked to their children as often as they could. And when they talked, they
gestured.
We looked at the gestures that the hearing mothers produced when talking to their deaf
children. However, we looked at them not as they were meant to be looked at, but as a deaf
child might look at them. We turned off the sound and analyzed the gestures using the same
analytic tools that we used to describe the deaf children’s gestures (Goldin-Meadow and
Mylander 1983; 1984). We found that the hearing mothers’ gestures do not have structure
when looked at from a deaf child’s point of view.
We found no evidence of structure at any level in the mothers’ gestures. With respect to
gestural “words,” the mothers did not have a stable lexicon of gestures (Goldin-Meadow et
al 1994); nor were their gestures composed of categorical parts that formed paradigms
(Goldin-Meadow et al 1995) or varied with grammatical function (Goldin-Meadow et al
1994). With respect to gestural “sentences,” the mothers rarely concatenated their gestures
into strings and thus provided little data from which we (or their deaf children, for that
matter) could abstract predicate frames or deletion, word order, and inflectional marking
patterns (Goldin-Meadow and Mylander 1984). Whereas all of the children produced
complex sentences displaying recursion, only some of the mothers did, and those who did first
produced these sentence types later than their children (Goldin-Meadow 1982). With respect
to gestural use, the mothers did not make displaced reference with their gestures (Butcher et
al 1991), nor did we find evidence of any of the other uses to which the children put their
gestures, including story-telling (e.g., Phillips et al 2001).
Of course, it may be necessary for the deaf children to see hearing people gesturing in
communicative situations in order to get the idea that gesture can be appropriated for the
purposes of communication. However, in terms of how the children structure their gestured
communications, there is no evidence that this structure comes from the children’s hearing
mothers. Thus, although the deaf children may be using hearing people’s gestures as a
starting point, they go well beyond that point – transforming the gestures they see into a
system that looks very much like language.
The gestures that hearing speakers produce in Spanish and Turkish are
different from those that accompany English and Mandarin. As described by Talmy (1985),
Spanish and Turkish are verb-framed languages whereas English and Mandarin are satellite-
framed languages. This distinction depends primarily on the way in which the
path of a motion is packaged. In a satellite-framed language, both path and manner can be
encoded within a verbal clause; manner is encoded in the verb itself (flew) and path is coded
as an adjunct to the verb, a satellite (e.g., down in the sentence "the bird flew down"). In a
verb-framed language, path is bundled into the verb while manner is introduced
constructionally outside the verb, in a gerund, a separate phrase, or clause (e.g., if English
were a verb-framed language, the comparable sentence would be “the bird exits flying”).
One effect of this typological difference is that manner can, depending upon pragmatic
context (Allen et al 2005; Papafragou et al 2006), be omitted from sentences in
verb-framed languages (Slobin 1996).
These four cultures – Spanish, Turkish, American, and Chinese – thus offer an excellent
opportunity to examine the effects of hearing speakers' gestures on the gesture systems
developed by deaf children. Our plan in future work is to take advantage of this opportunity.
If deaf children in all four cultures develop gesture systems with the same structure despite
wide differences in the gestures they see, we will have strong evidence of the biases children
themselves must bring to a communication situation. If, however, the children differ in the
gesture systems they construct, we will be able to explore how a child’s construction of a
language-like gesture system can be influenced by the gestures he or she sees. We have
already found that American deaf children exposed only to the gestures of their hearing
English-speaking parents create gesture systems that are very similar in structure to the
gesture systems constructed by Chinese deaf children exposed to the gestures of their
hearing Mandarin-speaking parents (Goldin-Meadow and Mylander 1998). The question
now is whether these children’s gesture systems are different from those of Spanish and
Turkish deaf children of hearing parents.
Another way to explore the structure gesture takes on when it must carry the full burden of
communication is to ask hearing speakers to put aside speech and describe scenes using only
their hands. We did just that – although the participants in our study were undergraduates at the
University of Chicago, not the deaf children’s hearing mothers (Goldin-Meadow, McNeill
and Singleton 1996). We asked English speakers who had no previous experience with sign
language to describe a series of videotaped scenes using their hands and not their mouths.
We then compared the resulting gestures to the gestures these same adults produced when
asked to describe the scenes using speech.
We found that when using gesture on its own, the adults frequently produced discrete
gestures and combined those gestures into strings. Moreover, the strings were reliably
ordered, with gestures for certain semantic elements occurring in particular positions in the
string; that is, there was structure across the gestures at the sentence level. In addition, the
verb-like action gestures that the adults produced when using gesture on its own could be
divided into handshape and motion parts, with the handshape of the action frequently
conveying information about the objects in its semantic frame; that is, there was some
structure within the gesture at the word level. Importantly, these properties did not appear in
the gestures that these same adults produced along with speech. Thus, only when asked to
use gesture on its own did the adults produce gestures characterized by segmentation and
combination. Moreover, they constructed these gesture combinations with essentially no
external model to guide them.
The adults might have gotten the inspiration to order their gestures from their own English
language. However, the particular order that they used in their gestures did not follow
canonical English word order. For example, adults were asked to describe a doughnut-
shaped object that arcs out of an ashtray. When using gesture without speech, the adults
produced a gesture for the ashtray first, followed by a gesture for the doughnut, and finally a
gesture for the arcing-out action (Goldin-Meadow et al 1996; Gershkoff-Stowe and Goldin-
Meadow 2002). Note that a typical description of this scene in English would follow a
different order: “The doughnut arcs out of the ashtray.”
To explore the generality of this phenomenon, we asked speakers of four languages differing
in their predominant word orders (English, Turkish, Spanish, Chinese) to describe events
using gesture without speech. We found that the word orders the speakers used in their
everyday speech did not influence their gestures –– speakers of all four languages used the
same gesture order. For example, to describe a captain swinging a pail, the adults produced a
gesture for the captain (Actor), then produced a gesture for the pail (Patient), and finally a
gesture for the swinging action (Act), that is, an Actor-Patient-Act (ArPA) order. The ArPA
order was also found when a different group of speakers of the same four languages were
asked to reconstruct the events using transparent pictures. The adults were given no
indication that the order in which they stacked the transparencies was the focus of the study;
in fact, the background of each transparency was clear so that the final product looked the
same independent of the order in which the transparencies were stacked. Nevertheless, the
adults tended to pick up the transparency for the Actor, followed by the transparency for the
Patient, and finally the transparency for the Act, thus again displaying the ArPA order
(Goldin-Meadow et al 2008). Note that the deaf children inventing their own homesign
systems tended to place gestures for Patients before gestures for Acts (the children
frequently omitted gestures for Actors in transitive relations). Moreover, ArPA is the order
currently emerging in a sign language created spontaneously without any apparent external
influence. Al-Sayyid Bedouin Sign Language arose within the last 70 years in an isolated
community with a high incidence of profound prelingual deafness. In the space of one
generation, the language assumed grammatical structure, including ArPA order (Sandler,
Meir, Padden and Aronoff 2005).
Although the adults in our studies incorporated many linguistic properties into the gestures
they produced when using gesture on its own, they did not develop all of the properties
found in natural language, or even all of the properties found in the gesture systems of the
deaf children. In particular, they failed to develop a system of internal contrasts in their
gestures. When incorporating handshape information into their action gestures, they rarely
used the same handshape consistently to represent a given object, unlike the deaf child, whose
handshapes for the same objects were consistent in form and in meaning (Singleton, Morford and Goldin-
Meadow 1993). Thus, a system of contrasts in which the form of a symbol is constrained by
its relationship to other symbols in the system (as well as by its relationship to its intended
referent) is not an immediate consequence of symbolically communicating information to
another. The continued experience that the deaf children had with a stable set of gestures (cf.
Goldin-Meadow et al 1994) may be required for a system of contrasts to emerge in those
gestures.
In sum, when gesture is called upon to fulfill the communicative functions of speech, it
immediately takes on the properties of segmentation and combination that are characteristic
of speech. The appearance of these properties in the adults’ gestures is particularly striking
given that these properties were not found in the gestures that these same adults produced
when asked to describe the scenes in speech. When the adults produced gestures along with
speech, they rarely combined those gestures into strings and rarely used the shape of the
hand to convey any object information at all (Goldin-Meadow et al 1996). In other words,
they did not use their gestures as building blocks for larger units, either sentence or word
units. Rather, they used their gestures to holistically and mimetically depict the scenes in the
videotapes, as speakers typically do when they spontaneously gesture along with their talk, a
topic to which we now turn, focusing in particular on the gestures children produce during
the early stages of language learning.
Children typically point at objects months before they produce words for those objects, and
these early pointing gestures pave the way for later word learning (Iverson and Goldin-Meadow 2005). In addition, children
use iconic or conventional gestures that convey action information (e.g., moving the hand
repeatedly to mouth to convey eating; extending an open palm next to a desired object to
indicate give).
In addition to expanding children’s vocabularies, gesture also paves the way for their early
sentences. Children combine pointing gestures with words to express sentence-like
meanings (“eat” + point at cookie) months before they can express these same meanings in a
word + word combination (“eat cookie”). Importantly, the age at which children first
produce gesture + speech combinations of this sort reliably predicts the age at which they
first produce two-word utterances (Goldin-Meadow and Butcher 2003; Iverson and Goldin-
Meadow 2005; Iverson et al 2008). Gesture thus serves as a signal that a child will soon be
ready to begin producing multi-word sentences. Moreover, the types of gesture + speech
combinations children produce change over time and presage changes in children’s speech
(Özcalıskan and Goldin-Meadow 2005). For example, children produce gesture + speech
combinations conveying more than one proposition (akin to a complex sentence, e.g., “I like
it” + eat gesture) several months before producing a complex sentence entirely in speech (“I
like to eat it”). Gesture thus continues to be at the cutting edge of early language
development, providing stepping-stones to increasingly complex linguistic constructions.
Finding that gesture predicts the child’s initial steps into language learning raises the
possibility that gesture could be instrumental in bringing that learning about. Gesture has the
potential to play a causal role in language learning in at least two non-mutually exclusive
ways.
First, children’s gestures could elicit from their parents the kinds of words and sentences that
the children need to hear in order to take their next linguistic steps. For example, a child who
does not yet know the word “cat” might refer to the animal by pointing at it. His mother
might say in response to the point, “yes, that’s a cat,” thus supplying him with just the word
he is looking for. Or a child in the one-word stage might point at her father while saying
“cup.” Her mother replies, “that’s daddy’s cup,” thus translating the child’s gesture + word
combination into a simple (and relevant) sentence. It turns out that mothers often “translate”
their children’s gestures into words, thus providing timely models for how one- and two-
word ideas can be expressed in English (Goldin-Meadow et al 2007). Gesture thus offers a
mechanism by which children can point out their thoughts to others, who then calibrate their
speech to those thoughts and potentially facilitate language learning.
The second way in which gesture could play a causal role in language learning is through its
cognitive effects (Goldin-Meadow and Wagner 2005). Work on older school-aged children
solving math problems has found that encouraging children to produce gestures conveying a
correct problem-solving strategy increases the likelihood that those children will learn to
solve the problem correctly (Cook and Goldin-Meadow 2006; Goldin-Meadow, Cook and
Mitchell 2008; see also Broaders et al 2007 and Cook, Mitchell and Goldin-Meadow 2008).
These findings suggest that the act of gesturing can promote learning. Similarly, when
learning language, the act of pointing to an object might itself make it more likely that the
pointer will learn a word for that object. Future work is needed to explore whether gesture
can promote language learning not only by allowing children to elicit timely input from their
communication partners, but also by directly influencing their own cognitive state.
Conclusions
Gesture is chameleon-like: its form is tied to the function the gesture is
serving. When gesture assumes the full burden of communication, acting on its own without
speech, it takes on a language-like form, even when the gesturer is a young child who has
not had access to a usable model of a conventional language. As such, gesture can reveal the
linguistic biases that children bring to the task of communication and may be the best
window we have onto those biases. Interestingly, however, when gesture shares the burden
of communication with speech, it loses its language-like structure, assuming instead a
holistic and unsegmented form. Although not language-like in structure when it
accompanies speech, gesture still forms an important part of language. As such, it can tell us
when children are ready to learn language and may even play a role in facilitating the
learning. Gesture can be part of language or can itself be language and thus sheds light on
what it means to be a language.
Acknowledgments
This research was supported by grants from the National Science Foundation (BNS 8810879), the National Institute
of Deafness and Other Communication Disorders (R01 DC00491), the National Institutes of Child Health and
Human Development (R01 HD47450 and P01 HD 40605), and the Spencer Foundation.
References
Acredolo LP, Goodwyn SW. Symbolic gesturing in language development. Human Development.
1985; 28:40–49.
Acredolo LP, Goodwyn SW. Symbolic gesturing in normal infants. Child Development. 1989.
Cook SW, Mitchell Z, Goldin-Meadow S. Gesturing makes learning last. Cognition. 2008; 106:1047–
1058. [PubMed: 17560971]
Fant, LJ. Ameslan: An introduction to American Sign Language. Silver Spring, Md.: National
Association of the Deaf; 1972.
Feyereisen, P.; de Lannoy, J-D. Gestures and speech: Psychological investigations. Cambridge:
Cambridge University Press; 1991.
Gershkoff-Stowe L, Goldin-Meadow S. Is there a natural order for expressing semantic relations?
Cognitive Psychology. 2002; 45(3):375–412. [PubMed: 12480479]
Goldin-Meadow, S. The resilience of recursion: A study of a communication system developed
without a conventional language model. In: Wanner, E.; Gleitman, LR., editors. Language
acquisition: The state of the art. N.Y.: Cambridge University Press; 1982.
Goldin-Meadow, S. Language development under atypical learning conditions: Replication and
implications of a study of deaf children of hearing parents. In: Nelson, K., editor. Children's
Language. Vol. 5. Hillsdale, N.J.: Erlbaum; 1985. p. 197-245.
Goldin-Meadow S, Mylander C. Gestural communication in deaf children: The effects and non-effects
of parental input on early language development. Monographs of the Society for Research in Child
Development. 1984; 49:1–121. [PubMed: 6537463]
Goldin-Meadow S, Mylander C. Spontaneous sign systems created by deaf children in two cultures.
Nature. 1998; 391:279–281. [PubMed: 9440690]
Goldin-Meadow S, Mylander C, Butcher C. The resilience of combinatorial structure at the word level:
Morphology in self-styled gesture systems. Cognition. 1995; 56:195–262. [PubMed: 7554795]
Goldin-Meadow S, Mylander C, Franklin A. How children make language out of gesture:
Morphological structure in gesture systems developed by American and Chinese deaf children.
Cognitive Psychology. 2007; 55:87–135. [PubMed: 17070512]
Goldin-Meadow S, So W-C, Özyürek A, Mylander C. The natural order of events: How speakers of
different languages represent events nonverbally. Proceedings of the National Academy of
Sciences. 2008, in press.
Goldin-Meadow S, Wagner SM. How our hands help us learn. Trends in Cognitive Science. 2005;
9:230–241.
Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological
Science. 2005; 16:368–371.
Iverson JM, Capirci O, Longobardi E, Caselli MC. Gesturing in mother-child interaction. Cognitive
Development. 1999.
McNeill, D. Hand and mind: What gestures reveal about thought. Chicago: The University of Chicago
Press; 1992.
McNeill, D. Speech and gesture integration. In: Iverson, JM.; Goldin-Meadow, S., editors. The nature
and functions of gesture in children's communications (New Directions for Child Development,
No. 79). San Francisco: Jossey-Bass; 1998. p. 11-28.
Moores, DF. Nonvocal systems of verbal behavior. In: Schiefelbusch, RL.; Lloyd, LL., editors.
Language perspectives: Acquisition, retardation, and intervention. Baltimore: University Park
Press; 1974.
Morford JP, Goldin-Meadow S. From here to there and now to then: The development of displaced
reference in homesign and English. Child Development. 1997; 68:420–435. [PubMed: 9249958]
Newport, EL.; Meier, R. The acquisition of American Sign Language. In: Slobin, DI., editor. The
cross-linguistic study of language acquisition. Vol. 1. Hillsdale, N.J.: Erlbaum; 1985.
Özcalıskan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development.
Cognition. 2005; 96:B101–B113.
Özyürek A, Kita S. Expressing manner and path in English and Turkish: Differences in speech,
gesture, and conceptualization. Proceedings of the Cognitive Science Society. 1999; 21:507–512.
Özyürek A, Kita S, Allen S, Furman R, Brown A. How does linguistic framing influence co-speech
gestures? Insights from crosslinguistic differences and similarities. Gesture. 2005; 5:216–241.
Papafragou A, Massey J, Gleitman L. When English proposes what Greek presupposes: The cross-
linguistic encoding of motion events. Cognition. 2006; 98:B75–B98. [PubMed: 16043167]
Phillips S, Goldin-Meadow S, Miller P. Enacting stories, seeing worlds: Similarities and differences in