Native Listening

Language Experience and the Recognition of Spoken Words

Anne Cutler

The MIT Press


Cambridge, Massachusetts
London, England
© 2012 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic
or mechanical means (including photocopying, recording, or information storage and retrieval)
without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales pro-
motional use. For information, please email [email protected] or write to Special
Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

This book was set in Times Roman by Toppan Best-set Premedia Limited. Printed and bound
in the United States of America.

Library of Congress Cataloging-in-Publication Data

Cutler, Anne.
Native listening : language experience and the recognition of spoken words / Anne Cutler.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-01756-5 (alk. paper)
1. Speech perception. 2. Listening. 3. Language and languages—Variation. 4. Speech
processing systems. 5. Linguistic models. I. Title.
P37.5.S68C88 2012
401′.95—dc23
2011045431

10 9 8 7 6 5 4 3 2 1
To all those who made it possible, with a special salutation to those among them who
are no longer here to read it, and foremost among the latter Betty O’Loghlen Cutler
(1915–1994) and C. Ian Cutler (1920–2005), who involuntarily provided me with the
cortical wherewithal to support a career in speech perception research (see chapter
8) and deliberately provided me with the confidence to pursue it.
Contents

Preface xiii

1 Listening and Native Language 1


1.1 How Universal Is Listening? 3
1.2 What Is Universal in Listening? 5
1.3 What Is Language Specific in Listening? 10
1.4 Case Study 1: The Role of a Universal Feature in Listening 11
1.4.1 Vowels and Consonants in Word Recognition: Reconstructing Words 11
1.4.2 Vowels and Consonants in Word Recognition: Initial Activation 14
1.4.3 Detecting Vowels or Consonants: Effects of Phonetic Context 16
1.4.4 A Universal Feature in Listening: Summary 19
1.5 Case Study 2: The Role of a Language-Specific Feature in Listening 21
1.5.1 Lexical Stress in Word Recognition: A Comparison with Vowels and
Consonants 22
1.5.2 Lexical Stress in Word Recognition: Language-Specificity 23
1.5.3 Stress in Word Recognition: The Source of Cross-language Variability 25
1.5.4 A Language-Specific Feature in Listening: Summary 27
1.6 The Psycholinguistic Enterprise 27
1.6.1 When Psycholinguistics Acted as If There Were Only One Language 28
1.6.2 What Would Life Be Like If We Only Had One Language? 30

2 What Is Spoken Language Like? 33


2.1 Fast, Continuous, Variable, and Nonunique 33
2.2 How Listeners Succeed in Recognizing Words in Speech 39
2.2.1 Ambiguous Onsets 40
2.2.2 Within- and Cross-Word Embeddings 43
2.3 The Nested Vocabulary 45
2.3.1 Embedding Statistics 48
2.3.2 Lexical Statistics of Stress 50
2.3.3 The Lexical Statistics of Germanic Stress 52
2.4 Categorizing Speech Input 54
2.4.1 Language-Specific Categorical Perception 55
2.4.2 Categories and Words 59
2.4.3 Vowel and Consonant Categories and Their Implications 60
2.5 Lexical Entries 64

2.5.1 Morphological Structure 65


2.5.2 Open and Closed Lexical Classes 66
2.6 Frequency Effects 68
2.7 Conclusion: Vocabularies Guide How Spoken-Word Recognition Works 69

3 Words: How They Are Recognized 73


3.1 Testing Activation 74
3.1.1 With Lexical Decision 74
3.1.2 With Cross-modal Priming 76
3.1.3 With Eye-Tracking 77
3.2 Modeling Activation 79
3.2.1 Multiple Concurrent Alternatives 79
3.2.2 Competition between Alternatives 82
3.3 Testing Competition 85
3.4 Phonological and Conceptual Representations 88
3.4.1 Separate Representations 89
3.4.2 Differences between Representations 92
3.5 One-to-Many Mappings of Phonological to Conceptual Representation 95
3.6 Dimensions of Activation: Segmental and Suprasegmental Structure 97
3.6.1 Lexical Tone in Activation 98
3.6.2 Durational Structure (Quantity) in Lexical Activation 101
3.7 Morphological Structure in Lexical Activation 103
3.8 The Case of Gender 106
3.9 Open versus Closed Classes in Lexical Activation 108
3.10 Conclusion 112

4 Words: How They Are Extracted from Speech 117


4.1 What English Stress Is Good For 120
4.2 Using Stress as a Segmentation Cue in English 123
4.3 Segmentation in a Language without Stress 126
4.4 Stress and the Syllable: Basic Concepts of Rhythm 129
4.5 The Metrical Segmentation Strategy: A Rhythmic Segmentation
Hypothesis 132
4.6 Testing the Rhythmic Segmentation Hypothesis 134
4.7 The Rhythmic Class Hypothesis 135
4.8 Perceptual Tests of Rhythmic Similarity 138
4.9 Further Phonological Cues to Segmentation 139
4.10 Which Segmentation Cue? 142
4.11 Learning to Segment an Artificial Language 145
4.11.1 An ALL Renaissance 146
4.11.2 ALL as a Test Bed for Segmentation Cues 146
4.11.3 Dissociating Word-Level and Phrase-Level Segmentation in ALL 150
4.12 Conclusion 153

5 Words: How Impossible Ones Are Ruled Out 155


5.1 The Possible Word Constraint 156
5.2 Implementation of the PWC in Shortlist 159

5.3 Is the PWC Universal? 164


5.4 Exploring the PWC across Languages 167
5.5 Vowelless Syllables as a Challenge to the PWC 171
5.5.1 Portuguese Utterances with Deleted Vowels 174
5.5.2 Japanese Utterances with Devoiced Vowels 176
5.5.3 Slovak Utterances Containing Consonants That Might Be Function
Words 177
5.5.4 Berber Utterances with Vowelless Syllables That Might Be Content
Words 178
5.6 The Origin of the PWC? 179
5.7 Conclusion 182
5.7.1 The PWC in the Speech Recognition Process 182
5.7.2 The PWC and the Open and the Closed Classes 185
5.7.3 The PWC and the Segmentation Process 186
5.7.4 The PWC: Two Further Avenues to Explore 187
5.7.5 The PWC and the Vowel-Consonant Difference 188

6 What Is Spoken Language Like? Part 2: The Fine Structure of Speech 191
6.1 Predictable and Unpredictable Variation 192
6.2 Segmental Assimilation Phenomena 198
6.2.1 Perception of Phonemes That Have Undergone Regressive
Assimilation 200
6.2.2 Perception of Phonemes That Have Undergone Progressive
Assimilation 202
6.2.3 Obligatory versus Optional Assimilation in Word Recognition 203
6.2.4 Universal and Language Specific in the Processing of Assimilation 205
6.3 Liaison between Words 206
6.4 Segment Insertion in Words 208
6.5 Segment Deletion 210
6.6 Variant Segmental Realizations 213
6.6.1 Word Recognition and Word-Final Subphonemic Variation 214
6.6.2 Word Recognition and Subphonemic Variation in Word Onsets 215
6.6.3 Word Recognition and Phonemic Variation 219
6.7 Phonemic Neutralization 221
6.8 Multiple Concurrent Variations 222
6.9 Conclusion 225

7 Prosody 227
7.1 Prosody in the Lexical Activation and Competition Processes 229
7.1.1 Stress 229
7.1.2 Pitch Accent 237
7.2 Irrelevant Lexical Prosody 241
7.3 Prosodic Contexts and Their Role in Spoken-Word Processing 242
7.3.1 Processing Prosodic Salience: Words and Intonation Contours 243
7.3.2 Processing Cues to Juncture: Fine Prosodic Detail 248
7.4 Universal Processing of Prosodic Structure? 253
7.5 Conclusion: Future Developments in Perceptual Studies of Prosody? 258

8 Where Does Language-Specificity Begin? 259


8.1 What Fetal Sheep Might Extract from the Input 260
8.2 What the Human Fetus Extracts from the Input 261
8.3 Discrimination, Preference, and Recognition 263
8.3.1 The High-Amplitude Sucking and Visual Fixation Procedures 265
8.3.2 The Headturn Preference Procedure 265
8.3.3 Looking Tasks 266
8.3.4 The Infant’s Brain 267
8.4 The First Stages in Language-Specific Listening 267
8.5 Refining Language-Specific Listening: The Phoneme Repertoire 268
8.5.1 Universal Listeners: The Early Months 269
8.5.2 Language-Specific Listeners: What Happens Next 270
8.5.3 How Universal Listeners Become Language Specific 271
8.6 How the Input Helps 273
8.6.1 Van de Weijer’s Corpus 274
8.6.2 The Phonemic Cues in Infant-Directed Speech 276
8.6.3 Speech Segmentation: The Role of Infant-Directed Speech 277
8.7 Beginning on a Vocabulary 279
8.7.1 Segmentation Responses in the Infant’s Brain 282
8.7.2 Determinants of Segmentation 284
8.8 Statistics—A Universal Segmentation Cue? 286
8.9 Open and Closed Lexical Classes—A Universal Segmentation Cue? 288
8.10 The First Perceived Words 290
8.10.1 The Form of the First Words 291
8.10.2 What Is Relevant for the First Words 293
8.11 More Than One Language in the Input? 294
8.12 Individual Differences in the Development of Speech Perception 295
8.13 Conclusion: Languages Train Their Native Listeners 298

9 Second-Language Listening: Sounds to Words 303


9.1 First-Language Listening and Second-Language Listening 304
9.2 Distinguishing Non-L1 Phonetic Contrasts 305
9.2.1 The Perceptual Assimilation Model 306
9.2.2 The Speech Learning Model 307
9.2.3 Familiar Phonetic Contrasts in Unfamiliar Positions 308
9.2.4 Effect of Category Goodness Differences 310
9.3 The Activation of L2 Vocabulary 312
9.3.1 Pseudohomophones in Lexical Activation and Competition 313
9.3.2 Spuriously Activated Words in Lexical Activation and Competition 314
9.3.3 Prolonged Ambiguity in Lexical Activation and Competition 316
9.4 The Lexical Statistics of Competition Increase in L2 Listening 318
9.4.1 Lexical Statistics of Pseudohomophony 319
9.4.2 Lexical Statistics of Spurious Embedding 320
9.4.3 Lexical Statistics of Prolonged Ambiguity 322
9.4.4 Lexical Statistics Extrapolated 323
9.5 The L1 Vocabulary in L2 Word Activation 324
9.6 The Relation between the Phonetic and the Lexical Level in L2 328
9.7 Conclusion 335

10 Second-Language Listening: Words in Their Speech Contexts 337


10.1 Segmenting Continuous L2 Speech 338
10.1.1 The “Gabbling Foreigner Illusion”: Perceived Speech Rate in L1 versus L2 as
a Segmentation Issue 338
10.1.2 L1 Rhythm and L2 Segmentation 340
10.1.3 L1 Phonotactics in L2 Segmentation 342
10.2 Casual Speech Processes in L2 344
10.3 Idiom Processing in L2 347
10.4 Prosody Perception in L2 348
10.4.1 Word-Level Prosody and Suprasegmentals 348
10.4.2 Prosodic Cues to L2 Syntactic Boundaries 350
10.4.3 Prosodic Cues to L2 Semantic Interpretation 351
10.5 Higher-Level Processing: Syntax and Semantics in L2 353
10.6 Why Is It So Hard to Understand a Second Language in Noise? 355
10.6.1 Mainly a Phonetic Effect or Mainly a Higher-Level Effect? 355
10.6.2 The Multiple Levels of L1 Advantage 359
10.7 Voice Recognition in L2 versus L1 362
10.8 A First Ray of Hope: When L2 Listeners Can Have an Advantage! 364
10.9 A Second Ray of Hope: The Case of Bilinguals 368
10.10 The Language Use Continuum 371
10.11 Conclusion: Universal and Language Specific in L1 and L2 372

11 The Plasticity of Adult Speech Perception 375


11.1 Language Change 376
11.2 Language Varieties and Perception of Speech in Another Dialect 378
11.2.1 Cross-dialectal Differences in Perceptual Cues for Phonemes 378
11.2.2 Mismatching Contrasts across Varieties, and the Effects on Word
Recognition 381
11.2.3 Intelligibility and Dialect Mismatch 382
11.3 Perception of Foreign-Accented Speech 385
11.4 Perceptual Effects of Speaker Variation 386
11.5 The Learning of Auditory Categories 388
11.6 The Flexibility of L1 Categories 389
11.6.1 Category Adjustment Caused by Phonetic Context 390
11.6.2 Category Adjustment Caused by Phonotactic Regularity 390
11.6.3 Category Adjustment Caused by Inferred Rate of Speech 391
11.6.4 Category Adjustment at All Levels of Processing 391
11.7 Perceptual Learning 393
11.7.1 Lexically Induced Perceptual Learning 394
11.7.2 Specificity of Perceptual Learning 397
11.7.3 Durability of Perceptual Learning 398
11.7.4 Generalization of Perceptual Learning 399
11.8 Learning New Words 400
11.9 Extent and Limits of Flexibility and Plasticity 402
11.9.1 The Effects of Bilingualism on Cognition 404
11.9.2 Early Exposure 405
11.9.3 Training L2 Speech Perception 406
11.10 Conclusion: Is L1 versus L2 the Key Distinction? 407

12 Conclusion: The Architecture of a Native Listening System 411


12.1 Abstract Representations in Speech Processing 412
12.1.1 Abstract Prelexical Representations 412
12.1.2 Abstract Representation of Phoneme Sequence Probabilities 414
12.1.3 Abstract Representation of Prosodic Patterning 415
12.1.4 Underlying Representations in the Lexicon 416
12.1.5 Separate Lexical Representations of Word Form and Word Meaning 417
12.1.6 Phonological Representations and Where They Come From 418
12.2 Specific Representations in Speech Processing 421
12.2.1 Modeling Specific Traces 422
12.2.2 The Necessity of Both Abstract and Specific Information 423
12.2.3 What Determines Retention of Speaker-Specific Information 425
12.3 Continuity, Gradedness, and the Participation of Representations 425
12.3.1 Case Study: A Rhythmic Category and Its Role in Processing 427
12.3.2 Locating the Rhythmic Category in the Cascaded Model 429
12.4 Flow of Information in Speech Processing 431
12.4.1 Lexical Effects in Phoneme Restoration and Phoneme Decision 433
12.4.2 Lexical Effects in Phonemic Categorization 436
12.4.3 Compensation for Coarticulation 437
12.4.4 The Merge Model 440
12.4.5 Is Feedback Ever Necessary? 443
12.5 Conclusion: Universal and Language Specific 445

Phonetic Appendix [fənɛtɪk əpɛndɪks] 451


Notes 455
References 459
Name Index 533
Subject Index 549
Preface

Readers who reviewed my manuscript remarked that in some ways it is "personal."
This is fair; the book recounts the development of psycholinguistic knowledge about
how spoken words are recognized over the nearly four decades that this topic has
been researched, and that makes it a personal story in that those decades are the
ones I have spent as a psycholinguist. Inevitably (it seems to me), the book has
turned out to center on my own work and that of the many colleagues and graduate
students with whom I have been lucky enough to work, because whenever I wanted
an example to illustrate a particular line of research, the rich archive of this long
list of collaborations usually turned one up.
Most psycholinguists enter the field via linguistics or via psychology, rarely from
a combination of the two. Although my background has more psychology in it than
linguistics, it does have both. I trained as a language teacher and was at one time
apparently set for an academic career teaching German in Australian universities,
until I decided to abandon that and do a Ph.D. in psychology instead. What I took
with me from the former line of work to the latter was suspicion as to whether
conclusions drawn about one language could be expected to hold for another. This
was reinforced by the choice of a dissertation topic in prosody, where language-
specificity in structure is blindingly obvious. Some parts of this book’s text are more
narrative or more personal than others, and from those bits some more of the inter-
play of career and research choices can perhaps be gleaned. The central thread
through the entire book, however, is the issue that has occupied me since soon after
I came into psycholinguistics—namely, what is universal and what is language spe-
cific in the way we listen to spoken language.
This central thread delivered the book’s title: listening to speech is a process of
native listening because so much of it is exquisitely tailored to the requirements of
the native language. The subtitle conveys the additional message that the story
effectively stops at the point where listeners recognize words. How is it possible to
fill a fat book with research on the perception of spoken language and only get the
story as far as words? Partly this is because there has been explosive growth over
recent years in our knowledge of how spoken-word recognition happens, but mainly
it is because the spoken-word recognition story so beautifully forms a theater in
which the whole story of language-specificity in listening plays out.
There is a tendency in all science for the majority of research, and certainly for
the majority of highly cited research, to come from countries where English is the
local language. In many branches of science this has no further consequences for
the science itself, but in psycholinguistics it can have far-reaching implications for
the research program and for the development of theory. As chapter 1 explains, this
threat was indeed real in early psycholinguistics.
Psycholinguistics is lucky, however, in that a serious counterweight to the mass of
English-based evidence has been added by the experiments of the Max Planck
Institute for Psycholinguistics (where I have had the good fortune to work since
1993). This institute, in existence since the late 1970s, happens to be in Nijmegen,
where the local language is Dutch. Dutch may not be that different from English,
but in some respects it is certainly different enough to motivate interesting conclu-
sions (see chapters 1 and 2). Quite a lot of the work discussed in this book was
carried out on Dutch. I hope that one effect of this book is that evidence will be
found from many more languages and wherever informative comparisons are to be
made.

The Book and Its Audience

The book’s introductory chapter lays out the necessity of a crosslinguistic approach
to the study of listening to speech and illustrates it with two in-depth case studies.
After this, the story of what speech is like, and how its structure determines every
aspect of the spoken-word recognition process, is laid out in chapters 2–6. Chapters
11 and 12 then fill out the psychological picture—chapter 11 by addressing the flex-
ibility of speech processing, and chapter 12 by drawing out further implications of
the story for an overall view of the language processing system.
Whoops! What happened to chapters 7–10? They enrich the story with further
indispensable detail: chapter 7 on the processing of prosodic structure, chapter 8 on
infant speech perception, and chapters 9 and 10 on perception of speech in a second
language.
The audience I had in mind while writing this book was, first of all, people like
my own graduate students; the book contains what I would like young researchers
to know as they survey the field in search of a productive area to work on. (My
personal tip for readers falling into this category is that chapter 6 or chapter 11
could be exciting places to start right now—a lot of progress is happening there.)
My graduate students come from a variety of backgrounds, both disciplinary
(psychology, linguistics, and adjacent fields) and theoretical. Theories, as we know,
are the motors of science, with data being the fuel they run on. Surely, every single
piece of research described in this book was motivated by a theoretical issue, and
many were specifically prompted by particular models of spoken-word recognition.
Although it is not hard to discern where my own theoretical preferences lie, this
book is not aimed at pushing a particular approach. (Every theory, after all, is ulti-
mately wrong in some way.) The data any theory has called into being will remain,
however, and may serve as the foundation for new theories. So that research findings
may be evaluated by everyone, regardless of their own preferences, the text mostly
tries to concentrate on the implications of the findings for new theories, rather than
on the particular motivations that brought them into being.
This has meant that many prominent theories in the field hardly appear in the
text (although the data they have generated may appear). The focus of my investiga-
tion has not been a history of the field itself but the growth in our knowledge about
how spoken-word recognition works and, in particular, the role of language struc-
ture in this process. Likewise, many important topics of debate are not discussed
here if they do not directly generate lessons about how we understand words; this
includes debates in the phonetic literature on the nature of speech sounds (i.e., not
only does the spotlight not shine much above the word level, it doesn’t shine much
below it, either). A text has to stop somewhere, and unless a topic had serious influ-
ence in the psycholinguistic literature on recognizing words, I left it out. And finally,
although I felt that chapters 7–10 were needed, I did not add further enriching
chapters on, for instance, language impairment at the lexical level, the representa-
tion of words in the brain, children’s representation of words, the role of spelling
in spoken-word recognition, or many other topics that might have been added
even without leaving the spoken/lexical level of focus or indeed without leaving
the list of topics that have exercised me. Maybe these can be included in a future
volume 2.

Thanks to All the Following

The book is dedicated “to all those who made it possible.” The list of people in this
category is far too long to enumerate on a dedication page. The list is so long
because I have been so very lucky in my scientific life.
Best of all, I have enjoyed long-lasting collaborations. Experimental science is not
a solitary undertaking; we all work in teams. But a long-term collaboration is like a
bottomless treasure chest, always able to supply new gems of ideas and the riches
of intellectual comradeship. For bringing such fortune into my life I am deeply
grateful to Dennis Norris (35 years so far), James McQueen (25 years so far),
Takashi Otake (21 years so far), and the co-makers of magical music with Dennis
and me for more than a decade of the 1980s and early 1990s, Jacques Mehler and
Juan Segui. (What a lot of happy years all that adds up to!) Indeed I owe an enor-
mous debt of gratitude to all the colleagues with whom I have worked productively
and enjoyably, for however long that has been. This holds too for my students—
thirty-three dissertation completions so far and more on the way; I am grateful not
only for the joys of collaborative work but for all the times they made me rethink
what I thought I knew. And while I am on that topic, let me also thank my scientific
enemies (some of them from time to time good friends, too) for the same service.
It is a mystery to me why any scientist would ever confuse theoretical disagreement
with personal incompatibility. We should all cherish those who disagree with us; we
would not progress half as quickly without them!
Now comes another enormous group to whom gratitude is owed. Throughout this
book I describe experimental work in psycholinguistic laboratories. This work would
be impossible without the enthusiastic participation (often for trivial monetary
reward, or course credit, or no material reward at all) in those experiments by listen-
ers who were willing to subject themselves to sometimes rather tedious tasks and
only receive explanation of the purpose of the experiment after it is over. Their
intelligent comments on the experiments and on their experience as subjects have
frequently been of considerable help to the experimenters. The tasks to which they
were subjected, however, were not designed by them nor did they know in advance
the rationale of the study in which they participated. This allows me to get off my
chest that it is entirely appropriate to refer to them under these circumstances as
“subjects,” and that referring to them by this or any other term in no way signifies
whether or not they were treated with respect.
For the past three decades these experiments have been carried out in some of
the most supportive working environments any scientist could wish for; from 1982
to 1993 at the Medical Research Council’s Applied Psychology Unit, from 1993 on
at the Max Planck Institute for Psycholinguistics, and from 2006 on at MARCS
Auditory Laboratories. Being privileged to work in such approximations to paradise
is yet another way in which I have been inordinately fortunate, and I convey my
thanks to all the colleagues who have contributed to making each place special.
Finally, I offer acknowledgments at the more specific level of the book itself. My
life in science so far has made me aware that writing articles describing experimental
work has become a natural exercise for me, but writing a book is highly unnatural;
for advice on the art of completing a book manuscript, but even more for setting a
magnificent example, I owe unending gratitude to Virginia Valian. To Bob Ladd, I
offer heartfelt thanks for many years of patiently showing me how prosody works.
Both of these also belong, with Ann Bradlow, Roger Wales, Elizabeth Johnson,
Taehong Cho, Emmanuel Dupoux, the reading group of the MPI Comprehension
Group, and Kate Stevens and the MSA Writing Group at MARCS, in the category
of those who read and gave comments on all or part of the manuscript. I am awed
and honored by their willingness to put so much of their time and effort into this
task, and I am deeply grateful to them all. Roger Wales, especially, spent a month
being the first manuscript’s first reader; it was one of the last months in which he
took good health for granted. I am but one of the many in psycholinguistics who
wish that he could be a reader of the published version too, rather than one of its
dedicatees.
At the manuscript preparation stage, a small army of MPI research assistants (I
said this was one of the most supportive environments imaginable, didn’t I?) has
been involved in solving technical issues, including making figures. Thanks to all of
them. Special thanks to Natasha Warner for lending her voice for many spectro-
grams, and to Bernadette Jansma, who (years ago!) drew the tasks in the panels. But
by far the brunt of the manuscript work has been borne, as ever, by Rian Zondervan.
The really essential people in one’s life always figure right at the end of the acknowl-
edgments! Rian’s help with the work on the book has been without any doubt
indispensable, and that has been true of her help on all counts for, now, nearly two
decades. Last of all, so most essential of all, comes my husband, Bill Sloman, his
years of patience with “The Book” now swelling the grand sea of all that I thank
him for.
1 Listening and Native Language

This book is about listening to speech—specifically, how humans turn the sounds they hear
into words they can recognize. The central argument of the book is that the way we listen is
adapted by our experience with our native language, so that precisely how we listen differs
from one language to another. The psycholinguist’s task is to find out what parts of the process
of listening are universal, what parts are language specific, and how they became that way.
Two case studies in this chapter illuminate how the task needs to be approached. First, con-
sider that listening is given a universal substrate by how speech is produced; this determines,
for instance, the acoustic characteristics of vowels and consonants, which in turn determine
how listeners perceive speech sounds. But although acoustic vowel-consonant differences
are universal, studies of the perception of vowels and consonants, and of their role in word
recognition, show that examining these issues in just one language is not enough. Only
cross-language comparison allows the full picture to appear. Second, the same is true of a
language-specific feature such as lexical stress. Comparisons across languages reveal its role
in listening, and despite the language-specific nature of stress, its story further illuminates the
universal account of listening. Listening must be studied crosslinguistically.

A book about listening should really be spoken. The recurring themes of this book
are how naturally and effortlessly we understand speech in our native tongue and
how different listening to a nonnative language can be from listening to the native
tongue. Speech would do these themes greater justice than print. The final conclu-
sion of the book is that listening to speech is so easy (when it is easy), or so hard
(when it is hard), because it depends so much on our previous experience of listen-
ing to speech. A printed text leaves such experience untouched; a spoken text would
augment it!
Using speech is one of humankind’s favorite activities but also an outstanding
achievement of the human mind. With such vital subject matter, psycholinguistics
is an exceptionally rewarding discipline; speaking, listening, signing, reading, and
writing are all cognitive operations of enormous complexity that have richly repaid
investigation. Listening seems like the easiest of these operations; we cannot recall
having had to learn to listen to speech, and under most circumstances we notice no
difficulty in listening. Yet, as the following chapters will reveal, when we are listening
we are carrying out a formidable range of mental tasks, all at once, with astonishing
speed and accuracy. Listening involves evaluating the probabilities arising from the
structure of the native vocabulary (see chapter 2), considering in parallel multiple
hypotheses about the individual words making up the utterances we hear (see
chapter 3), tracking information of many different kinds to locate the boundaries
between these words (see chapters 4 and 5), paying attention to subtle variation in
the way the words are pronounced (see chapter 6), and assessing not only informa-
tion specifying the sounds of speech—vowels and consonants—but also, and at the
same time, the prosodic information, such as stress and accent, that spans sequences
of sounds (see chapter 7).
So listening is not really easy, though it undeniably seems easy. After those six
chapters have laid out the complexities involved, chapter 8 discusses how listening
to speech first begins and acquires its necessarily language-specific character, and
chapters 9 and 10 detail the consequences of this language-specific specialization
for listening to other languages. Chapter 11 elaborates on the flexibility and adapt-
ability of listening (as long as we are listening in the native language), and chapter
12 draws some general conclusions about how language-specificity and universality
fit together in our language processing system.

1. Psycholinguistic Experiments
The comprehension of spoken language is a mental operation, invisible to direct inspec-
tion. No techniques yet in existence allow us to observe the process of recognizing
individual words across time. But psycholinguists have devised many ingenious ways of
looking at the process indirectly (in the laboratory, mostly). These laboratory methods
often involve measuring the speed with which a decision is made, or a target detected, or
a verbal response issued (reaction time, or RT for short). Alternatively, the percentage of
correct responses may be measured, or the percentage of responses of different types.
It is important to make sure that the task is reflecting what we want it to reflect—specifi-
cally, that there are no uncontrolled artifacts that might support alternative interpreta-
tions. Also it is important to relate the results of the highly controlled laboratory task to
the natural processes we actually want to study. Some tasks that have been in use in the
field for many years are now well understood and can provide quite subtle and detailed
insights.
Ten of the most useful tasks—thus, the ones that are used in most of the research
described in this book—will be illustrated and briefly described in panels in the first three
chapters. Basic findings with most of these tasks are listed in Grosjean and Frauenfelder
1996. Panel 12 in chapter 12 gives some hints for deciding which task to use to test a
particular hypothesis as well as how to construct a new task if none of the existing ones
seems quite right for the job.
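
As a minimal illustration of how the measures named in panel 1 are handled, the sketch below (with invented trial records and condition names, not data from any study reported here) computes mean reaction time over correct trials and the percentage of errors for each condition.

```python
# Minimal sketch: summarize reaction-time data per condition.
# The trial records and condition names below are invented for illustration.
from statistics import mean

trials = [
    # (condition, response correct?, reaction time in ms)
    ("word",    True,  612), ("word",    True,  587), ("word",    False, 845),
    ("nonword", True,  701), ("nonword", True,  688), ("nonword", True,  664),
]

def summarize(trials):
    by_condition = {}
    for condition, correct, rt in trials:
        by_condition.setdefault(condition, []).append((correct, rt))
    for condition, records in sorted(by_condition.items()):
        correct_rts = [rt for correct, rt in records if correct]  # RTs of correct trials only
        error_rate = 1 - len(correct_rts) / len(records)
        print(f"{condition}: mean RT {mean(correct_rts):.0f} ms "
              f"(correct trials only), error rate {error_rate:.0%}")

summarize(trials)
```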

The rest of this introductory chapter explains why listening to speech should be
studied, and understood, by comparing across languages. There are some ways in
which languages are all much the same, and there are some ways in which they are
wildly different, but whatever aspect is being considered, crosslinguistic comparison
is vital for understanding it fully. Also, in this chapter, attention is given to how
psycholinguists study listening. We know about the impressive number of cognitive
operations so efficiently accomplished during listening because researchers have
devised ingenious methods for distinguishing between these operations and assess-
ing the course and outcome of each one in turn. These laboratory methods mostly
involve simple tasks performed while listening to speech; panel 1 introduces this
thread, which runs through this chapter and the two that follow.

1.1 How Universal Is Listening?

If psycholinguistics has one most fundamental premise, it is this: children learn the
language of the environment. This concerns listening because of the vital role played
by the language input that the little learner receives. Take a baby born of English-
speaking parents, a baby born of Taiwanese-speaking parents, and a baby born of
Igbo-speaking parents, place those children in the same monolingual Spanish envi-
ronment, and they will all learn Spanish, in much the same way, and certainly without
any differences that can be traced to the language spoken by their biological parents.
Place them in a Tamil-speaking environment, and they will all acquire Tamil. Expose
them to an environment where everyone is deaf and uses a sign language, and they
will all acquire the sign language. The only thing that really matters for the learning
outcome is the language input to which the child is exposed.
This leads us to conclude that the infant, not specialized for any particular
language, is a sort of universal learning system. In line with this, the process of
language acquisition runs its course in a very similar way for all languages. Not
only that, there are structural similarities common to all natural languages.
Acquisition of Spanish and acquisition of Taiwanese and acquisition of Igbo are
not radically different achievements of the human mind, but in essence the same
achievement.
This conclusion is important for psycholinguists because, as cognitive psycholo-
gists, they want to know how the mind works—the human mind. They want to know
how the Spanish mind works, and the Taiwanese mind, and the mind of every lan-
guage user in the world. Language is the core of the human mind and its operation.
Psycholinguists thus seek to understand how language—any language—is spoken
and understood. Since psycholinguistics began as a separate discipline in the mid-
twentieth century, psycholinguists have had as their goal a universal model of lan-
guage processing—that is, of the operations by which an intention to communicate
becomes a produced utterance and a perceived message becomes understood com-
munication. This universal model should account for language processing in any
human mind. Universal commonalities in the acquisition of different languages are
helpful in the pursuit of this goal.
However, languages do differ. The extent to which they differ can still only be
guessed at. Many more languages have died out than are in existence today. The
Ethnologue Web site (http://www.ethnologue.com) tells us that 94 percent of lan-
guages in existence today have fewer than a million speakers each, and together are
spoken by only about 6 percent of the world’s population. Those “smaller” languages
are far less likely to have been fully described by linguists than the remaining 6
percent that comprise the “larger” languages with more than a million speakers
each. This goes for psycholinguistic study too, needless to say; although in this book
we will consider listening data from twenty or so different languages, they are all
members of the top set of well-represented languages. Slovak (chapter 5) and
Finnish (chapters 4 and 7), with around five million first-language speakers each,
are as far toward “small” as the account in this book can go.
It might be tempting to think that the clearest view of a potentially universal
model would be afforded by universal aspects of structure. Suppose that every lan-
guage in the world indisputably evinces a particular feature. Surely, the processing
of this feature will be the same in every language? By extension, it should then not
matter in what language we study such processing—the result will always be the
same. We can take one of the many tasks that psycholinguists have devised to
examine language processing (see panel 1) and produce the same experimental
outcome in any language. But, in fact, this will not work, as two detailed case studies
in the middle section of this chapter will illustrate. One case study shows that under-
standing an aspect of language structure that is truly universal still requires attention
to language-specific factors. We cannot assume that because something is universally
present in languages, it will not matter what language we study it in—indeed, it does
matter. The second case study shows that investigating an aspect of language struc-
ture that is unquestionably not universal, but definitely language specific, can still
shed light on the way all humans process language. We cannot assume that because
something is language specific, it will not be informative about universal character-
istics of processing—it can be.
Much of this book, in fact, deals with the lines of research typified by these case
studies. Crosslinguistic research has revealed an intricate interplay of language
structure and the processes of spoken-language understanding. Even with the
limited knowledge that we have of crosslinguistic diversity, and with the even more
limited psycholinguistic data available, it has become clear that structural differ-
ences across languages have implications for how languages are understood and
spoken. In other words, if there is a universal model of language processing, it cannot
be one in which all the processes at every level of speaking and understanding are
constant across languages. (We return to this issue in chapter 12.)

1.2 What Is Universal in Listening?

The question of what is universal across languages, or indeed whether anything is,
has occupied much linguistic energy over the years (e.g., Greenberg 1963; Comrie
1989; and, as a recent installment, Evans and Levinson 2009, with the associated
commentaries). Fortunately, there is an undisputed universal substrate to the task
of listening to speech. Speech is spoken by humans; any aspect of speech that follows
necessarily from the physiology of human speech production will be universal across
languages. The nature of the task of recognizing words also has many unavoidable,
and hence universal, characteristics, described in chapters 2 and 3. The separable
sounds of speech, or phonemes, are universal constructs by definition: a phoneme
is a minimal unit that distinguishes one word from another. Such a minimal unit can
be a vowel (bad differs from bed), or it can be a consonant (bad differs from sad
and from bag)—speech sounds come in these two varieties: vowels and consonants.
This fundamental phonological difference is certainly determined by how we articu-
late speech and thus is certainly shared by all languages.
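As a concrete sketch of the phoneme-as-minimal-contrast idea (the broad transcriptions and the small vowel set below are illustrative assumptions, not an analysis from this book), the following code checks whether two words differ in exactly one segment and, if so, whether that contrast is vocalic or consonantal.

```python
# Minimal sketch: a phoneme is a minimal unit that distinguishes one word from
# another. Check whether two toy transcriptions form a minimal pair and whether
# the single differing segment is a vowel or a consonant.
# The transcriptions and the vowel set are simplifying, illustrative assumptions.
VOWELS = set("aeiouæɛɪ")

def minimal_pair(word_a, word_b):
    """Return (position, segment_a, segment_b) for a one-segment difference,
    or None if the two forms do not constitute a minimal pair."""
    if len(word_a) != len(word_b):
        return None
    diffs = [(i, a, b) for i, (a, b) in enumerate(zip(word_a, word_b)) if a != b]
    return diffs[0] if len(diffs) == 1 else None

# The book's examples: bad/bed (vowel contrast), bad/sad and bad/bag (consonant contrasts).
for pair in [("bæd", "bɛd"), ("bæd", "sæd"), ("bæd", "bæg"), ("bæd", "bɛg")]:
    diff = minimal_pair(*pair)
    if diff is None:
        print(pair, "-> not a minimal pair")
    else:
        i, a, b = diff
        kind = "vowel" if a in VOWELS and b in VOWELS else "consonant"
        print(pair, f"-> minimal pair, {kind} contrast at position {i}")
```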
Consider that mama is a basic word in the vocabulary of many languages. This is
not an accident: it is because it is so easy for an infant to say. If an infant expels air
through the larynx, with the mouth otherwise in a relatively neutral position, the
result is a vowel, probably one like [a].1 If the infant then temporarily interrupts this
production in the simplest way, by closing and reopening the mouth, the result is a
syllable beginning with a bilabial consonant—ma, or if the closure is more abrupt,
ba or pa. Assigning meaning to such simple early productions is apparently an irre-
sistible option for language communities across the world.2
If we want to know how people understand spoken language, it makes good sense
to begin with the speech sounds and their universal properties. Speech is produced
by expelling air from the lungs through the vocal folds in the larynx to generate an
auditory signal and then modulating this signal by adjusting the configuration of the
reverberant chamber through which the sound passes—the mouth. This process
creates the two different kinds of speech sounds. For the really, really full story, see
Ladefoged and Disner 2012, but here is a brief summary: Vowels are sounds made
without obstruction of the air passage—they differ according to the size and shape
of the vocal cavity as they are uttered. This size and shape is controlled by where
the speaker positions the tongue and whether the lips are protruded or spread. If
there is constriction of the passage of air through the vocal tract, so that it is either
entirely stopped for a moment, or it is modulated by being forced through a very
narrow opening of the throat, teeth, or other articulators, the resulting sound is
called a consonant. Consonants differ according to where the flow of air is obstructed
(place of articulation), how it is obstructed (manner of articulation), and the timing
of voiced phonation during the constriction (voicing). All languages make up their
stock of communicative material on the basis of a small set of speech sounds—some
vowels and some consonants.
Besides these phonetic (articulatory) differences between vowels and consonants,
there are also phonological differences—that is, differences in the role that each
may play in the sound patterns of language. Speech sounds are not uttered in isola-
tion but as part of larger units; the unit where the phonological differences between
vowels and consonants occur is the syllable. The syllable is also an articulatory unit
in the sense that the smallest possible act of speech production is a syllable. In
general, every word of every language must consist of at least one syllable, and every
syllable must consist of at least a nucleus. The nucleus presents the primary vowel-
consonant difference: all vowels can be the nucleus of a syllable, but most conso-
nants cannot. In some languages, sonorous consonants such as [m] or [r] can function
as a nucleus, and in a very tiny number of languages (some of which will turn up in
chapter 5), other consonants can be a syllable nucleus, too. But in very many lan-
guages, only vowels can be the nucleus of a syllable.
Syllables may, but need not, also contain consonants accompanying the nucleus,
either in the preceding (onset) or the following (coda) position. Vowels are near-
obligatory and always central, consonants are permitted and near-universally
peripheral: this is a general statement of syllable structure, although it covers a wide
variation in legitimate syllable types (from languages where the only legal syllable
structure is a single vowel preceded by a single consonant onset, to languages that
allow syllables to be anything from a single short vowel to a long vowel preceded
by a triconsonantal onset and followed by a triconsonantal coda—such as English
screeched [skritʃt], which counts as a single syllable). Phonetic differences in articula-
tion of vowels versus consonants apply in all languages, and so do vowel-consonant
differences in phonological function within syllables.
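The onset-nucleus-coda description above can be made concrete with a small sketch; it treats each symbol of a broad transcription as one segment and uses an illustrative vowel set to locate the nucleus, both simplifying assumptions (the affricate of screeched, for instance, is counted as two symbols here).

```python
# Minimal sketch: split a single syllable, given as a broad transcription string,
# into onset, nucleus, and coda. Character-by-character segments and the vowel
# inventory below are simplifying assumptions for illustration only.
VOWELS = set("aeiouæɛɪ")

def split_syllable(syllable):
    """Return (onset, nucleus, coda): the nucleus is the vowel span,
    consonants before it form the onset, consonants after it the coda."""
    first = next(i for i, seg in enumerate(syllable) if seg in VOWELS)
    last = max(i for i, seg in enumerate(syllable) if seg in VOWELS)
    return syllable[:first], syllable[first:last + 1], syllable[last + 1:]

# A lone vowel, a CV syllable, and English "screeched" with consonant clusters
# on both sides of the nucleus.
for syllable in ["a", "ma", "skritʃt"]:
    print(syllable, "->", split_syllable(syllable))
```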
What do these differences entail for the listener’s task? For speakers, the articula-
tory differences between vowels and consonants are simple: either there is free flow
in the vocal tract or there is constriction. Acoustically, however, and in consequence
perceptually, the effects of the articulatory difference reach further. Phonetic infor-
mation transmitted by an unobstructed airflow is continuous, and hence allows for
greater durational range than information transmitted by a temporary obstruction.
Vowels can vary from quite long to quite short, but a stop consonant such as [b] can
only vary within a much more limited range. Similarly, the crucial portion of the
speech signal of a vowel is a steady state, whereas the crucial portion of the signal
for consonants can be a transition from one state to another. These differences have
perceptual consequences. The longer a sound, and the more steady-state compo-
nents it comprises, the more resistant it is to noise masking; so, for instance, vowels
are sometimes perceived more accurately than are consonants against a noisy back-
ground (see, e.g., Cutler, Weber, et al. 2004), and slips of the ear are more likely to
involve misperception of consonants than of vowels (see, e.g., Bond 1999).
When listeners are asked to decide which speech sound they are hearing—in
other words, to perform phonetic categorization (see panel 2)—differences between
vowels and consonants also arise. Typically, what listeners are presented with
in phonetic categorization is a continuum of speech sounds that has been made
by taking two sounds and gradually morphing the feature(s) distinguishing
them. Thus the continuum runs from the value typical of one of the sounds to the
value typical of the other. Then we can ask listeners to identify sounds along the

2. Phonetic Categorization

In normal speech, listeners hear speech sounds that are mostly reasonably good
exemplars of their categories. But in phonetic categorization, listeners get to hear sounds
that are not at all good category exemplars. It is possible to make an artificial continuum
from one sound to another; the middle of this continuum then consists of sounds that the
listeners presumably have never heard before. But they do not report hearing new sounds.
They often report a sudden switch from tokens of one category to tokens of the other—
“categorical perception” (see figure 1.1).
The phonetic categorization task was developed for phonetic research, but it has also
proven useful in psycholinguistics. For instance, categorical functions can shift if one
decision would make a word but the other would make a nonword. There is a lot more
about research with this task in chapter 12.

continuum (is this [b] or [p]?); or we can ask them to discriminate pairs of sounds
([i], [I]—same or different?). Even when the input is in fact ambiguous between two
categories and corresponds to nothing in the listeners’ prior perceptual experience,
listeners find the task of identifying speech sounds as exemplars of one phoneme
or another simple to do. It has long been known (Stevens et al. 1969; Pisoni 1973)
that experiments of this kind produce different response patterns for vowels and
consonants.
For consonants, the response function that usually appears reflects what is called
“categorical perception”; only within a narrow range does each possible response
receive substantial support, showing that listeners feel unsure only in that small
portion of the continuum. Quite a lot of deviation from the typical value is tolerated
before identification responses start to change from what the typical value itself
receives. There is also a parallel effect in the discrimination responses; differences
can only be well discriminated to the extent that they can be well identified. That
is, although listeners can discriminate well between two exemplars of different cat-
egories, discrimination of exemplars within a category is poor.
For vowels, in contrast, the identification curves are less steep, which suggests that
listeners perceive finer distinctions than they do with consonants. Most importantly,
the discrimination function for vowels is not dependent on the identification func-
tion in the way it is for consonants. Within-category discrimination is at chance (i.e.,
50%) for consonants but quite high for vowels. Listeners seem to be capable of
discriminating small vowel differences even when no difference of category label is
involved. Figure 1.1 depicts this difference in patterning, for one example vowel pair
and one example consonant pair. The identification function shows a less steep
crossover between categories for [i]-[I] than for [b]-[p], whereas the discrimination
function is always above 75 percent for [i]-[I] but hovers around 50 percent for
[b]-[p] except at the category boundary, where it shows a sudden peak.
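One way to see how the two functions are related is to model identification along a seven-step continuum as a logistic curve and to derive the discrimination expected of a listener who retains only category labels; the sketch below does this with illustrative slope values. The label-only prediction used, P = 1/2 + 1/2(p_i - p_j)^2, is the classical approximation for ABX-style tasks, whereas figure 1.1 plots AX data, so the numbers are purely illustrative. A steep, consonant-like curve yields within-category discrimination near chance with a peak at the boundary; a shallow, vowel-like curve yields only a modest peak, and the fact that listeners' real within-category discrimination of vowels stays well above any label-only prediction is exactly the divergence described above.

```python
# Minimal sketch: logistic identification along a 7-step continuum, plus the
# discrimination predicted for a listener who keeps only category labels
# (P = 1/2 + 1/2 * (p_i - p_j)**2, the classical ABX-style approximation).
# Slope values are illustrative; they are not fitted to the data of figure 1.1.
import math

def identification(step, boundary=4.0, slope=1.0):
    """Probability of reporting the right-hand category at a given step."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

def label_only_discrimination(p_i, p_j):
    """Predicted discrimination of two steps if only category labels are kept."""
    return 0.5 + 0.5 * (p_i - p_j) ** 2

for label, slope in [("consonant-like (steep)", 4.0), ("vowel-like (shallow)", 1.0)]:
    probs = [identification(step, slope=slope) for step in range(1, 8)]
    discrim = [label_only_discrimination(probs[i], probs[i + 1]) for i in range(6)]
    print(label)
    print("  identification:", [f"{p:.2f}" for p in probs])
    print("  predicted adjacent-step discrimination:", [f"{d:.2f}" for d in discrim])
```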
Although these differences between vowels and consonants are striking, they are
not really absolute. Vowels and consonants actually range along a continuum called
the sonority hierarchy. On this continuum, vowels are at the most sonorous end,
unvoiced stop consonants at the least sonorous or most consonantal end, and various
continuant consonants range in between. Patterns of responses in categorization
experiments differ with position along the sonority hierarchy. The patterns are even
mirrored by our own experience as speakers—we can utter a continuous vowel
sound and make it change gradually from any one vowel to another, but we would
have enormous difficulty uttering sounds that we would accept perceptually as a
sequence of intermediate steps along a continuum between most pairs of conso-
nants. The articulatory reality of the vowel-consonant distinction, in other words,
translates to a perceptual reality that causes strong category binding for consonants
but a certain degree of category insecurity, or flexibility, for vowels.

[Figure 1.1 appears here: panels "Short vowels" ([i], [I]) and "Bilabial stops" ([b], [p]); y-axes: percent identification and correct AX discrimination (0-100); x-axes: continuum steps 1-7.]
Figure 1.1
Identification functions for vowels varying from [i] to [I], and for stops varying from [b] to
[p], plus discrimination functions for the same continua. (Data from Pisoni 1973; reproduced
with permission.) The identification functions show the percentage of choices in each case of
the category of the endpoint at left (dotted line) and of the category of the endpoint at right
(solid line). The discrimination functions show the percentage of correct discrimination
between two differing tokens within a category (left and right points) versus across the cat-
egory boundary (middle point).

Category insecurity for vowels may seem to conflict with the fact that vowels
are more resistant to noise-masking. But in fact both effects follow from the
continuous, steady-state nature of vowel sounds. Speech presents more acoustic
evidence for vowels. This evidence gives listeners a better chance of recognizing
a vowel against noise, but it also allows them to make a more fine-grained
analysis of the acoustic support for one vowel rather than another. In a phonetic
categorization experiment, where the acoustic evidence actually is mixed from
two sources, listeners have a better chance of appreciating this ambiguity. The lesser
amount of acoustic evidence for consonants, which renders them more likely to
be misperceived (e.g., in noise), also makes it less likely that listeners will accurately
interpret the ambiguous portion of a phonetic continuum. If external evidence
(e.g., visual information about the speaker’s lips) is on offer, listeners make more
recourse to it in making decisions about consonants (Cohen and Massaro 1995)
but base their decisions about vowels on the acoustic evidence. Thus the universal
basis of the vowel-consonant difference surfaces, in speech perception, as a
difference in how listeners make categorical decisions about the two types of
speech sound.

1.3 What Is Language Specific in Listening?

Just about any aspect of language that is neither a consequence of the physiology
or acoustics of speech nor a consequence of the nature of the word-recognition task
can vary crosslinguistically. In fact, whole dimensions of phonological structure may
be language specific, in that they feature in an account of some languages’ phonol-
ogy but are simply irrelevant for the description of other languages.
In contrast to the universal substrate of listening, which is really rather limited,
the extent of language-specificity is certainly larger than we yet know. All different
types of variation have consequences for the listener’s task. Some of the language-
specific phonological phenomena dealt with in this book are: tonal as well as seg-
mental information to distinguish between words (chapter 3); harmonic restrictions
on vowel patterns (chapter 4); constraints on the size of stand-alone words (chapter
5); linking phenomena across words, such as liaison in French (chapter 6); constraints
on what phonemes may occur in sequence (chapters 4, 10, and 11); and there are
many more. Chapter 7, which deals with prosodic structure, covers material that is probably all language specific.
Prosody provides notorious examples of the language-specificity of whole dimen-
sions of structure. One case is stress. Stress can be accentuation of syllables within
words, when it is called lexical stress (e.g., LANGuage; upper case marks the location
of the stressed syllable, in this case the first one); or it can be accentuation of words
within sentences, when it is called sentence stress (compare What did the CAT eat?
with What did the cat EAT?). In languages with lexical stress, the syllables of a
polysyllabic word are not created equal—some syllables may bear accentual promi-
nence whereas others are prohibited from bearing prominence. English is a stress
language, and it would be incorrect for prominence to fall on the second syllable of
language in any English utterance. Perceptually, stressed syllables are more salient
than unstressed syllables, and this alone makes stress a relevant aspect of spoken-
word recognition in languages such as English.
However, lexical stress will not be at all relevant for an account of spoken-word
recognition in Japanese, or Mandarin, or French, or many other languages. Words
in those languages do not contrast stressed with unstressed syllables. Further, not
all languages that do contrast stressed with unstressed syllables within a word do so
in the same way. In some languages, such as Finnish or Polish, stress always falls at
a particular position within the word (“fixed stress”). In other languages, such as
English or Spanish, stress placement varies (“free stress”), so that some words have
stress on an early syllable, others on a later syllable. There are at least as many
fixed-stress as free-stress languages in the world (Goedemans 2003). We will see
that stress provides some telling examples of how listening must sometimes be
language specific (including the second case study in this chapter as well as more
accounts in chapters 2, 4, and 7); but all of the language-specific phenomena that
there are can similarly produce consequences for listening.

1.4 Case Study 1: The Role of a Universal Feature in Listening

Vowels and consonants differ in both their phonetic form and their phonological
function, and these differences hold across languages. The vowel-consonant contrast
is thus a universal part of listeners’ experience. Does this mean that all listeners
show exactly the same response patterns in listening experiments with vowels and
with consonants, respectively? And do the differences between the two types of
speech sound mean that they play different roles in the understanding of spoken
language? For instance, do they differ in their contribution to understanding a word?
We can use the psycholinguist’s repertoire of tasks that provide insight into linguistic
processing to ask whether listeners make different use of the two types of speech
sound when they are recognizing spoken words. An appropriate task in this case is
word reconstruction (see panel 3).

1.4.1 Vowels and Consonants in Word Recognition: Reconstructing Words


Does a wemmen remind you more of a woman or of a lemon? Each of the real
nouns differs from wemmen by a single phoneme. But if listeners are given wemmen
and asked to change a single phoneme to make this nonword into a real word, are
they equally likely to produce either noun as a response? And if they are forced to
change a vowel (i.e., respond with woman), is the task easier or harder than if they
are forced to change a consonant (i.e., respond with lemon)? Word reconstruction,
invented by Van Ooijen (1996), is quite a sensitive task because, as this summary
suggests, it allows more than one view of what listeners will do with an input like
wemmen.
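To make the logic of the task concrete, here is a minimal Python sketch, not drawn from the original studies: the toy lexicon, the phonemic transcriptions, and the simplified vowel set are all invented for illustration. It searches for real words that differ from a nonword by exactly one phoneme, optionally restricting that change to vowels or to consonants, which is exactly the choice the task puts to listeners.

# A minimal sketch of the "neighbor search" that word reconstruction asks
# listeners to perform. Lexicon, transcriptions, and vowel set are invented.

VOWELS = {"e", "o", "a", "i", "u"}  # hypothetical, simplified vowel set

# Hypothetical phonemic transcriptions, one symbol per phoneme.
LEXICON = {
    "woman": ("w", "o", "m", "e", "n"),
    "lemon": ("l", "e", "m", "e", "n"),
}

def reconstructions(nonword, lexicon, change=None):
    """Return words differing from `nonword` in exactly one phoneme.

    `change` may be None (free choice), "vowel", or "consonant"."""
    results = []
    for word, phones in lexicon.items():
        if len(phones) != len(nonword):
            continue
        diffs = [i for i, (a, b) in enumerate(zip(nonword, phones)) if a != b]
        if len(diffs) != 1:
            continue
        changed_is_vowel = nonword[diffs[0]] in VOWELS
        if change == "vowel" and not changed_is_vowel:
            continue
        if change == "consonant" and changed_is_vowel:
            continue
        results.append(word)
    return results

wemmen = ("w", "e", "m", "e", "n")
print(reconstructions(wemmen, LEXICON))               # ['woman', 'lemon']
print(reconstructions(wemmen, LEXICON, "vowel"))      # ['woman']
print(reconstructions(wemmen, LEXICON, "consonant"))  # ['lemon']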
When this task is carried out with Spanish listeners, we find that the two types of
speech sound behave differently. Vowels are significantly easier to replace than
consonants. Cutler et al. (2000) tried out all three types of condition: free choice of
response, forced vowel replacement, and forced consonant replacement. Given a
free choice, listeners were far more likely to replace vowels, thereby turning pecto
into pacto ‘pact’ rather than into recto ‘straight’, or turning cefra into cifra ‘number’
rather than into cebra ‘zebra’. When they had to replace vowels (i.e., the only correct
responses were pacto and cifra), their reaction times (RTs) were faster than when
they had to replace consonants (i.e., respond recto, cebra); in the latter condition
they also made more intrusion errors in which vowels were accidentally replaced
after all.
One thing we know about Spanish is that vowels and consonants are not equally
represented in the phoneme inventory of the language.

3. Word Reconstruction
[Panel illustration: hear nonword → reconstruct word → speak word]
Word reconstruction (WR) is a laboratory task in which listeners have to change a nonword into a real word by altering a single sound. The subject responds by pressing a
response key as soon as a real word has been found, then speaks the word aloud (so that
the experimenter can check the accuracy of the response). WR is one in a family of
similar tasks (another member of the family is detection of mispronunciations). The
input in WR partially corresponds to known words, and our natural response is to look
for the nearest word. So the RT tells us how easy the nearest word is to find. Because this
can be a relatively hard task, subjects sometimes fail to come up with a response within
the available time window or their response is incorrect; so the success rate can also be
informative. Results are typically reported in both forms: RTs and response rates. This
task was used to look at the processing of vowels and consonants—which type of
phoneme constrains word identity more strongly? If we hear wemmen, is the nearest word
woman? Or lemon? It has also been used to look at units of prelexical representation (this
research is described in chapter 12). Response patterns in WR tend to be similar across
languages.

Spanish has a grossly unbalanced inventory, with five vowels and twenty consonants. So perhaps the asymmetric
pattern of response in the word-reconstruction task is related to the asymmetric
distribution of Spanish phonemes across the two major phoneme classes? For
instance, given that Spanish listeners have only five vowels to choose from, trying
out the five vowels might more easily yield the target word than trying out the
twenty Spanish consonants.
In that case, we would have to expect a different pattern of results if we carried
out the experiment in, say, Dutch. With a relatively balanced set of sixteen vowels
and nineteen consonants, the makeup of the Dutch phoneme inventory differs sig-
nificantly from the asymmetric Spanish situation. Cutler et al. (2000) indeed carried
out the same experiment in Dutch. The pattern of results turned out to be exactly
the same as in Spanish: vowels were easier to replace than consonants. Listeners
found it easier to turn hieet into hiaat ‘hiatus’ than dieet ‘diet’, and easier to turn
komeel into kameel ‘camel’ than komeet ‘comet’. Again, the pattern was robust across
the various conditions in the experiment.
Dutch is so close to having a balanced repertoire of vowels versus consonants
that the vowel advantage here cannot be ascribed to an imbalance in the phonemic
inventory. The vowel advantage also appears in English (woman, not lemon, is the
preferred response to wemmen; teeble gives table, not feeble, and eltimate gives ulti-
mate, not estimate; Van Ooijen 1996). Van Ooijen considered several reasons for the
asymmetric results in her original study.
One suggestion was that English vowels might give rather unreliable information
about word identity because they vary so much across dialects. Consider how the
three words look, luck, and Luke are pronounced in American or standard southern
British English. The three are different. But in Scottish English, look and Luke
are pronounced the same, and they contrast with luck. In Yorkshire English,
look and luck are pronounced the same, and they contrast with Luke. Nor do Ameri-
can and British vowels always run parallel. British English uses the same vowel in
pass and farce—these words rhyme, and they contrast with gas. American English
uses the same vowel in pass and gas, which contrasts with the vowel in farce.
On this explanation, English listeners would find it easier to change vowels
because of their experience in adjusting between other dialects and their native
dialect, whereby most of the adjustments involved vowels. The Dutch-Spanish study
ruled this suggestion out, however. In Dutch, both vowels and consonants vary
across dialects. In Spanish, the five vowels remain pretty much the same across the
dialects of Peninsular Spain versus Latin America, but the consonants differ quite
a lot, the most well known difference being seen in words like gracias, where the
middle sound is spoken by Castilian speakers as [θ] (as at the end of English path)
but by Latin American speakers as [s] (as in pass). So a dialectal explanation would
predict equivalent word-reconstruction effects for vowels and consonants in Dutch,
and a consonant advantage for Spanish. This was not what happened: Both lan-
guages showed a vowel advantage, as had English.
A second reason suggested by Van Ooijen (1996) was likewise language specific:
Perhaps English just has too many vowels for listeners to be able to tell them apart
reliably. Certainly British English, with seventeen vowels, does not make vowel
discrimination easy for listeners. But because the pattern of results was essentially
identical in English (seventeen vowels), Dutch (sixteen vowels), and Spanish (five
vowels), we have to reject the vowel inventory size account, too.
Only the third explanation was based on articulatory and acoustic effects that
occur in all languages and thus could correctly predict the universally consistent
pattern observed. On this suggestion, vowels are intrinsically more likely than con-
sonants to vary in different speech contexts. Change in the articulatory form of a
vowel due to influence from surrounding consonants is more likely than change in
a consonant due to influence of surrounding vowels. There are articulatory reasons
for this: the articulation targets for consonants involve closures (of lips, tongue, etc.),
whereas the articulation targets for vowels require that the vocal tract be open but
in a certain shape or configuration. In speaking, the vocal tract configurations that
produce vowels are notoriously approached and departed from at speed and with
great variation induced by the closures surrounding them. This means that a vowel,
especially a short vowel, can take quite different acoustic forms, depending on which
consonants precede and follow it. Listeners should, as a result, have become used
to the frequent experience of making an initial hypothesis about a vowel that turned
out to be wrong. They have developed, in consequence, a certain facility in altering
these initial hypotheses in the course of word recognition. This explanation in terms
of universal properties of speech is the best account of the crosslinguistically con-
sistent results.

1.4.2 Vowels and Consonants in Word Recognition: Initial Activation


The universal account of the word-reconstruction findings assumes that what matters
to listeners is identifying words, as rapidly and efficiently as possible. There is a
paradigm that allows us to test whether vowels and consonants contribute in the
same or different ways to listeners’ growing awareness of word identity as spoken
words are heard. Suppose we hear diff-; it could be the beginning of different, or
difficult, or diffident. If the next sound to arrive is [ə], then the word is more likely
to be the first of these and not the second or third. If the next sound is not [ə] but
[I], then different is ruled out, but either difficult or diffident is still possible; the
listener has to wait for another phoneme to come in, either the [k] of difficult or
the [d] of diffident. In other words, sometimes vowel information distinguishes
between alternative continuations; sometimes consonant information does. There
are laboratory tasks for looking at how quickly phonemic information is processed
in listening, and we can use them to compare the effect of vowel versus consonant
distinctions. An appropriate technique for this is cross-modal fragment priming (see
panel 4). If we speak or hear a word once, it is easier to speak or hear it once again
shortly afterward; we call this effect “priming.” (Priming is a useful phenomenon
that helps make conversation easy. For instance, priming probably underlies the
tendency of speakers to reuse their interlocutors’ words and expressions in conver-
sation; Schenkein 1980.) As the word “cross-modal” in panel 4 suggests, priming
happens even if the word is processed once in one modality (e.g., it is heard) and
once in another (e.g., it is read). And “fragment” priming refers to the fact that a
part of a word will produce priming, too. So spoken diff- will prime written versions
of all of different, difficult, and diffident, to a certain extent.
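The incremental narrowing just described can be pictured as a simple prefix filter over a set of word candidates. The sketch below is only an illustration of that idea, not a model proposed in this book; the ASCII-style transcriptions and the three-word candidate set are hypothetical.

# A rough sketch of incremental narrowing: as each phoneme of "diff..."
# arrives, candidates that no longer match the input so far are dropped,
# whether the decisive phoneme is a vowel or a consonant.
# Transcriptions use a simplified, invented ASCII notation ("@" = schwa).

CANDIDATES = {
    "different": "dIf@r@nt",
    "difficult": "dIfIk@lt",
    "diffident": "dIfId@nt",
}

def narrow(input_so_far, candidates):
    """Keep only candidates whose transcription starts with the input heard so far."""
    return {w: t for w, t in candidates.items() if t.startswith(input_so_far)}

heard = ""
for phoneme in "dIfId":          # hear d, I, f, I, d one at a time
    heard += phoneme
    remaining = narrow(heard, CANDIDATES)
    print(heard, "->", sorted(remaining))
# After "dIfI" only difficult/diffident survive; the next phoneme (d vs. k)
# settles it, just as a vowel (@ vs. I) settled it one step earlier.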

4. Cross-Modal Fragment Priming

[Panel illustration: hear prime → see target → lexical decision on visual target]
Cross-modal priming (CMP) is priming because it examines responses to one stimulus (the target) as a function of a preceding stimulus (the prime), and it is cross-modal
because the prime is heard and the target is seen. Typically, the task is to decide whether
the target is a real word or not. CMP is a way of investigating what words become
available (“activated”) when a listener hears speech. We measure the RT to make a lexical
decision on the target; that is, to decide whether the target is a real word. If this RT varies
when the spoken prime varies, then we have observed an effect of the prime. Related
primes and targets may be identical (e.g., give-GIVE) or related in meaning (take-GIVE).
The RT to accept the target word is expected to be faster after either of the related primes
than after a control prime (say-GIVE). More information on the task can be found in
Zwitserlood 1996 and in panel 7 (see chapter 2).
Cross-modal fragment priming (CMFP) is a version of CMP in which the prime is not
fully presented. The initial fragment of a word may also be consistent with other words
(e.g., diff- could become difficult or different; oc- of octopus could be the beginning of
octave or oxygen). CMFP enables us to see, for example, whether all possible words begin-
ning as the prime does are momentarily activated or whether factors such as the sentence
context can rule some of them out. It can also be used (as in the examples in this chapter)
to find out which types of mismatching information most strongly affect activation.
CMFP has also been explored with ERPs, which are measures of the brain’s electrophysi-
ological responses to stimuli (e.g., Friedrich, Schild, and Röder 2009).

Soto-Faraco, Sebastián-Gallés, and Cutler (2001) compared vowels and consonants in this task. Their experiment was in Spanish. We have already seen that
Spanish has a very asymmetric repertoire of vowels (few) and consonants (four
times as many), and that Spanish listeners are sensitive to the realization of vowels
and consonants in speech, as revealed in word reconstruction. In cross-modal
priming, we can look more directly at word activation and compare vowel versus
consonant cues to the right continuation. The vowel-consonant experiment com-
pared pairs of words that began similarly, such as protector ‘protector’ and proyectil
‘projectile’, or minoria ‘minority’ and mineria ‘mining’. The beginnings pro- and min-
should prime both members of the respective pair. Then the next phoneme to come
in will distinguish between the two words, further supporting one of the two but
mismatching the other. Will there be any difference when this phoneme is a conso-
nant (as in the pro- pair) versus a vowel (as in the min- pair)? Visual lexical decision
responses (e.g., to PROTECTOR or MINORIA) were measured after primes like
prote-, proye- or mino-, mine-, compared with control primes; the primes always
occurred at the end of neutral (nonconstraining) sentences such as Nadie supo leer
la palabra proye- ‘Nobody knew how to read the word proye-’.
No vowel-consonant difference appeared. In both cases the effect of one mis-
matching phoneme was highly significant. Compared to the baseline of responses
after hearing an unrelated fragment, listeners responded significantly faster when
the fragment matched the word on the screen (mino- MINORIA, or prote- PRO-
TECTOR) but responded significantly slower when a phoneme mismatched the
visual word (mine- MINORIA or proye- PROTECTOR). It did not matter whether
the mismatching phoneme was a vowel or a consonant. It also did not matter how
close the phonemes were to one another—small differences (as in the fourth sounds
of concesion ‘concession’ versus confesion ‘confession’, where only one phonological
feature, place of articulation, distinguishes [θ] from [f]), or large differences (as in
the [t] versus [j] of protector-proyectil), the effect was the same. In other words, what
matters here is distinguishing between words. It does not matter whether the differ-
ence that effects the distinction is a big one or a small one, in terms of phonemic
features. And it certainly does not matter whether the difference is in a vowel or a
consonant. Both do the same job with the same efficiency in the same way.

1.4.3 Detecting Vowels or Consonants: Effects of Phonetic Context


So far, the universal vowel-consonant contrast has provided cleanly universal
response patterns in all studies. Where the task is evaluating speech to distinguish
between words, any relevant information, be it vocalic or consonantal, will always
be seized on and used. Where the task is altering a sound to turn a nonword into a
real word, vowel changes are always tried first because changing vowels is a more
familiar experience to all. The latter results show that listeners are very sensitive to
the way that phonemes in speech are influenced by the other phonemes that sur-
round them. In general, across languages, vowels are influenced more. Consonants
are less likely to be so altered that they are initially misidentified—at least, so those
word-reconstruction results suggest.
Nonetheless, the more vowels there are in a language, the more variety there will
be in their influence on adjacent consonants. Can we find a way of seeing whether
this affects how the phonemes are processed?
The processing of individual phonemes can be examined with the phoneme detec-
tion task (see panel 5). This is one of the simplest tasks in psycholinguistics; all the
listener has to do is press a button whenever a particular sound is heard. The speed
with which this response can be made—the reaction time—is the measure of how
difficult the task is at a given point.

5. Phoneme Detection and Fragment Detection

These detection tasks, in which subjects hearing speech listen for a target phoneme or
fragment, are among the simplest of psycholinguistic tasks. RT is the dependent variable.
Phoneme detection involves pressing a button whenever a particular sound is heard, such
as the sound [a] in the example in the drawing. This is very easy to do and the speech input
does not have to be understood for the task to be efficiently performed—the input can just as well be nonwords or words of a language unknown to the listener. The same goes for fragment detection—for example, responding to hearing the sequence bal-. These
tasks have been around for over forty years and it was once thought that they could
provide a direct measure of prelexical processing. Now their interpretation does not seem
quite so simple (chapter 12 discusses this in more detail), but the tasks are still widely
used. They can reflect how easy it is, at a given point, to extract sublexical information
(i.e., information below the word level) from speech. Thus they can reflect segmentation,
or how easy a preceding word or phoneme was to process, and so on.

Phoneme-detection times for different speech sounds can vary. For English listen-
ers, it is often easier to detect consonants than vowels (Van Ooijen, Cutler, and
Norris 1991; Cutler and Otake 1994). But this is not the case for listeners in all
languages. In Cutler and Otake’s study, for instance, Japanese listeners found detec-
tion of [n] in words like inori, kinshi and [o] in words like tokage, taoru equally easy,
whereas English listeners presented with the same Japanese words (nonwords to
the English) detected the [n] targets faster than the [o] targets, just as they also
detected [n] in canopy, candy faster than [o] in atomic, kiosk in a further experiment
in their native language.3 There is certainly no systematic difference across lan-
guages in response times for vowels and consonants. Chapter 2 presents more dis-
cussion on the way language structure affects the processing of individual
phonemes.
However, certain factors are known to make phoneme detection easier or harder
in general. One of them is that the less uncertainty there is in the context, the easier
the task becomes. Thus detection of [b] is faster in a string of nonsense syllables like
su, fu, tu, bu, with a constant vowel, than in si, fo, ta, bu, with four varying vowels
as context. It is slower still if the context can vary across eight vowels (Swinney and
Prather 1980). So it is possible to compare the effect of vowel uncertainty on con-
sonant detection, which Swinney and Prather discovered, with effects of consonant
uncertainty on vowel detection. Is detection of [i] harder in su, fo, ta, bi than in bu,
bo, ba, bi? If so, is the effect the same as the effect in consonant detection, or perhaps
stronger, or weaker? And is the effect independent of the relative number of vowels
and consonants in the language?
Costa, Cutler, and Sebastián-Gallés (1998) discovered that the effects indeed
depended on which language the experiment was conducted in. They compared
detection of vowels in varied versus fixed consonant contexts, and detection of
consonants in varied versus fixed vowel contexts, in two languages. In Dutch, strong
effects of consonant uncertainty were observed in vowel detection, and equally
strong effects of vowel uncertainty were observed in consonant detection. So the
two types of speech sound were equivalent as context for each other. But this was
not the result that Costa et al. (1998) found in the other language in which they
tested—namely, Spanish. In Spanish, the effect of consonant uncertainty on vowel
detection was much stronger than the effect of vowel uncertainty on consonant
detection.
Figure 1.2 shows Costa et al.’s (1998) results. Note that the actual number of
varying phonemes in the experimental contexts in their study was held constant—at
five vowels and five consonants—across the two languages. Thus there was no actual
difference in the variability within the experiment—it was exactly the same for the
Dutch and for the Spanish listeners. The nonsense syllables were also the same, and
equally meaningless for each group. The difference in the result could therefore not
be ascribed to anything in the experiment itself. Instead the difference had to be due to the languages in question—or, more precisely, to the language experience of the listeners who took part.

[Figure 1.2: variable-minus-constant RT difference (ms) in target detection, for consonant and vowel targets, plotted for Dutch and Spanish listeners.]

Figure 1.2
Inhibitory effect of variable context (variable-context RT minus constant-context RT) in detection of consonant and vowel targets. For Dutch listeners, variable consonant contexts threaten vowel perception about as much as vice versa. For Spanish listeners, variable consonant contexts threaten vowel detection much more than the reverse. (Data from Costa et al. 1998.)
As we saw, Dutch and Spanish differ in the makeup of their phonemic inventories.
Dutch has a relatively balanced set of vowels and consonants, whereas Spanish has
four times as many consonants as vowels. All their lives, the Spanish listeners had
heard vowels being affected by twenty different consonant contexts, and consonants
being affected by only five different vowel contexts. Thus even though actual vari-
ability did not differ in this particular experiment, these listeners were well aware
that the range of variability was greater in one direction than the other. The Dutch
listeners, on the other hand, had heard vowels being affected by many different
consonant contexts throughout their life, and consonants being affected by many
different vowel contexts, so that in their experience the range of variability was
pretty much equivalent. These expectations translated into equivalent uncertainty
effects in each direction on the Dutch listeners’ phoneme detections but significantly
greater effects in one direction than in the other for the Spanish listeners.
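The measure behind these comparisons (plotted in figure 1.2) is simple arithmetic: mean detection RT in variable contexts minus mean detection RT in constant contexts, computed separately for vowel and consonant targets. The sketch below illustrates that computation; the per-trial RT values are invented placeholders, not data from Costa et al. (1998).

# A sketch of the context-uncertainty effect: mean detection RT in variable
# contexts minus mean RT in constant contexts, for each target type.
# All RT values below are invented for illustration.

from statistics import mean

# Hypothetical per-trial RTs in milliseconds, keyed by (target type, context type).
rts = {
    ("vowel", "constant"):     [452, 460, 448],
    ("vowel", "variable"):     [530, 541, 525],
    ("consonant", "constant"): [470, 465, 473],
    ("consonant", "variable"): [521, 530, 515],
}

for target in ("vowel", "consonant"):
    effect = mean(rts[(target, "variable")]) - mean(rts[(target, "constant")])
    print(f"{target} targets: variable-context cost = {effect:.0f} ms")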

1.4.4 A Universal Feature in Listening: Summary


All languages make up their words from a mixture of vowels and consonants. Speech
sounds differ in how they are articulated, and this can be expressed as a continuum
of sonority, with consequences for perception. Sounds at the vowel end of the con-
tinuum are realized with more acoustic evidence but, in turn, allow more room for
influence of surrounding consonants. So their initial interpretation is easier, but
changing that interpretation is often necessary, thus also easy. Sounds at the conso-
nant end of the continuum are less acoustically present, so an initial interpretation
can be harder, but they are less contextually variable so that the initial interpretation
can be more secure.
Insofar as vowels and consonants perform the same function—distinguishing one
word from another, for example—the way listeners process them seems to be much
the same. Differences are attributable to acoustic realization rather than to lan-
guage-specific structure. The cross-modal priming studies suggest that vowels and
consonants equally effectively mismatch or match candidate words. The word-
reconstruction studies show that listeners across different languages find it easier to
change vowel hypotheses than consonant hypotheses, which we interpret as reflect-
ing their experience of what happens to vowels in speech. Listeners are capable of
discriminating the subtle changes that result from contextual influence on vowels,
so they know that vowels are quite changeable as a function of the speech context
surrounding them. Experience of altering decisions, from one vowel category to
another, accrues as an inevitable consequence of the acoustic realization of vowels
and its implications for perception. Just one cross-language difference was discussed,
and it appeared not in a word-recognition task but in phoneme detection, where it
reflected the influence of phoneme-inventory makeup on listeners’ expectations of
how target phonemes can vary. Listeners with a balanced vowel-consonant reper-
toire expected equivalent variation for vowel and consonant targets; listeners with
an asymmetric repertoire expected asymmetric variation.
All the evidence suggests that speech perception is very sensitive to listener
experience, especially the experience of speech acoustics. Speech signals are evalu-
ated continuously, and any available information that is useful is exploited as soon
as it arrives. Later chapters will bolster this claim with many more types of evidence
(including evidence that the contextual influences of consonants on vowels actually
provide listeners with valuable information about the consonants—see chapter 3).
Vowels and consonants perform different functions in the structure of syllables, as
we saw, but this does not seem to be relevant for speech perception. It may well be
more important for speech production, given that producing speech requires compil-
ing phoneme sequences into syllables in a prosodic structure (Levelt, Roelofs, and
Meyer 1999). Clinical evidence from production indeed suggests a vowel-consonant
dissociation: Caramazza et al. (2000) reported on an aphasia patient whose vowel
production was disrupted but whose consonant production was unimpaired, and
another patient with a differently located lesion and exactly the reverse pattern of
impaired production. These patients had no impairments, and no differences in their
response profiles, in word-recognition and perceptual-discrimination tasks. Percep-
tion studies in general show no vowel-consonant differences in the brain. For
instance, PET scans made of listeners’ brains while they were carrying out word
reconstruction (Sharp et al. 2005) revealed the same location of brain activation
during vowel versus consonant replacement. Only the amount of activation differed:
there was more for consonant replacement than for vowel replacement, as would
be expected given that the former task is harder. When patients whose brains were
being mapped prior to surgery made same-different judgments on simple syllables
such as pob, tob, direct electrical stimulation of an area in the left hemisphere dis-
rupted consonant processing but hardly affected vowel processing (Boatman et al.
1994; 1997). No region was found in which vowels were disrupted but consonants
were not, however. Thus it does not seem that perception of consonants and vowels
engages separate cortical systems, but rather that processing the two types of speech
sound, with the differing types of acoustic evidence they provide, can cause differ-
ential activation within a single speech perception system.
Vowels and consonants are the phonetic building blocks of all languages. However,
this case study shows that we could not have fully understood vowel-consonant
differences without having looked at more than one language. Only the cross-
language comparisons enabled us to interpret the word-reconstruction and pho-
neme-detection results.
The counterpart argument to this is that language-specificity does not rule out
insight into universals of processing. This is the lesson of a second case study, which
involves a structural feature that is unquestionably language specific: lexical stress.

1.5 Case Study 2: The Role of a Language-Specific Feature in Listening

Although stress is not a universal feature of languages, and free stress even less so,
a comparison of spoken-word recognition in three free-stress languages (Spanish,
Dutch, and English) nonetheless proves very informative about some universal
characteristics of speech processing. The question at issue is whether stress differ-
ences between words play an important role in lexical activation. But the answer
concerns not only languages with free stress, because it turns out to concern the
vocabulary (and all languages have a vocabulary!).
The relation between stress and phonemes differs across these three languages.
Phonemic distinctions, by definition, distinguish words from one another. Spanish
casa and capa differ by a single phoneme, as do casa and caso, and English case and
cape, or cape and cope. When listeners determine that they are hearing casa and not
one of these other, minimally different, words, they are without question processing
phonemic information.
In some languages, stress differences go hand in hand with phoneme differences.
If this were taken to an extreme, so that the phonemes in stressed syllables always
differed from those in unstressed syllables, listeners could extract stress information
from words as a byproduct of processing phonemic information, which they need
to do anyway to distinguish words. The relation between stress and phonemic seg-
ments is actually not so deterministic in any of the three languages discussed here.
There is quite a strong relation in English, as we shall see, and a somewhat less
strong relation in Dutch. In Spanish, however, there is no necessary relation between
stress and phonemes at all. The distinctions between stressed and unstressed sylla-
bles are solely suprasegmental (i.e., variations not at the level of phonemic segments
but above it); the same phonemic segments are differently realized when they bear
stress than when they do not. Thus CAso differs from caSO in the suprasegmental
attributes fundamental frequency, duration, and amplitude. But the two words do
not differ in vowel quality—they have the same vowels.
Spanish is therefore a good language in which to examine the role of stress in the
recognition of spoken words, because the cues to stress are not mixed up with the
cues to phonemes. If listeners extract stress information from the signal in order to
recognize Spanish words, they are exploiting aspects of the signal that are not neces-
sarily involved in phonemic discrimination.

1.5.1 Lexical Stress in Word Recognition: A Comparison with Vowels and Consonants
Consider the Spanish words principe ‘prince’ and principio ‘principle’—they begin
with the same initial string of seven phonemes. But it is not quite identical. This is
because of stress. Principe is stressed on the first syllable and principio on the
second. Can listeners use stress information to distinguish between words in the
same way as they can use vowel and consonant information?
Yes, they can, as Soto-Faraco et al. (2001) discovered in the same study in which
they compared pairs of words distinguished first by vowel differences or by conso-
nant differences. Soto-Faraco et al.’s study also included pairs like principe-principio
in which the first distinction was realized by stress. Compared with responses after
a control fragment, the lexical decisions to the visually present words were signifi-
cantly faster when fragment and word matched in stress (e.g., PRINci-, PRINCIPE
or prinCI-, PRINCIPIO) but significantly slower when they mismatched (e.g.,
prinCI-, PRINCIPE or PRINci-, PRINCIPIO). Thus the stress information
favored the matching word and disfavored the mismatching word in just the same
way as the phonemic information had done in pairs like protector-proyectil and
minoria-mineria.
We saw earlier that Spanish listeners and Dutch listeners differed in some aspects
of their processing of vowels and consonants (they had language-specific expecta-
tions about the relative contextual variability of the two speech sound classes).
Perhaps they also differ in their processing of stress—for instance, they may have
language-specific expectations about permissible patterns of stress. Donselaar,
Koster, and Cutler (2005) conducted a cross-modal fragment priming experiment
in Dutch that closely resembled Soto-Faraco et al.’s (2001) stress study. They used
pairs of Dutch words like octopus ‘octopus’ and oktober ‘October’. Both words begin
with the same first two syllables, octo, with the same vowels; but the first syllable is
stressed in octopus and the second in oktober. The results of their experiment exactly
paralleled the results found by Soto-Faraco et al. Responses were significantly faster
after matching primes (OCto-, OCTOPUS or okTO-, OKTOBER) and significantly
slower after mismatching primes (OCto-, OKTOBER or okTO-, OCTOPUS) than
after a control prime.
Thus there is no difference in the use of stress information in Spanish and Dutch
word recognition. Both Spanish and Dutch are stress languages, and in both lan-
guages listeners can use the stress information in a couple of syllables to reduce
their set of lexical choices—including some phonemically matching candidates and
excluding others.

1.5.2 Lexical Stress in Word Recognition: Language-Specificity


Now imagine (especially if you are an English native speaker) conducting the same
experiment in English. Like Spanish and Dutch, English is a language in which stress
distinctions can be the only difference between two unrelated words. Just as Spanish has CAso and caSO, or BEbe and beBE, English has minimal pairs: INsight
versus inCITE, or FORbear versus forBEAR, FOREgoing versus forGOing, TRUSty
versus trusTEE. To be strictly correct, there are not that many more than these if
the two members of the pair have to be unrelated in meaning—in all, there are
maybe a little more than a dozen pairs in the language. But there are not that many
minimal pairs in Spanish or Dutch, either. No stress language has many such unre-
lated minimal pairs, it turns out. The usefulness of stress in word recognition is not
really in distinguishing minimal pairs but in distinguishing between whole sets of
words beginning with stressed versus unstressed versions of the same syllable—
PRINci- versus prinCI-, OCto- versus okTO-, and so on. This reduction of choices
really helps.
So to do this experiment in English, we ideally need pairs of words beginning
with the same two syllables, with the same vowels, where stress falls on the first
syllable in one word and on the second syllable in the other—like the PRINci-
versus prinCI- of principe and principio, or the OCto- versus okTO- of octopus
versus oktober. The reader should now take a pause and generate pairs of such
words in English.
The results are necessarily disappointing. Apart from the minimal pairs (and there
aren’t enough of them for a good experiment!) there are no such word pairs in
English. The first syllables can often be matched (one stressed, one unstressed,
otherwise phonemically identical) but then not the second. Consider English October
versus octopus. These are quite similar to the cognate pair in Dutch. Both begin with
oc-, and this syllable is stressed in octopus but unstressed in October. In the first
syllables, the vowels are the same. But in the second syllables, the vowels in English
are not the same (unlike the Dutch case). In English October, the stressed second
syllable is to, pronounced [to]. In English octopus, however, the unstressed second
syllable is not [to]. It is [tə]. An unstressed syllable that follows a stressed syllable
in English, especially if the word is longer than two syllables, is nearly always going
to be reduced. This makes for a phonemic difference between octopus and October,
on top of the stress difference. As we discussed earlier, listeners always have to pay
attention to phonemic differences; so if they have a phonemic difference to distin-
guish these two words, how can we tell whether they are attending to the stress
difference or just to the phonemic difference? In order to tell whether they use
stress, we have to find a case where the suprasegmental stress difference is all that
can be used.
Well, there are such cases in English after all, and they are not completely differ-
ent from the Spanish and Dutch pairs. Soto-Faraco et al. (2001) had some Spanish
pairs that differed not in first versus second syllable stress, but in second versus third,
such as eSTAtua ‘statue’ versus estaTUto ‘statute’. Donselaar et al. (2005) likewise
had pairs in Dutch like paRAde ‘parade’ versus paraDIJS ‘paradise’. If one goes a
step further and allows first-syllable stress to contrast with third-syllable stress, then
English provides pairs in which the first two syllables differ only in stress. Indeed,
Donselaar et al.’s Dutch materials even included some such pairs—for example,
DOminee ‘minister’ versus domiNANT ‘dominant’. Third-syllable stress in an English
word necessarily means a secondary stress on the first syllable, because English does
not allow a word to begin with two weak syllables in a row. (Again, this requirement
is language specific—some free-stress languages, such as Dutch, do allow words in
which the first and second syllables are both weak, such as the Dutch word tegelijk,
‘simultaneously’.)
In English word pairs like admiral versus admiration, or elephant versus elevation,
there is therefore a contrast between secondary versus primary stress on the first
syllable. In both cases, the second syllable is reduced. The first syllable of both
admiral and admiration is [æd] and the second syllable of each is [mə]; but admiral
is stressed on the first syllable, whereas admiration is stressed on the third, which
triggers secondary stress on ad-. This is the only kind of contrast that one can
examine in an English experiment modeled on the Spanish and Dutch studies.
Cooper, Cutler, and Wales (2002) conducted an English experiment just like the
Spanish study of Soto-Faraco et al., using pairs like admiral and admiration, or
elephant and elevation. The listeners (Australian students) heard fragments of these
words at the end of nonconstraining sentences like I can’t believe he can’t spell admi-;
their responses to a visual lexical decision target such as ADMIRAL or ADMIRA-
TION were measured.
What they found was that English listeners could indeed make use of the stress
information, but they appeared to rely on it to a lesser extent than Spanish or Dutch
listeners did. Specifically, responses after a matching prime (e.g., ADmi-, ADMIRAL)
were significantly faster than after a control prime. But a mismatching prime (e.g., admi-, ADMIRAL) did not make responses slower than in the control condition. In other words, the English listeners could not use stress information as effectively to reject alternative interpretations of the input.

[Figure 1.3: match facilitation and mismatch inhibition, as percent difference from the control condition, for Spanish, Dutch, and English.]

Figure 1.3
In comparable cross-modal fragment priming studies in Spanish, Dutch, and English, primes that match the target in stress pattern facilitate responses in all languages. Mismatching stress, however, produces inhibition in Spanish and Dutch but not in English. (The figure shows the difference between RT given matching or mismatching prime, and RT given control prime, expressed as percentage of control condition RT. Data from Soto-Faraco et al. 2001; Donselaar et al. 2005; Cooper et al. 2002.)
Figure 1.3 summarizes the facilitation due to stress match, and the inhibition due
to stress mismatch, across the three experiments with comparable bisyllabic frag-
ment primes. The English results resemble the results from other languages in the
matching condition (although the amount of facilitation in English is less than in
Dutch and Spanish), but they are really quite different in the mismatching condition
(where the English effect is almost nonexistent). Thus all three languages have free
stress, yet the degree to which listeners use the free-stress information in lexical
activation differs across the three.
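The normalization used for that comparison (and plotted in figure 1.3) can be written out in a few lines: the match and mismatch effects are expressed as a percentage of the control-condition RT, which makes facilitation and inhibition comparable across experiments run in different languages with different overall response speeds. The RT values in the sketch below are invented, not the published means.

# A sketch of the percent-of-control normalization described in the
# Figure 1.3 caption. All RT values here are invented placeholders.

def percent_effects(rt_match, rt_mismatch, rt_control):
    """Return (facilitation, inhibition) as percent of the control RT.

    Positive facilitation = match condition faster than control;
    positive inhibition  = mismatch condition slower than control."""
    facilitation = 100 * (rt_control - rt_match) / rt_control
    inhibition = 100 * (rt_mismatch - rt_control) / rt_control
    return facilitation, inhibition

# Hypothetical mean RTs (ms) for one language's stress-priming conditions.
fac, inh = percent_effects(rt_match=590, rt_mismatch=655, rt_control=630)
print(f"facilitation: {fac:.1f}%  inhibition: {inh:.1f}%")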

1.5.3 Stress in Word Recognition: The Source of Cross-language Variability


What is responsible for this crosslinguistic difference? Is it due to characteristics of
the listener populations? Or to differences in the experiments? Or should we ascribe
it to the languages themselves?
1. First, consider the listeners and how they activate words. Are English listeners
simply less sensitive to mismatching information in word recognition? No, because
evidence from other experiments reveals that they have just as much sensitivity to
mismatch as listeners in other languages. For instance, in a cross-modal fragment
priming study in which English listeners responded to, say, DEAF after the prime
daff- from daffodil, the vowel information was used rapidly and the potential com-
petitor word was rapidly rejected. English listeners are apparently just as sensitive
to mismatching segmental information as Soto-Faraco et al.’s Spanish listeners were
when they inhibited minoria after hearing mine- from mineria.4
2. Then could differences in the experiments have played a role? The experimental
design was closely matched across the three studies, but the materials could not be
exactly the same. Recall that the type of stress contrast tested in the English experi-
ment was primary versus secondary stress in ad- from admiral versus admiration,
because pairs differing in primary stress versus no stress (as in prin- from principe
versus principio) cannot be found in English. But pairs such as admiral/admiration
also occurred in the Spanish and Dutch experiments, as we saw, and they showed
stress effects no less than the pairs such as principe/principio or octopus/oktober.
Donselaar et al. explicitly tested the type of stress contrast in their statistical analy-
ses; pairs such as DOminee-domiNANT contributed 45 milliseconds of facilitation
and 30 milliseconds of inhibition. Post hoc analyses of Soto-Faraco et al.’s materials
revealed that pairs such as eSTAtua/estaTUto likewise produced significant facilita-
tion and inhibition effects.
3. Could the fault be with English speakers and the cues they provide to stress?
Perhaps the information in the signal was simply not as useful in the English experi-
ment as in the others. But, in fact, acoustic measurements reported later by Cutler
et al. (2007) showed that the fragments in the Cooper et al. (2002) study differed
significantly in all dimensions in which stress is cued (fundamental frequency, dura-
tion, and amplitude), with an effect size large enough to be extremely useful to
perceivers.
4. Could the difference then arise from the use of stress as an interword distinction
across languages—that is, from the rarity of stress-based minimal pairs such as
trusty/trustee in English? No again, because such pairs are vanishingly rare in Spanish
and Dutch, too.
5. Could English listeners’ disregard of stress information then have arisen because
English allows vowel reduction, so that stress covaries with vowel quality in words?
No, again this alone cannot be the explanation, because Dutch has vowel reduction,
too. Spanish does not, but as we saw, the effects of stress in word recognition were
similar in Spanish and Dutch. It is only English that is different.
6. Thus a language-based difference between English on the one hand and Dutch
and Spanish on the other is the most likely source of the asymmetric results pattern.
Cooper et al. argued that the crucial difference was the rarity, in English, of unstressed
syllables with full vowels. Unstressed English syllables nearly always have weak vowels.
Exploring the Variety of Random
Documents with Different Content
Rennes, 97.
Reymonenq, 55.
Richard (le roi), 136, 160.
Richard de Noves, 125.
Rieu, 87.
Riffart, 93.
Rigaut de Montpellier, 61, 193.
An. Rivière, 63.
Rixende de Puyvard (dame de Trans), 156.
Roch-Bourguet, 61.
Rocher (de), 93.
Rodel (Jean), 136.
Rodolphe (le roi), 124.
Rogier (Pierre), 184.
Rome, 168.
Roqueferrier, 194.
Roquefeuille (Ysarde de), 156.
Rostangue (dame de Pierrefeu), 156.
Roumanille, 55, 59, 61, 72.
Roumieux, 62, 63, 91.
Rousselot (l’abbé), 97.
Roux (J.), 91, 183.
Roux-Renard, 93.
Roux-Servine, 93.

Saboly, 178.
Sabran (Hugonne de), 156.
Saint-Antoni (Vte de), 186.
Saint Bernard, 165.
Saint Louis (roi), 160.
Saint-Pol (Cte de), 169.
Sainte-Beuve, 53.
Sainte-Palaye, 139.
Saluce (Mise de), 156.
Savari de Mauléon, 182.
Savinien (le frère), 96, 97, 227.
Schaffhouse, 125.
Schœll (Frédéric), 114.
Séguier (l’abbé), 194.
Silius Italicus, 109.
Simon de Montfort, 169.
Sordel, 153.
Stéphanette de Baulx, 156.
Swynford, 142.

Taillandier (René), 228.


Tallard (Anne, Vtesse de), 156.
Tandon, 194.
Tarif, 118.
Tavan (A.), 61, 73, 94.
Théodoric, 123.
Théroalde, 133.
Thibaut de Champagne, 136.
Thomas (A.), 97, 174.
Tiberge de Séranon, 158, 195.
Titien (le), 166.
Tournier (A.), 93.
Tourtoulon (de), 79, 81, 194.
Trogue-Pompée, 109.
Troubat (Jules), 73.
Troubat (Antoine), 93.

Ulphilas, 117, 118.


Ursynes des Ursières, 156.

Valence, 124.
Vertfeuil, 165.
Victor Hugo, 153.
Vidal (Pierre), 191.
Vienne, 124.
Vigne (l’abbé), 50.
Villemain, 81.
Villeneuve-Esclapon, 79.
Violante (princesse), 154.
Vitet, 97.
Voiture, 153.

W
Wagner-Robier, 93.
Wistace, 152.
Wœlfel, 118.

Xavier de Fourvières (dom), 227.


» de Ricard, 77.

Zacharie, 127.
BIBLIOGRAPHIE

Achard, Dictionnaire provençal et Grammaire


provençale.
Berluc-Pérussis (de), Carte des dialectes et sous-
dialectes provençaux.
Castor (J.-J.), l’Interprète provençal.
Crousillat, la Bresco.
Donadieu, les Précurseurs des Félibres.
Diétrich-Behrens, Bibliographie des patois gallo-
romains (trad. par Rabiet) (1889).
Duclo, Grammaire française expliquée au moyen
de la langue provençale (1826).
Fabre d’Olivet, Poésies occitaniennes et Cours
d’amour (1804).
Féraud, le Saint-Evangile (selon saint Matthieu), en
provençal (1866).
Garcin (E.), Dictionnaire provençal-français (1823-
1841).
Gazier, Lettres à Grégoire sur les patois de France.
Gélu (V.), Chansons provençales (1856).
Honorat, Dictionnaire provençal-français.
Jasmin, Œuvres (1825).
Jourdanne, Histoire du Félibrige.
Laugier de Chartrouse, Nomenclature patoise des
plantes des environs d’Arles (1859).
Laincel (de), Des Troubadours aux Félibres.
Morel (M.), Lou Galoubé (1828).
Papon, Origines et progrès de la langue provençale
(1776) (Histoire de Provence).
Pellas, Dictionnaire provençal-français (1723).
Pierquin de Gembloux, Histoire des patois (1858).
Raynouard, Choix de poésies des Troubadours.
Grammaire romane (1816).
Roumanille, Œuvres (1852).
Roux (J.-L.), Contes daù villagé (1869).
Savinien (le Frère), Grammaire et exercices en
langue provençale à l’usage des écoles primaires
(1882).
Tourtoulon (de), Des parlers populaires comparatifs
entre Vintimille et Antibes (1890).
Vidal, Etude sur les analogies linguistiques du
Roumain et du Provençal (1885).
Villeneuve (de... Christ.), Statistique des Bouches-
du-Rhône (1821).
Xavier de Fourvières (dom), Grammaire provençale
et exercices à l’usage des écoles primaires
(1893).—Lou pichot Trésor daù Félibrige (1901).
TOURS, IMPRIMERIE DESLIS FRÈRES, 6, RUE GAMBETTA.

Au lecteur
L’orthographe d’origine a été conservée et n’a pas été
harmonisée, mais les erreurs clairement introduites par le typographe
ou à l’impression ont été corrigées. Les mots ainsi corrigés sont
soulignés en pointillés. Placez le curseur sur ces mots pour faire
apparaître le texte original. A quelques endroits la ponctuation a été
tacitement corrigée.
La Table des matières ne correspondait pas exactement aux titres
dans le livre. Quelques corrections ont été apportées, qui sont
indiquées comme ci-dessus.
Les notes de bas de page ont été renumérotées et placées à la fin
de chaque chapitre.
*** END OF THE PROJECT GUTENBERG EBOOK LA
PROVENCE: USAGES, COUTUMES, IDIOMES DEPUIS LES
ORIGINES; LE FÉLIBRIGE ET SON ACTION SUR LA LANGUE
PROVENÇALE, AVEC UNE GRAMMAIRE PROVENÇALE
ABRÉGÉE ***

Updated editions will replace the previous one—the old editions


will be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright
in these works, so the Foundation (and you!) can copy and
distribute it in the United States without permission and without
paying copyright royalties. Special rules, set forth in the General
Terms of Use part of this license, apply to copying and
distributing Project Gutenberg™ electronic works to protect the
PROJECT GUTENBERG™ concept and trademark. Project
Gutenberg is a registered trademark, and may not be used if
you charge for an eBook, except by following the terms of the
trademark license, including paying royalties for use of the
Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such
as creation of derivative works, reports, performances and
research. Project Gutenberg eBooks may be modified and
printed and given away—you may do practically ANYTHING in
the United States with eBooks not protected by U.S. copyright
law. Redistribution is subject to the trademark license, especially
commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the


free distribution of electronic works, by using or distributing this
work (or any other work associated in any way with the phrase
“Project Gutenberg”), you agree to comply with all the terms of
the Full Project Gutenberg™ License available with this file or
online at www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand,
agree to and accept all the terms of this license and intellectual
property (trademark/copyright) agreement. If you do not agree to
abide by all the terms of this agreement, you must cease using
and return or destroy all copies of Project Gutenberg™
electronic works in your possession. If you paid a fee for
obtaining a copy of or access to a Project Gutenberg™
electronic work and you do not agree to be bound by the terms
of this agreement, you may obtain a refund from the person or
entity to whom you paid the fee as set forth in paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only


be used on or associated in any way with an electronic work by
people who agree to be bound by the terms of this agreement.
There are a few things that you can do with most Project
Gutenberg™ electronic works even without complying with the
full terms of this agreement. See paragraph 1.C below. There
are a lot of things you can do with Project Gutenberg™
electronic works if you follow the terms of this agreement and
help preserve free future access to Project Gutenberg™
electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright
law in the United States and you are located in the United
States, we do not claim a right to prevent you from copying,
distributing, performing, displaying or creating derivative works
based on the work as long as all references to Project
Gutenberg are removed. Of course, we hope that you will
support the Project Gutenberg™ mission of promoting free
access to electronic works by freely sharing Project
Gutenberg™ works in compliance with the terms of this
agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms
of this agreement by keeping this work in the same format with
its attached full Project Gutenberg™ License when you share it
without charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside
the United States, check the laws of your country in addition to
the terms of this agreement before downloading, copying,
displaying, performing, distributing or creating derivative works
based on this work or any other Project Gutenberg™ work. The
Foundation makes no representations concerning the copyright
status of any work in any country other than the United States.

1.E. Unless you have removed all references to Project


Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project
Gutenberg™ work (any work on which the phrase “Project
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:

This eBook is for the use of anyone anywhere in the United


States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it
away or re-use it under the terms of the Project Gutenberg
License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country where
you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is


derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of the
copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is
posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute
this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must, at
no additional cost, fee or expense to the user, provide a copy, a
means of exporting a copy, or a means of obtaining a copy upon
request, of the work in its original “Plain Vanilla ASCII” or other
form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or
providing access to or distributing Project Gutenberg™
electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project
Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite
these efforts, Project Gutenberg™ electronic works, and the
medium on which they may be stored, may contain “Defects,”
such as, but not limited to, incomplete, inaccurate or corrupt
data, transcription errors, a copyright or other intellectual
property infringement, a defective or damaged disk or other
medium, a computer virus, or computer codes that damage or
cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES -
Except for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU
AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE,
STRICT LIABILITY, BREACH OF WARRANTY OR BREACH
OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE
TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER
THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR
ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE
OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF
THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If
you discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person or
entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set
forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR
ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you do
or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.

Section 2. Information about the Mission of
Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.

Section 3. Information about the Project
Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status by
the Internal Revenue Service. The Foundation’s EIN or federal
tax identification number is 64-6221541. Contributions to the
Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.

The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact

Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or
determine the status of compliance for any particular state visit
www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.

Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.

Section 5. General Information About Project
Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.

Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.