Teaching Stamina and Silent Reading in the Digital-Global Age
Elfrieda H. Hiebert
TextProject
University of California, Santa Cruz
TextProject, Inc.
SANTA CRUZ, CALIFORNIA
textproject.org
ISBN: 978-1-937889-04-3
© 2015 Elfrieda H. Hiebert. Some rights reserved.
This work is licensed under the Creative Commons Attribution-
Noncommercial-No Derivative Works 3.0 United States License. To view a copy
of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/us/ or
send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
“TextProject” and the TextProject logo are trademarks of TextProject, Inc.
Cover photo (top) © istockphoto.com/CEFutcher. All rights reserved. Used under license.
Cover photo (bottom) © istockphoto.com/CEFutcher. All rights reserved. Used under license.
Contents
FOREWORD
Striking the Right Balance: Why Silent and Extended Reading of Challenging Materials Matters
Timothy Rasinski

Preface
Teaching Stamina and Silent Reading in the Digital-Global Age
Elfrieda H. Hiebert

CHAPTER 2
Eye Movements Make Reading Possible
S. Jay Samuels, Elfrieda H. Hiebert, & Timothy Rasinski

CHAPTER 3
Are Students Really Reading in Independent Reading Contexts? An Examination of Comprehension-Based Silent Reading Rate
Elfrieda H. Hiebert, Kathleen M. Wilson, & Guy Trainin

CHAPTER 5
The Relationship Between a Silent Reading Fluency Instructional Protocol on Students’ Reading Comprehension and Achievement in an Urban School Setting
Timothy V. Rasinski, S. Jay Samuels, Elfrieda H. Hiebert, Yaacov Petscher, & Karen Feller

CHAPTER 6
Exploring the Added Value of a Guided Silent Reading Intervention: Effects on Struggling Third-Grade Readers’ Achievement
D. Ray Reutzel, Yaacov Petscher, & Alexandra N. Spichtig

CHAPTER 8
Revisiting Silent Reading in 2020 and Beyond
Elfrieda H. Hiebert & D. Ray Reutzel
FOREWORD
Timothy Rasinski
Kent State University
Over the past dozen years or so, reading assessment and reading instruction itself increasingly have come to be defined primarily by oral reading, often for speed, and for very short periods of time. This evolution has been due to a number of factors. First, the curriculum-based measurement (CBM) approach to reading assessment reduced reading assessment to measures of reading rate over one-minute periods (Deno, 1985). Second, although the National Reading Panel (National Institute of Child Health and Human Development, 2000) identified reading fluency as a critical variable for proficient reading, the panel restricted fluency to oral reading. These developments, as well as others, have affected how reading is being taught. In many primary and intermediate classrooms around the country, oral reading is the predominant form of reading. Time is allocated to daily fluency instruction where students read a short passage repeatedly for the primary purpose of reading it faster. This practice is accompanied by regular assessments of students’ reading (as often as weekly in some classrooms) on the number of words students can read correctly in a minute on an instructional-level passage.

These developments in reading instruction are based on a solid foundation of research, and indeed I feel there is a legitimate place for them in the classroom. Reading rate as determined by CBMs and other similar one-minute readings of short passages is a good measure of word-recognition automaticity, a critical factor in comprehension. Oral reading experiences, especially authentic oral experiences, such as the recitation of poetry or the performance of a script, have been shown to improve reading fluency and overall reading proficiency. However, these are not the only instructional factors that must be considered for effective reading and reading instruction.
Most reading done by adults is silent reading. As such, it is not
unreasonable to expect students to receive instruction and support in silent reading in their classrooms. Further, a fair amount of adult reading consists of lengthy texts. Again, therefore, it is not unreasonable to provide students with support and opportunities to read such texts on their own, primarily silently. Additionally, as adults we are occasionally called on to closely read material that we might consider difficult or challenging—for example, technical texts related to our profession or courses we may be taking, or even texts we read for our own pleasure and entertainment purposes that may be more challenging in nature. Certainly, giving students similar opportunities to read challenging material—with appropriate support—needs to be part of our reading instruction. Therefore, we may safely conclude that issues of silent reading fluency, stamina, and close reading of complex texts are foundational for proficiency in reading and success in various academic and technical fields. These issues are challenging for literacy scholars and educators alike. Each element is critical in its own right. Additionally, these elements interact with one another and other critical variables in the reading process. Despite their importance, however, silent reading, stamina, and complex texts are issues that, until recently, have not received sufficient scholarly attention.
That is what this book is about. Dr. Elfrieda Hiebert is one of the few literacy scholars who has extensively studied issues of silent reading, stamina, and text complexity, and in this volume, she has assembled a collection of original and previously published papers that explore these vital issues in depth. This volume offers readers the opportunity to explore the conceptual nature of these issues and discover how exactly we may begin to go about providing students with the relevant instruction that will help them achieve success in these areas. The first four chapters explore just what stamina and silent reading mean. The notion of text complexity is embedded within these chapters, as readers must engage in silent reading with stamina in order to negotiate such texts successfully and efficiently. Here we are confronted with the reality of what happens in school-based reading instruction. To develop proficiency in reading, students need to practice reading.

As Hiebert stresses in her opening chapter, many students simply do not spend enough time reading in school. Increasing the amount that students read silently in schools is one of the solutions explored in the second half of the volume. Other ways that students can be supported in developing silent reading stamina are explored in the final four chapters. These applications range from a computer-based instructional protocol
that requires minimal teacher input to an approach that relies on the teacher to provide scaffolding and support for silent reading. These chapters may provide foundational principles that educators and scholars can use in order to develop their own approaches to instruction that develop students’ silent reading stamina.

Success in real-world reading is not measured by how fast a person can orally read a short text. Rather, reading success is more likely to be an outcome of how well a person can engage in meaningful, close, silent readings of lengthy, complex, and challenging material for extended periods of time. This volume is a significant step in moving the literacy field, scholars, curriculum developers, and practitioners toward a deeper consideration and understanding of these critical issues.
References

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.

National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.
Preface
The 21st century demands that individuals have a high level of literacy to successfully participate in the tasks of colleges, communities, and jobs. Nonetheless, many students in the United States are not attaining the necessary levels of literacy, according to the National Assessment of Educational Progress (National Center for Education Statistics, 2013). The assessment shows that approximately two-thirds of a grade cohort fail to attain the proficient level. Such poor performances are often traced to a lack of word recognition skills, and solutions have been designated to ameliorate this perceived gap (see California Board of Education, 2014). But evidence is strong that all but a small percentage of American students—approximately 2% of a grade cohort—can recognize the majority of words in a grade-level text by the end of the primary grades (Bielinski, Daniel, & Hiebert, 2015). Most students can read, but they don’t have rigorous independent reading habits. What many students lack is stamina—the ability to persevere in reading texts on their own.
In the contexts of most real-life reading and as reflected in assessments, individuals read texts on their own. That setting is significantly different than the oral reading context, which, in recent times, has dominated reading instruction in American elementary classrooms. In silent reading, students need to monitor their own comprehension. If they are reading too quickly, they need to accommodate their rate of reading to match their comprehension. They need to revisit a word if they couldn’t figure it out the first time. By contrast, a teacher, tutor, or peer is present when students are reading orally. It’s hard to stop and scan a page when someone is expecting an oral rendition of the text.

Without sufficient experience or strategies for silent reading, many students read slowly. In lieu of a teacher or tutor to monitor student performance, many students soon engage in less than efficacious behaviors in the silent reading context. Some eventually engage in counterproductive behaviors such as skimming. Mandates and interpretations
of mandates during the No Child Left Behind era may have exacerbated the situation, as more emphasis was placed on oral reading in assessment and instruction. But assessments of oral reading fluency and accuracy do not describe student performances in a silent reading context (Trainin, Hiebert, & Wilson, 2015).

The nature of silent reading and the manner in which it can be guided has been relegated to secondary status relative to oral reading. But ultimately, it is silent reading that is most important. In the tasks of colleges, workplaces, and communities—tasks such as voting, reviewing documents to purchase large-ticket items such as cars or houses, and seeking employment—individuals read silently, not orally. Silent reading patterns and examinations of instruction that supports efficient habits have received short shrift in both the research and pedagogical literature. The handful of programs of research that have been devoted to the topic are represented in this volume. These papers indicate that silent reading stamina can be improved through intentional instruction and teachers’ design of tasks.
of text complexity within the CCSS (National Governors Association, Center for Best Practices, & Council of Chief State School Officers, 2010). Mesmer’s chapter sheds considerable light on demands for stamina in reading complex texts by (a) examining what is meant by complex text, (b) explaining how increased demands on text complexity could influence reader-text interactions, (c) explaining the rationale for stretch text, and (d) noting factors that may contribute to or hinder students’ increased capacity when texts stretch their reading capacity.

The next two chapters in this section describe a response to the problems related to stamina in silent reading. The response is an intervention in which the amount of reading is considerable. Even students whose silent reading habits are not efficacious read more in the intervention when students’ reading is scaffolded digitally. Texts are matched to students’ reading levels with daily assignments based on their ongoing comprehension performances. In addition, the digital context makes it possible to vary length of reading segments, number of comprehension questions, use of repeated readings, and assignment of prereading techniques. The results reported in these two chapters indicate that increasing silent reading of texts of appropriate levels influences students at all levels—from the end of the primary grades through high school.
In the first project that is described in Chapter 5, Rasinski, Samuels, Hiebert, Petscher, and Feller describe the results of the intervention with students from grades 4 through 10. Students who participated in the program for a minimum of 40 lessons (20 hours of instruction) over approximately 6 months made significantly greater gains on both criterion-referenced and norm-referenced tests than students who participated in alternative interventions. The gains were found generally in all grade levels studied and in all subpopulations, except for English learners.
Chapter 6 by Reutzel, Petscher, and Spichtig describes the efficacy of the same intervention with third graders. In this study, the authors compared the efficacy of the intervention to three other interventions. The students in the Reading Plus intervention demonstrated significantly superior performance on the state’s reading assessment compared with students in the other interventions. Reutzel et al. conclude that the silent reading intervention afforded struggling third-grade students appropriately challenging and varied reading genres that were both motivating and within their reach. These two reports, then, offer evidence that even struggling readers, when provided with scaffolded support, can develop stronger patterns of stamina in silent reading.
How Can Stamina Become a Focus of English/Language Arts Instruction?

Stamina in silent reading poses a substantial problem for the success of many students in attaining world-level literacy standards. The chapters in this section consider the steps that need to be taken by educators and researchers to make stamina in silent reading an integral part of students’ school experiences.
In Chapter 7, Hiebert, Samuels, and Rasinski illustrate the efficacy of interventions that emphasize silent reading stamina at three developmental levels: primary, intermediate to high school, and young adult. The authors conclude there is sufficient support for initiating policies and practices in classrooms on all levels aimed at increasing silent reading stamina. They also conclude that the process of developing silent reading stamina extends through the elementary grades and into middle and high school as students encounter new genres and content. At least for the students who depend on schools to become literate, good silent reading habits require that they participate in structured silent reading experiences that model efficient reading.
The final chapter of the volume makes a final plea for attending to the critical proficiency of silent reading stamina. The chapter ends with a note of optimism. In particular, Ray Reutzel and I conclude that, while the digital age increases demands for literacy, it also offers increased opportunities. Digital contexts can support students who are especially vulnerable when they enter school or who have not been successful in typical learning contexts. Digital contexts can provide consistency of exposure to ensure that students are reading at appropriate levels and are staying on task. These opportunities can support students in successful participation in other literacy contexts, including the large and small group and independent classroom contexts. Efficacious silent reading patterns depend on thoughtful and strategic actions that are part of interventions (such as the ones provided digitally) and typical instructional contexts.
outcomes, which promise to be lackluster (Ujifusa, 2012, 2013), are likely to result in a great deal of hand-wringing among educators. The release of the assessment results will likely be accompanied by explanations and accusations on the part of pundits as they attempt to interpret results.

One missing element is an explanation of what the less than propitious results of students in many states on the new-generation assessments have to do with attention to the typical tasks of instruction and the manner in which they support students’ ability to read silently for extended periods. Stamina, as the chapters in this volume illustrate, is critical for ensuring that students are ready for the tasks of college, communities, and the workplace as well as the new generation of assessments.
As the first volume to address the topic of silent reading stamina, Teaching Stamina and Silent Reading in the Digital-Global Age will be a useful guide for many constituencies. Among those who will benefit from this volume are teacher educators and professional development leaders who interact with teachers in courses and workshops. The volume is especially pertinent to supervisors and curriculum leaders in districts, states, and agencies such as regional laboratories who work in the translation of policies to practices. Further, graduate students and professors who study the efficacy of practice in supporting proficient student reading will find the volume useful in the design of research, especially regarding instructional interventions. The conclusions and suggestions offered in the chapters in this volume are intended to serve as grist for study groups of teachers, graduate and undergraduate courses, professional development sessions, and conversations among colleagues.
A Note of Gratitude

This book would not be possible without the generosity of a number of publishers and authors who gave permission to reprint several of the chapters in this volume. Readers who are unfamiliar with the legalities of academic book and journal publishers may be unaware that scholars retain the rights to their work until a manuscript begins the copyediting phase of publishing as a journal article or a book chapter. Colleagues at Taylor & Francis were generous in acknowledging this policy and confirming that the papers that appear as Chapters 5, 6, and 7 could be used in the present volume.

A special note of gratitude is owed to both Tim Rasinski and Ray Reutzel. Tim generously wrote the foreword and, additionally, provided the
manuscript that was accepted for publication in Reading Psychology, which later became Chapter 5 in this volume. Ray was most generous in agreeing to permit the republication of chapters from Revisiting Silent Reading after the International Reading Association reverted the rights of the volume to Ray and me as coeditors. This generosity made possible the publication in this volume of Chapters 2, 3, and 8. In addition, he generously provided the accepted version of the manuscript provided to the Journal of Educational Research (Chapter 6 in this volume).
A skillful editor is truly a gift, and I thank Stacy Sharp, who meticulously edited all of the prepublication manuscripts and the chapters commissioned for this volume. An individual who can produce an e-book is also priceless and, for serving that role for this volume, I thank Alice Folkins. She has produced many of the products of TextProject, but creating an e-book is a new venture that she bravely—and successfully—took on.
I conclude with thanks to Charley Fisher who handles the many
logistics that make TextProject possible. Without Charley’s generosity
and unfailing willingness to attend to the details, this project—and many
others at TextProject—would only be a dream. I hope this volume will
help the many students who come to our schools every day with dreams of
success see those dreams come true.
EHH
Santa Cruz, CA
June 2015
References

Bielinski, J., Daniel, M., & Hiebert, E. H. (February 19, 2015). Patterns of silent reading fluency and accuracy: What they mean for instruction and intervention. Presentation at the annual meeting of the National Association of School Psychologists, Orlando, FL. Retrieved from https://www.academia.edu/11191138/Patterns_of_Silent_Reading_Fluency_and_Accuracy_What_They_Mean_for_Instruction_and_Intervention

California Board of Education. (2014). English language arts/English language development framework. Sacramento, CA: California Department of Education. Retrieved from http://www.cde.ca.gov/ci/rl/cf/elaeldfrmwrksbeadopted.asp

National Center for Education Statistics. (2013). The nation’s report card: A first look: 2013 mathematics and reading (NCES 2014-451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

National Governors Association, Center for Best Practices, & Council of Chief State School Officers. (2010). Common Core state standards for English language arts and literacy in history/social studies, science, and technical subjects: Appendix A. Washington, DC: Authors. Retrieved from http://www.corestandards.org/assets/Appendix_A.pdf

Trainin, G., Hiebert, E. H., & Wilson, K. (2015). A comparison of reading rates, comprehension, and stamina in oral and silent reading of fourth-grade students. Reading Psychology. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/02702711.2014.966183

Ujifusa, A. (2012). Scores drop on KY’s Common Core-aligned tests. Education Week Online. Retrieved from http://www.edweek.org/ew/articles/2012/11/02/11standards.h32.html

Ujifusa, A. (2013). Tests aligned to Common Core in New York State trigger score drops. Education Week Online. Retrieved from http://blogs.edweek.org/edweek/state_edwatch/2013/08/_one_interesting_aspect_of.html
I. UNDERSTANDING THE PROBLEM & THE CONSTRUCT
CHAPTER 1
The new assessments developed by the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers (PARCC) to align with the Common Core State Standards (CCSS; National Governors Association Center for Best Practices [NGA Center] & Council of Chief State School Officers [CCSSO], 2010) require all but the most severely disabled students to read and respond to texts in a digital context. Beginning at third grade, students are expected to read and respond to texts silently over extensive periods of time (see Table 1.1). And, unlike typical classroom reading tasks, students will have no access to teachers to present a first read or to help them by scaffolding a section of text, monitoring their reading, or advising them when it is time to start answering questions or writing responses.
Of course, extended silent reading is not a requirement limited to the new CCSS-related assessments. For the tasks of college, citizenry, and the workplace, we most often conduct reading tasks silently on our own for sustained periods of time. But it is highly likely that many students will not be prepared for the challenge of the silent reading tasks posed by the new assessments. The reason for this challenge is not—as pundits and observers of education frequently suggest—that American students cannot read. Indeed, most American students can read. What many students cannot do is independently maintain reading focus over long periods of time. The proficiency they lack is stamina—the ability to sustain mental effort without scaffolds or adult supports.
In this chapter, I provide an overview of three themes that are echoed in the chapters of this book: (a) stamina is a major challenge for many American students, (b) silent reading proficiency depends on extensive reading opportunities, and (c) appropriate instructional applications can increase students’ silent reading proficiency. First, however, I identify and define the constructs that are the foci of this book—silent reading, comprehension-based silent reading rate, and the role of oral reading (including oral reading of instructional texts by teachers).
Table 1.1: Administration Times and Number of Sessions: CCSS Assessment Consortia [1]

Grade 3
  PARCC: EOY [2]: 60 min. x 2 sessions; Perf: 40-60 min. per task; TOTAL: approximately 4.5 hours
  SBAC: CAT [3]: 1 hr. 45 min.; Perf: 35 min. (stimulus + research Qs; 70 min. writing prompt); TOTAL: approximately 3.5 hours

Grades 4-5
  PARCC: EOY: 70 min. x 2 sessions; Perf: 50-80 min. per task; TOTAL: approximately 5 hrs. 50 min.
  SBAC: CAT: 1 hr. 45 min.; Perf: 35 min. (stimulus + research Qs; 70 min. writing prompt); TOTAL: approximately 3.5 hrs.

Grades 6-8
  PARCC: EOY: 70 min. x 2 sessions; Perf: 50-85 min. per task; TOTAL: approximately 5 hrs. 55 min.
  SBAC: CAT: 1 hr. 45 min.; Perf: 35 min. (stimulus + research Qs; 70 min. writing prompt); TOTAL: approximately 3.5 hrs.

Grades 9-11
  PARCC: EOY: 70 min. x 2 sessions; Perf: 50-85 min. per task; TOTAL: approximately 5 hrs. 55 min.
  SBAC: CAT: 2 hrs.; Perf: 35 min. (stimulus + research Qs; 70 min. writing prompt); TOTAL: approximately 4 hrs.

[1] From Wixson (2013).
[2] EOY: End-of-Year
[3] CAT: Computer Adaptive Technology
Hiebert, Wilson, and Trainin (2010) have introduced the construct of
comprehension-based silent reading rate. Initially, we gave the construct
the acronym CBSRR but, over time, we have shortened this to CSR, which
stands for comprehension-silent reading rate. As this term implies, the
emphasis of CSR is on establishing the rate at which students read silently
with comprehension.
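As a concrete illustration of how such a rate differs from a raw words-per-minute count, the idea can be sketched as follows. This is a minimal sketch only: the function name, the 75% criterion, and the example passage are hypothetical illustrations, not the operationalization used in the studies.

```python
def comprehension_based_rate(words_read, minutes, comprehension, criterion=0.75):
    """Words per minute, counted only when comprehension meets the criterion.

    Returns None when comprehension falls below the criterion, since in
    this sketch a rate achieved without comprehension does not count as
    a comprehension-based silent reading rate.
    """
    if comprehension < criterion:
        return None
    return words_read / minutes

# A student who silently reads a 500-word passage in 4 minutes and answers
# 80% of the comprehension questions correctly:
rate = comprehension_based_rate(500, 4.0, 0.80)  # 125.0 words per minute
```

The design point is simply that rate and comprehension are reported jointly rather than as independent scores.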
than students who were reading at the frequently cited independent level of 98% or higher (Betts, 1946). This pattern would suggest that students lack automaticity, not the fundamental ability to recognize words, as is frequently assumed in policies and mandates. For example, the current California textbook requirements (California State Board of Education, 2014) require that intervention programs for students in grades 4 through 8 contain decodable readers for each of the 43 phonemes and their graphemes.
Table 1.2: Accuracy Levels for Words Read without Meaning Change (Percentages) for Students Within Two NAEP Studies

            100-98%   97-95%   94-90%   <90%
1992 [1]       41        51       5       2
2002 [2]       76        15       5       2

[1] Pinnell et al., 1995
[2] National Center for Education Statistics, 2005
The DIBELS norms are based on approximately 167,000 students in kindergarten through grade 12 representing every census region in the U.S. (Dewey et al., 2013)—approximately 24,000 students per grade level. Table 1.4 provides accuracy, rate, and comprehension data for fourth graders. These data support the NAEP data, as even students at the 10th percentile display reasonable accuracy—95%. Their rate, however, is approximately 60% of the oral reading rate of typical grade-level readers. DIBELS developers have added a retelling measure to the assessment. Differing considerably from the comprehension measures typical of the NAEP and of the new CCSS-aligned assessments, this measure indicates that students’ challenges lie not in their ability to recognize individual words but in their ability to think about text.
Table 1.4: Fourth Graders’ Rate, Accuracy, and Comprehension on DIBELS (2011 to 2012 Cohort)
Percentile   Rate (words/min.)   Accuracy (%)   Comprehension
10 80 95 21
20 98 97 27
30 109 97 32
40 118 98 36
50 128 98 41
60 138 99 45
70 147 99 50
80 160 99 57
90 176 100 67
99 212 100 94
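The "approximately 60%" comparison in the text can be checked directly against the rates in Table 1.4; it is a simple ratio of the tabled values:

```python
# Oral reading rates (words per minute) taken from Table 1.4,
# DIBELS 2011-2012 fourth-grade cohort
rate_10th = 80    # students at the 10th percentile
rate_50th = 128   # typical grade-level readers (50th percentile)

ratio = rate_10th / rate_50th
print(round(ratio, 3))  # prints 0.625, i.e., roughly 60% of the typical rate
```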
lack of automaticity in word recognition does appear to be an issue for
the students in the bottom 5% or even 7% of a cohort, most students can
recognize the core vocabulary. However, when they are asked to sustain
their attention in silent reading, these students appear not to have the
stamina that is required to interact with texts in a meaningful manner.
variables, it was only when time was allocated for text reading in classrooms that significant gains were found on any post-test measures (including word reading, decoding, and passage comprehension). No other time factors, including time spent on word recognition, alphabetic knowledge, or phonemic awareness instruction, independently contributed to reading growth. In another study, Kuhn and Schwanenflugel (2009) reported that the distinguishing feature in a large scale-up of an intervention was not in the results demonstrated by the intervention but rather the success of students in relation to the amount of time that they spent reading. Students in the seven most successful classes read seven minutes more each day than did the students in the seven least successful classrooms, regardless of whether classrooms were part of the intervention.

Observational studies over the decades have shown, however, that the percentage of school time that students spend reading texts in many classrooms is limited. Leinhardt et al. (1981) found that the amount of time that students spent reading was approximately 15% of the time allocated to reading instruction. Taylor, Frye, and Maruyama (1990) found that students spent an average of 15.8 minutes a day in either assigned reading or sustained silent reading (SSR).
All evidence points to the fact that, although the amount of time devoted to reading instruction increased during and following the NCLB era (Dorph et al., 2007), the amount of time that students actually spend reading has not increased substantially. Brenner, Hiebert, and Tompkins (2009) observed the amount—and kinds—of reading in which third graders participated in a sample of classrooms that were participating in a state’s Reading First program. On average, across the 64 classrooms, teachers reported that they were devoting twice as much time to English language arts instruction as they had prior to the implementation of Reading First, but their students were involved with text less than 20% of the time, spending an average of 18 minutes a day reading text. This amount of reading practice is less than the amounts proposed by Allington (2001) and Fisher and Ivey (2006), but it was greater than the national average of 12 minutes a day reported by the NCES (1999). Even so, nearly a quarter of students did not read at all during the observed reading periods in the classrooms in Brenner et al.’s sample.
In the classrooms that Brenner et al. (2009) observed, less than 10% of total reading instructional time was allocated to unassisted reading, where students are responsible for reading texts on their own without teacher assistance or immediate monitoring. The small amount of time that students read on their own can be tied to interpretations that were
prominent as a result of the report of the NRP (i.e., NICHD, 2000), which concluded that there was insufficient evidence to support independent reading in classroom time. Teachers in the study had been informed of this finding as part of Reading First trainings and they appeared to follow this advice, even though the teacher’s guides in their mandated core reading programs included in-school independent reading.

During the NCLB era, many educators extended the NRP’s conclusion on independent reading to silent reading (or unassisted reading, as Brenner et al. called it) as part of instructional sessions (Allington, Billen, & McCuiston, 2015). This extension of the conclusion to independent, silent reading did not accurately reflect the studies on which the NRP based their conclusions—studies of SSR where students read texts of their own choosing and without teacher monitoring or scaffolding. The popular interpretation of this finding among educators, however, was understandable in that the NRP did not provide a highly nuanced description of the findings and also failed to include descriptive studies in their database, such as the Manning and Manning (1984) study that showed that SSR was more effective when it included peer discussion or teacher conferencing.
Following the NRP report, Lewis (2002) analyzed a broader
group of independent reading studies, many pertaining to students’
silent reading. Out of more than 100 separate student samples that
Lewis examined, the majority showed positive results for silent reading.
The samples in most of the studies that reported no effects or negative
growth from silent reading experiences consisted of students in
fourth grade or above. Lewis speculated that because older students
already have some reading proficiency, 10- to 15-minute silent reading
periods—as was typical in these studies—may have been insufficient to
significantly influence these students’ performance. For students who
were less-proficient readers (e.g., beginning readers, learning disabled,
second-language learners), even such short periods typically produced
benefits. Specifically, the studies suggest that when there is some form
of scaffolding, students’ silent reading proficiencies improve as a result
of increased opportunities to read (Nunnery, Ross, & McDonald, 2006).
Scaffolding may need to take numerous forms, including support for
selecting appropriate texts (Mervar & Hiebert, 1989).
On the 1998 NAEP (NCES, 1999), fourth graders were asked to
report the number of pages that they read daily in school. Even though a
measure of self-reported reading is a rather simple tool (and not necessarily
the most accurate), this measure predicted students’ performances on
the NAEP. A follow-up study that focused specifically on the students
within the state of Maryland confirmed that, after parental education
was statistically controlled, the amount of engaged reading significantly
predicted reading achievement on the NAEP (Guthrie, Schafer, & Huang,
2001).
The survey used in the Guthrie et al. study used number of pages
read to determine the amount of reading. In Table 1.5, I have converted
pages read to number of words likely read by a hypothetical student in
each of three proficiency groups on the NAEP, using the average number
of words per page in a set of 100 fourth-grade texts. It is highly unlikely
that all three hypothetical students, representing different proficiency
groups on the NAEP, read at a similar rate (Pinnell et al., 1995; NCES,
2005), making the disparities in amount of text read daily in school by less-
proficient and more-proficient students likely greater than the amounts
shown in Table 1.5. But as Table 1.5 illustrates, even when a similar reading
rate is used across proficiency levels, differences in amount of time spent
reading in school mean that the poor readers keep getting poorer and the
proficient readers keep getting better (Stanovich, 1986).
Table 1.5: Typical Reading Volume: Reading Levels of Three Hypothetical Students

                                              Alex            Alice            Abby
Daily reading in school (in minutes)          7.2             11               15
Daily # of words read (yearly total words)    715 (127,700)   1,100 (198,000)  1,485 (267,300)
Projected new words (with morphological       290 (1,160)     446 (1,784)      601 (2,406)
  family members)
Performance on NAEP                           Below-basic     Basic            Proficient

Note: Same reading rate used for all students: 100 wpm.
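The arithmetic behind Table 1.5 can be sketched in a few lines. This is our reconstruction, not the author's: the 100 wpm rate comes from the table note, while the 180-day school year is an assumption inferred from the yearly totals, and the printed daily figures differ slightly from these round numbers.

```python
# Rough reconstruction of the Table 1.5 arithmetic (a sketch under
# assumptions: 100 wpm for all students, a 180-day school year).
SCHOOL_DAYS = 180
RATE_WPM = 100  # same rate assumed for all three students

def yearly_words(minutes_per_day, rate_wpm=RATE_WPM, school_days=SCHOOL_DAYS):
    """Return (words read per day, projected yearly total)."""
    daily = minutes_per_day * rate_wpm
    return daily, daily * school_days

for name, minutes in [("Alex", 7.2), ("Alice", 11), ("Abby", 15)]:
    daily, yearly = yearly_words(minutes)
    print(f"{name}: {daily:,.0f} words/day, {yearly:,.0f} words/year")
```

For Alice, 11 minutes at 100 wpm gives 1,100 words a day and 198,000 words a year, matching the table; the other two columns land within a few percent of the printed figures, which suggests the source rates were just under 100 wpm.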
from grades 2 through 12 (Hiebert, Spichtig, & Bender, 2013), over 14% of
the students could not comprehend a first-grade text. What is surprising
is what these students gained from consistent reading—on computers—
over a two-month period following the assessment. After only 10 hours
of instruction that consisted of reading extended texts and answering
comprehension and vocabulary questions, these students had moved from
a 58% to 79% (on average) level of comprehension, moved to one grade
level higher of text, and were reading an average of nine words faster
(Hiebert et al., 2013). These students had sufficient word recognition—even
the lowest scoring ones—to increase substantially in their comprehension
on a first-grade passage. And this growth happened after students had read
approximately 40,000 words over the course of 40 lessons. Even a relatively
small increase in reading apparently can mean substantial increases in
students’ proficiency.
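The scale of this intervention is worth making explicit; the per-lesson breakdown below is our arithmetic, not the authors' reporting.

```python
# Per-lesson volume implied by the reported totals: 40,000 words and
# 10 hours of instruction spread across 40 lessons.
total_words = 40_000
lessons = 40
hours_of_instruction = 10

words_per_lesson = total_words / lessons
minutes_per_lesson = hours_of_instruction * 60 / lessons
print(words_per_lesson, minutes_per_lesson)  # 1000.0 words in 15.0 minutes
```

In other words, the gains came from sessions of roughly a thousand words in about fifteen minutes each, a modest but consistent dose of silent reading.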
These reports (Rasinski et al., 2011; Reutzel et al., 2012; Hiebert
et al., 2013) all indicate that there are instructional mechanisms that can
support students in developing the reading habits that are needed for the
21st century—and that build on the research on cognitive and linguistic
processes. But most teachers don’t have access to digital technology such
as that which I have discussed, nor am I advocating that digital technology
or a particular program is the solution to all reading problems. Instead,
it is critical to consider the important components of various kinds
of successful programs. Using knowledge about research, theory, and
practice, I have generated seven actions that teachers can take to support
increased stamina in silent reading. The actions are listed below.
1. Give students responsibility for the first read of texts.
2. Be explicit about the degree of challenge.
3. Have students set explicit goals for increased stamina and reading.
4. Increase the amount that students are reading.
5. Increase students’ engagement in reading through connected homework reading and magazine articles.
6. Increase students’ responses to texts through writing and discussions.
7. Have monthly “on your own” sessions using available sample assessments.
Individual teachers can implement these actions over the course of
a school year with a cohort of students. Getting support in one year may
make a difference (as was the case in the Rasinski et al., 2011, and Reutzel
et al., 2012, studies). As the Hiebert and colleagues (2013) project indicates,
students can benefit even from several months of consistent and deliberate
opportunities for increased silent reading. But for students who have
developed poor reading habits in the early grades, the effort of creating
strong silent reading patterns, including stamina, will likely require the
involvement of teachers over several years of students’ school careers.
Opportunities need to be consistent and aimed at acquiring knowledge.
The texts can’t be vacuous—otherwise students won’t be engaged in
reading—but neither should the texts be far outside the realm of students’
knowledge or their vocabulary expertise.
Conclusions
The need for efficient silent reading habits for success in the digital-
global age is unarguable. There is emerging evidence that these habits
can be enhanced through scaffolding, both on the part of teachers and
from digital supports. These supports look quite different from the SSR
that Hunt (1970) advocated. This structuring can begin when
students are in the early stages of reading (Reutzel et al., 2008). Further, it
is highly likely that the process is an ongoing endeavor, extending through
the elementary grades and into middle and high schools as students
encounter new genres and content. At least for the students who depend
on schools to become literate, good silent reading does not just happen as a
result of an emphasis on oral-reading fluency training. For many students,
good silent reading habits require that they participate in structured silent
reading experiences that model efficient reading. The target activities can
be summarized as a succinct mantra (Hiebert, 2013) that provides the
meanings for increasing stamina in silent reading: Read often. Mostly
silently. Focus on knowledge.
References

Allington, R. (2001). What really matters for struggling readers: Designing research-based programs. New York, NY: Addison-Wesley.

Allington, R., Billen, M.T., & McCuiston, K. (2015). The potential impact of the Common Core State Standards on reading volume. In P.D. Pearson & E.H. Hiebert (Eds.), Research-based practices for teaching Common Core literacy (pp. 161-178). New York, NY: Teachers College Press.

Betts, E.A. (1946). Foundations of reading instruction, with emphasis on differentiated guidance. New York, NY: American Book Company.

Brenner, D., Hiebert, E.H., & Tompkins, R. (2009). How much and what are third graders reading? In E.H. Hiebert (Ed.), Reading more, reading better (pp. 118-140). New York, NY: Guilford.

California State Board of Education (July 9, 2014). The ELA/ELD Framework. Sacramento, CA: Author.

Clay, M.M. (1985). The early detection of reading difficulties (3rd ed.). Portsmouth, NH: Heinemann.

Dewey, E.N., Kaminski, R.A., & Good, R.H., III. (2013). 2011-2012 DIBELS Net system-wide percentile ranks for DIBELS Next. Eugene, OR: Dynamic Measurement Group.

Dorph, R., Goldstein, D., Lee, S., Lepori, K., Schneider, S., & Venkatesan, S. (2007). The status of science education in the Bay Area: Research brief. Berkeley, CA: Lawrence Hall of Science, University of California, Berkeley.

Fisher, D., & Ivey, G. (2006). Evaluating the interventions for struggling adolescent readers. Journal of Adolescent & Adult Literacy, 50(3), 180-189.

Fisher, C.W., Berliner, D.C., Filby, N.N., Marliave, R., Cahen, L.S., & Dishaw, M.M. (1980). Teaching behaviors, academic learning time, and student achievement: An overview. In C. Denham & A. Lieberman (Eds.), Time to learn: A review of the beginning teacher evaluation (pp. 7-32). Washington, DC: U.S. Department of Education, National Institute of Education.

Foorman, B.R., Schatschneider, C., Eakin, M.N., Fletcher, J.M., Moats, L.C., & Francis, D.J. (2006). The impact of instructional practices in grades 1 and 2 on reading and spelling achievement in high poverty schools. Contemporary Educational Psychology, 31(1), 1-29.

Guthrie, J.T., Schafer, W.D., & Huang, C.W. (2001). Benefits of opportunity to read and balanced instruction on the NAEP. The Journal of Educational Research, 94(3), 145-162.

Hiebert, E.H. (August 30, 2013). Reading rules for becoming proficient with complex texts [Frankly Freddy]. Santa Cruz, CA: TextProject. Retrieved from http://www.textproject.org/library/frankly-freddy/reading-rules-for-becoming-proficient-with-complex-texts/

Hiebert, E.H., Menon, S., Martin, L.A., & Bach, K.E. (2009). Online scaffolds that support adolescents’ comprehension (Research Brief). Seattle, WA: Apex Learning.

Hiebert, E.H., Spichtig, A., & Bender, R. (2013). Building capacity in low-performing readers: Results of two months of Reading Plus practice (Research Brief 2.1). Winooski, VT: Reading Plus. Retrieved from http://www.readingplus.com/results/research-briefs/

Hiebert, E.H., Trainin, G., & Wilson, K. (July 15, 2011). Comprehension and reading rates across extended grade-appropriate texts. Paper presented at the annual conference of the Society for the Scientific Study of Reading, St. Petersburg, FL.

Hiebert, E.H., Wilson, K.M., & Trainin, G. (2010). Are students really reading in independent reading contexts? An examination of comprehension-based silent reading rate. In E.H. Hiebert & D.R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 151-167). Newark, DE: International Reading Association.

Hunt, L.C., Jr. (1970). The effect of self-selection, interest, and motivation upon independent, instructional, and frustration levels. The Reading Teacher, 24(2), 146-151, 158.

Kuhn, M.R., & Schwanenflugel, P.J. (2009). Time, engagement, and support: Lessons from a 4-year fluency intervention. In E.H. Hiebert (Ed.), Reading more, reading better (pp. 141-161). New York, NY: Guilford.

Leinhardt, G., Zigmond, N., & Cooley, W.W. (1981). Reading instruction and its effects. American Educational Research Journal, 18(3), 343-361.

Lewis, M. (2002). Read more—read better? A meta-analysis of the literature on the relationship between exposure to reading and reading achievement. Unpublished doctoral dissertation, University of Minnesota, Twin Cities.

Manning, G.L., & Manning, M. (1984). What models of recreational reading make a difference? Reading World, 23(4), 375-380.

Mervar, K., & Hiebert, E.H. (1989). Literature selection strategies and amount of reading in two literacy approaches. In S. McCormick & J. Zutell (Eds.), Cognitive and social perspectives for literacy research and instruction (38th Yearbook of the National Reading Conference; pp. 529-535). Chicago, IL: National Reading Conference.

National Center for Education Statistics. (1999). NAEP 1998 reading report card for the nation and states (NCES 1999-500). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

National Center for Education Statistics. (2005). The nation’s report card: Fourth-grade students reading aloud: NAEP 2002 special study of oral reading (NCES 2006-469). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

National Center for Education Statistics. (2006). The nation’s report card: Reading 2005 (NCES 2006-451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

National Center for Education Statistics. (2014). The nation’s report card: A first look: 2013 mathematics and reading. National Assessment of Educational Progress at grades 4 and 8 (NCES 2014-451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts and literacy in history/social studies, science, and technical subjects with Appendices A-C. Washington, DC: Authors.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2012). Supplemental information for Appendix A of the Common Core State Standards for English language arts and literacy: New research on text complexity. Washington, DC: Author. Retrieved from http://www.corestandards.org/resources

National Institute of Child Health and Human Development (NICHD). (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.

Nunnery, J.A., Ross, S.M., & McDonald, A. (2006). A randomized experimental evaluation of the impact of Accelerated Reader/Reading Renaissance implementation on reading achievement in grades 3 to 6. Journal of Education for Students Placed at Risk, 11(1), 1-18.

Pinnell, G.S., Pikulski, J.J., Wixson, K.K., Campbell, J.R., Gough, P.B., & Beatty, A.S. (1995). Listening to children read aloud: Data from NAEP’s integrated reading performance record (IRPR) at grade 4 (Rep. No. 23-FR-04). Washington, DC: Office of Educational Research and Improvement, U.S. Department of Education.

Rasinski, T., Samuels, S.J., Hiebert, E., Petscher, Y., & Feller, K. (2011). The relationship between a silent reading fluency instructional protocol on students’ reading comprehension and achievement in an urban school setting. Reading Psychology, 32(1), 75-97.

Reutzel, D.R., Fawson, P.C., & Smith, J.A. (2008). Reconsidering silent sustained reading: An exploratory study of scaffolded silent reading. The Journal of Educational Research, 102(1), 37-50.

Reutzel, D.R., Petscher, Y., & Spichtig, A.N. (2012). Exploring the value added of a guided, silent reading intervention: Effects on struggling third-grade readers’ achievement. The Journal of Educational Research, 105(6), 404-415.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360-407.

Taylor, B.M., Frye, B.J., & Maruyama, G.M. (1990). Time spent reading and reading growth. American Educational Research Journal, 27(2), 351-362.

Wixson, K.K. (April 24, 2013). Key shifts in assessment and instruction related to CCSS-ELA [webinar]. TextProject. Retrieved from http://www.youtube.com/watch?v=IHYcJAX0AO8
CHAPTER 2
Eye Movements Make Reading Possible
S. Jay Samuels
University of Minnesota-Twin Cities
Elfrieda H. Hiebert
TextProject & University of California, Santa Cruz
Timothy Rasinski
Kent State University
The ability to read and understand printed words represents a
remarkable human accomplishment. Although the ability to
communicate through the spoken word seems to have been
genetically hardwired into our species over the eons of time it has taken
our species to develop (an estimated 5-8 million years), the skill of reading
has been with us for only about 7,000 years. Because of the huge time
difference between the development of language by ear and language
by eye, there appear to be some design flaws in the human eye that must
be overcome before reading can occur. In essence, as remarkable an
instrument as the human eye is, it is not ideally constructed for reading.
An argument that we make in this chapter is that without eye movements,
reading alphabetic texts would not be possible.
A century ago, the study of eye movements was one of the hottest
topics in reading psychology. In the classic volume The Psychology and
Pedagogy of Reading, Huey (1908/1968) devotes two chapters to eye
movements. Despite this auspicious start, it is not the hot topic in reading
that it once was, as evidenced in Cassidy and Cassidy’s (2009) “What’s
Hot for 2009” survey in the United States. In this list, ocular-motor eye
movement is not listed as a topic to be rated by the experts. Because of
the critical role that eye movements play in the reading process, the topic
should be of interest to educational leaders at all levels who desire to see
improvements in reading achievement. This chapter on eye movements in
1 This chapter was previously published in Revisiting Silent Reading: New Directions for Teachers
and Researchers. The definitive publisher-authenticated version published in 2010 and in 2014 is available
online at: http://www.reading.org/general/Publications/Books.aspx & http://textproject.org/library/books/
revisiting-silent-reading/
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
A key idea in this chapter is that the human eye is not ideally
designed for reading. Although the eyes are designed to move to perceive
things, the typical perception pattern of a visual image differs from that
of a line of print. Consequently, the eyes need to learn to make particular
kinds of movements if proficient reading is to occur. Imagine that you are
trying to identify the person who is standing in front of you. As you look
at this person, all that is in focus is the person’s nose and eyes. The rest
is fuzzy, but you can detect shape. You rapidly shift your points of focus
to other parts, so that in time the various parts of the individual are in
focus. The difficulty in determining the identity of this person is somewhat
similar to the problem of recognizing words when reading a text.
The problem with the eye when reading is that at any given
moment only a tiny amount of printed material from a page is in enough
focus to enable easy reading. Consequently, rapid eye movements are
required to bring different parts of a text onto that tiny area on the retina
that can see the letters and words clearly—the fovea. The retina of the eye
Legge et al.’s (2007) research suggests that only six or seven letters
surrounding the fixation point on the fovea can be identified with 80%
accuracy and, as the eye moves farther away from the fixation point,
The Fixation Pause
Eye fixations in reading are critical because it is during a fixation
that the eye takes in information from the printed page and begins to
process it for meaning. The duration of the typical fixation pause is about
300 ms, which is about one third of a second (pauses can be as short as 100
ms or as long as 500 ms, which is 1/10 to 1/2 of a second). It is assumed
that during longer fixations considerable cognitive processing is going
on, such as attempting to grasp the meaning of a sentence or integrating
information across several sentences. While the word fixation implies
that the eye is motionless, this is not the case. There is a slight eye tremor
that serves to activate the neurons in the retina so they will continue firing
(Gilbert, 1959). Taking into account the brief amount of time it takes to
make a forward saccade in which the eye moves from one fixation pause
to the next, in a single second the eye can make approximately three
fixations. When viewing a scene or a page of printed material, the typical
person seems to be unaware that the information being processed by the
brain has been coming in at a rate of three bursts a second and that each
burst must be processed rapidly, because the visual image coming with
each burst survives for less than a second and then it is lost. If, however, the
processing is too slow and the visual image disappears from the retina, all
is not lost. The reader can refixate the original image. The term eye fixation
pause represents the time spent on a single fixation, whereas the term gaze
duration suggests the total amount of time the reader spends on a word
across several eye fixations.
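The "three fixations per second" figure follows directly from the durations given above. The sketch below works through that arithmetic; the one-word-per-fixation figure used to turn fixations into a reading rate is our assumption, not a claim from the chapter.

```python
# Back-of-the-envelope check of the fixation arithmetic: a typical
# 300 ms fixation pause plus a ~50 ms (1/20th of a second) saccade.
FIXATION_MS = 300
SACCADE_MS = 50

def fixations_per_second(fixation_ms=FIXATION_MS, saccade_ms=SACCADE_MS):
    """How many fixation + saccade cycles fit into one second."""
    return 1000 / (fixation_ms + saccade_ms)

def estimated_wpm(words_per_fixation=1.0):
    # fixations/sec * words per fixation * 60 sec/min
    return fixations_per_second() * words_per_fixation * 60

print(round(fixations_per_second(), 1))  # about 2.9 fixations per second
print(round(estimated_wpm()))            # about 171 words per minute
```

The result, roughly three fixations a second and a rate in the low 170s wpm, is consistent with the silent reading rates for upper-elementary readers reported elsewhere in this volume.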
Because of the rapid loss of the visual image from a fixated word or
word part, what the reader must do is transform the visual image into its
sound representation. For example, when the reader encounters the printed
word cat, it is transformed into its phonological form /c-a-t/ and then
placed in short-term memory.
The advantage gained by transferring visual into phonological
information and placing the phonological information in short-term
memory is that the shelf life of the acoustic information in short-term
memory is about 10 seconds, which is considerably longer than the
duration of visual information in iconic memory, which is less than 1
second (Peterson & Peterson, 1959). For the acoustic information that
is in short-term memory, 10 seconds is usually sufficient time (in most
Forward Saccades
When reading English, forward saccades are characterized
by left-to-right eye movements. During an eye movement, vision is
suppressed because the movement is so fast that the brain cannot process
the information. Moving the eye from fixation to fixation requires only
1/20th of a second. The distance the eye moves in each forward saccade
ranges between 1 and 20 letter spaces, with the average being 4-5 letter
spaces—the length of a shorter word. It would appear, then, that for skilled
readers, for whom the unit of word recognition is the word, the eye jumps
from word to word. For skilled readers, what controls the distance the eye
jumps with each saccade are the rod cells, which are sensitive to the spaces
that mark word boundaries. Ideally, the saccade would place the image of
the word so that the letters
2 Reference to the next section refers to materials in Revisiting Silent Reading: New Directions
for Teachers and Researchers.
Cassidy, J., & Cassidy, D. (2009, February/March). What’s hot for 2009. Reading Today, 26(4), 1, 8-9.

Cattell, J. (1947). Man of science: Psychological research. Lancaster, PA: Science.

Deno, S.L. (1986). Formative evaluation of individual student programs: A new role for school psychologists. School Psychology Review, 15(3), 358-374.

Dodge, R. (1900). Visual perceptions during eye movement. Psychological Review, 7(5), 454-465. doi:10.1037/h0067215

Ehri, L.C., & Sweet, J. (1991). Fingerpoint-reading of memorized text: What enables beginners to process the print? Reading Research Quarterly, 26(4), 442-462. doi:10.2307/747897

Feinberg, R. (1949). A study of some aspects of peripheral visual acuity. American Journal of Optometry and Archives of American Academy of Optometry, 26(2), 49-56.

Gaur, A. (1992). A history of writing. New York: Cross River.

Gelzer, A., & Santore, N.J. (1968). A comparison of various reading improvement approaches. The Journal of Educational Research, 61(6), 267-272.

Germane, C.E., & Germane, E.G. (1922).

Gough, P.B. (1971). One second of reading. In J.F. Kavanagh & I.G. Mattingly (Eds.), Language by ear and by eye: The relationship between speech and reading (pp. 331-358). Cambridge, MA: MIT Press.

Hebb, D.O. (1930). Organization of behavior: A neuropsychological theory. New York: John Wiley & Sons.

Hochberg, J. (1970). Components of literacy: Speculations and exploratory research. In H. Levin & J.P. Williams (Eds.), Basic studies on reading (pp. 74-89). New York: Basic.

Huey, E. (1968). The psychology and pedagogy of reading. Cambridge, MA: MIT Press. (Original work published 1908)

Ikeda, M., & Saida, S. (1978). Span of recognition in reading. Vision Research, 18(1), 83-88. doi:10.1016/0042-6989(78)90080-9

Javal, L.E. (1879). Essai sur la physiologie de la lecture [Essay on the physiology of reading]. Annales d’Oculistique, 82, 242-253.

Just, M.A., & Carpenter, P.A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329-354. doi:10.1037/0033-295X.87.4.329

Kaakinen, J.K., & Hyönä, J. (2008).
McConkie, G.W., & Rayner, K. (1976). Asymmetry of the perceptual span in reading. Bulletin of the Psychonomic Society, 8, 365-368.

Paulson, E., & Goodman, K. (1999). Eye movements and miscue analysis: What do the eyes do when a reader makes a miscue? Southern Arizona Review, 1, 55-62.

Peterson, L.R., & Peterson, M.J. (1959). Short-term retention of individual verbal items. Journal of Experimental Psychology, 58(3), 193-198. doi:10.1037/h0049234

Pikulski, J.J., & Shanahan, T. (1982). Informal reading inventories: A critical analysis. In J.J. Pikulski & T. Shanahan (Eds.), Approaches to the informal

Samuels, S.J., LaBerge, D., & Bremer, C.D. (1978). Units of word recognition: Evidence for developmental changes. Journal of Verbal Learning and Verbal Behavior, 17(6), 715-720. doi:10.1016/S0022-5371(78)90433-4

Saxe, J.G. (1873). The poems of John Godfrey Saxe (Complete ed.). Boston: James R. Osgood.

Shiffrin, R.M., & Schneider, W. (1977). Controlled and automatic human information processing. Psychological Review, 84(2), 127-190. doi:10.1037/0033-295X.84.2.127

Sipel, B., & van den Broek, P. (2009). Where readers look when reading recently learned and unknown words. Unpublished
Elfrieda H. Hiebert
TextProject & University of California, Santa Cruz
After a recent presentation by one of the authors (Kathleen), a
teacher asked, “My students act like they are reading when reading
silently, but how do I know if they are really reading?” This
teacher’s question reflects a concern of many teachers. Recently, however,
teachers have not been the only ones asking questions about the efficacy
of silent reading. As a result of the conclusions of the National Reading
Panel (NRP; National Institute of Child Health and Human Development,
2000) that sustained silent reading has not proven particularly effective in
increasing fluency and comprehension, policymakers and administrators
have raised questions about the effectiveness of silent reading during
instructional time. The NRP’s conclusions regarding the efficacy of
oral, guided repeated reading have meant an emphasis on oral reading
experiences in the primary grades as evident in classroom observations
(Brenner, Hiebert, & Tompkins, 2009) and in textbook programs (Brenner
& Hiebert, 2010). At the same time, the Panel’s conclusions regarding
the lack of substantive empirical literature that confirms the efficacy of
independent, silent reading experiences on comprehension have meant, at
least in the primary grades, a deemphasis on silent reading (Brenner et al.,
2009).
Ultimately, however, most of the reading that adults, adolescents,
and even middle- and upper elementary-grade students do is silent.
1 This chapter was previously published in Revisiting Silent Reading: New Directions for Teachers
and Researchers. The definitive publisher-authenticated version published in 2010 and in 2014 is available
online at: http://www.reading.org/general/Publications/Books.aspx & http://textproject.org/library/books/
revisiting-silent-reading/
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
                              Grade
                              1    2    3    4    5    6    7    8    9   10   11   12  College
Silent reading rates
(Taylor et al., 1960)
  50th percentile            80  115  138  158  173  185  195  204  214  224  237  250  280
Oral reading rates
(Hasbrouck & Tindal, 2006)
  25th percentile            23   65   87   92  100  122  123  124   NA
  50th percentile            54   94  114  118  128  150  150  151   NA
  75th percentile            82  117  137  153  168  177  177  177   NA
Method
Eighty-three students from five fourth-grade classrooms in
a Midwestern, urban school district participated in the study. The
participants were 65% Caucasian, 13% African American, 12% Asian
American, and 9% Hispanic. More than 60% of the students in the schools
receive free- or reduced-cost lunch. Participants included 15% English
Learners and 13% special education students (i.e., those with speech-
language disorders or specific learning disabilities).
We wrote two comparable sets of informational texts, each
containing 1,000 words. Each set consisted of five texts connected by
a common theme. The content of both themes came from a similar
domain—communication. The underlying theme of one set of texts had to
do with the role of posters in the past and present (e.g., posters as a source
of information and announcements before the printing press). The theme
of the second set was on nonverbal language (e.g., military hand signals,
Braille).
Texts were created over numerous iterations to ensure that the
two sets were as comparable as possible on several measures. The first
was sentence length. As the readability levels for the Flesch-Kincaid and
Fry indicate in Table 3.2, texts were comparable on that dimension. A
second consideration in the creation of the texts was the comparability of
vocabulary. Data on the distribution of words in word zones established by
frequency of appearance in written English (Hiebert, 2005) indicate that
the distribution of words that were highly frequent (i.e., Word Zones 0-2),
moderately frequent (Word Zones 3-4), and rare (Word Zones 5-6) was
comparable across the two sets of texts.
The readability levels on both the Flesch-Kincaid and Fry suggest
that the texts were approximately 1.5-2.5 grade levels above mid-
fourth-grade (the grade-level placement of students in the study). This
difficulty level, however, is an artifact of a feature of readability formulas
that has long been recognized as inflating the difficulty of informational
texts (Cohen & Steinberg, 1983). This feature is that each appearance of a
word counts in the establishment of readability with formulas such as the
Flesch-Kincaid or Fry. In informational texts, rare (and often multisyllabic)
words are repeated frequently when they are central to the content. Thus,
informational texts typically are assigned high readability levels.
Table 3.2. Features of Texts Used in Study

Feature                      Text A (Posters)    Text B (Nonverbal Language)
Number of words              1,000               1,000
Flesch-Kincaid readability   6.1                 5.9
Fry readability              7                   7
Unique words:
  Word Zones 0-2             85%                 83%
  Word Zones 3-4             13%                 16%
  Word Zones 5-6             1.5%                1%
Type-token ratio             0.28                0.28
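The two text-comparison measures above can be made concrete with a small sketch: the standard Flesch-Kincaid grade-level formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59) and a type-token ratio. The vowel-run syllable counter is a rough approximation of ours; the study's exact tooling is not described.

```python
import re

def syllables(word):
    """Very rough syllable count: runs of vowel letters (an approximation)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Standard Flesch-Kincaid grade-level formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syl / len(words) - 15.59

def type_token_ratio(text):
    """Unique words divided by total words."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return len(set(words)) / len(words)
```

Because the grade estimate depends only on average sentence length and syllables per word, every repetition of a central multisyllabic word such as "communication" pushes the estimate upward, which is exactly the inflation effect noted above for informational texts.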
Results
Outlier analysis showed that there was a group of students with
extremely high reading rates and very low comprehension performances.
The performances of the outlier students can be seen in Figure 3.1. The
observers who had been present during the task administration to ensure
students’ ease with the computer interface confirmed that particular
students appeared to move rapidly through the task. As a result of this
analysis, the data used in the subsequent analyses were limited to 65
students.
Descriptive statistics that appear in Table 3.3 indicate that silent
reading rates were precisely the same on the two different sets of passages.
This silent reading rate of approximately 154 wpm is similar to the average
of 158 wpm reported by Taylor et al. (1960) for fourth graders almost 50
years ago. Comprehension performances were slightly lower on the posters
text than on the nonverbal language text.
A repeated-measures ANOVA was used to compare performances
in the paper-and-pencil and computer administrations. For reading
comprehension, there were no significant differences: F(1, 77) = 1.19, p =
0.28, MSE = 6.32. For silent reading rate, there was a significant effect for
mode of presentation: F(1, 61) = 5.43, p = 0.02, MSE = 873. This difference
was not large, but the context in which the slightly faster rate occurred,
the computer context, is of interest, as is evident in Figure 3.1. Further,
the lack of significant differences in comprehension indicates that this
somewhat higher rate did not compromise comprehension.
The next set of analyses considered differences across quartile
groups. Quartile groups were established on the basis of comprehension
scores. Repeated-measures ANOVA revealed that rates for different
comprehension quartiles were significantly different overall: F(3, 72) = 2.7,
p = 0.05, MSE = 210,035.
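Forming quartile groups from comprehension scores can be sketched as follows; the cut-point method (exclusive sample quantiles) and the example scores are assumptions for illustration, not the study's exact procedure:

```python
from statistics import quantiles

def quartile_group(score, scores) -> int:
    """Assign a score to quartile 1 (lowest) through 4 (highest) using
    the sample's quartile cut points."""
    q1, q2, q3 = quantiles(scores, n=4)  # three cut points
    if score <= q1:
        return 1
    if score <= q2:
        return 2
    if score <= q3:
        return 3
    return 4

comprehension = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
groups = [quartile_group(s, comprehension) for s in comprehension]
# three scores fall in each quartile group
```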
The interpretation of rates by different groups is difficult because of
different patterns of performance by the quartile groups on different parts
of the texts. These patterns are provided for the first text (Posters) in Figure
3.2. For the first section of the assessment, the highest quartile performed
approximately 30 wpm faster than the other three quartiles. The rates
of Quartiles 1 and 2 were slightly lower than those of Quartile 3 on the
first section of the text, but not substantially so.
Table 3.3. Descriptive Statistics for Comprehension and Silent Reading Rate for Texts

Measure                                               Mean    SD
Corrected comprehension score, Text A (posters)       6.3     4.1
Corrected comprehension score, Text B (nonverbal
  language)                                           7.9     3.7
Silent reading rate, Text A                           153.5   63
Silent reading rate, Text B                           153.5   60
Brown, A.L., & Smiley, S.S. (1978). The development of strategies for studying texts. Child Development, 49(4), 1076-1088.

Brozo, W.G., & Johns, J.L. (1986). A content and critical analysis of 40 speed reading books. Journal of Reading, 30(3), 242-247.

Carver, R.P. (1990). Reading rate: A review of research and theory. San Diego: Academic.

Carver, R.P. (1992). Reading rate: Theory, research, and practical implications. Journal of Reading, 36(2), 84-95.

Cassidy, J., & Cassidy, D. (2009). What's hot for 2009: National Reading Panel influence wanes in 13th annual survey. Reading Today, 26(4), 1, 8-9.

Cohen, S.A., & Steinberg, J.E. (1983). Effects of three types of vocabulary on readability of intermediate grade science textbooks: An application of

Duke, N.K., & Pearson, P.D. (2002). Effective practices for developing reading comprehension. In A.E. Farstrup & S.J. Samuels (Eds.), What research has to say about reading instruction (3rd ed., pp. 205-242). Newark, DE: International Reading Association.

Frank, S.D. (1992). Remember everything you read: The Evelyn Wood 7-day speed reading and learning program. New York: Avon.

Fuchs, L.S., Fuchs, D., Hosp, M.K., & Jenkins, J.R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256. doi:10.1207/S1532799XSSR0503_3

Good, R.H., & Kaminski, R.A. (1996). DIBELS: Dynamic Indicators of Basic Literacy Skills. Longmont, CO: Sopris West.

Graves, M.F. (2006). The vocabulary
CHAPTER 4
Stretching students in text? What does that mean? Put them on a rack?
A third-grade teacher mischievously made the comment at a recent
professional development workshop. I had to bite my tongue because,
in truth, I find the phrase a little odd myself. I know that I certainly never
used the term "stretch text" when I thought about challenging students
with reading materials before the Common Core State Standards for the
English Language Arts in History/Social Studies, Science, and Technical
Subjects (CCSS) were established (National Governors Association Center
for Best Practices [NGA Center] & Council of Chief State School Officers
[CCSSO], 2010a). Instead, like many other teachers, I might have spoken of
an instructional-level text but never a stretch text.
So where did this term come from? What does scholarship say
about how to stretch, or challenge, students in text? This chapter focuses
on these very questions. The chapter begins with a discussion of the
meaning of complex text, both how the CCSS define it and how it
is defined from other perspectives. The second section discusses what is
meant by stretch text in elementary school and how the introduction of
the stretch notion will influence reader–text matching paradigms. The
brief third section presents a series of rationales, both good and bad, used
to bolster arguments to stretch students in text. Finally, the last section
is an extended discussion of the factors that may contribute to or inhibit
students' being stretched in text. Each section pays attention to the gaps in
the literature and the type of information needed for students to reach the
high aspirations that the CCSS introduce.
Mesmer 79
challenge teachers, publishers, and researchers to think more carefully
about students' reading materials. According to the CCSS, text complexity
is "the inherent difficulty of reading and comprehending a text combined
with consideration of reader and task variables" (NGA Center & CCSSO,
2010b, p. 43). Thus, the CCSS use the term "complexity" interchangeably
with "difficulty" (a point with which I differ later in this section).
In some respects, the CCSS definition of complex text comes into
better focus by reviewing the three-part assessment of text complexity
articulated in Appendix A (NGA Center & CCSSO, 2010b).
This model illustrates the elements of text and of the reader–text match
that the CCSS conceptualize as making a text difficult. The tripartite model
addresses qualitative tools, reader and task variables, and quantitative tools
that capture the complexity of a text for an individual student.
Through qualitative means, a discerning and experienced human
reader applies professional judgment to evaluate a text in order to estimate
its complexity for target readers. According to the CCSS, the text features
best evaluated using human judgment include:
• Levels of meaning in literary texts and levels of purpose in
informational texts
• Text structures (e.g., simple, well-marked structures vs. implicit and
layered structures)
• Language conventionality and clarity (e.g., literal, clear language vs.
figurative, academic, or domain-specific vocabulary)
• Knowledge demands (e.g., level of knowledge assumed by the text)
The qualitative leg of the CCSS tripod, while theoretically
interesting, has not been reliably established by research.
The second leg of text complexity relates to reader and task factors,
elements generally not inherent to the text itself. (From my perspective,
these are part of the reader–text match but not really an assessment of text
complexity.) Appearing to draw from the reader–text–task model found in
the RAND report, the CCSS remind the field that reader variables, such as
motivation, background knowledge, and experiences, will all render a text
more or less difficult to a group of readers (RAND Reading Study Group,
2002). Additionally, the CCSS address task variables, including purpose,
assignment requirements, and teacher levels of expectation, reminding the
reader that the analysis of text complexity as it relates to reader and task is
best done by teachers.
The third leg of the text complexity assessment is the one with
the most validation and reliability and the longest history, as it uses the
quantitative systems of readability formulas (Harrison, 1980; Mesmer,
In the CCSS, kindergarten through first grade are not
assigned a text difficulty range, but a default level is set by the entering
value for the band for second to third grade. First-grade children must
reach the minimal level at the bottom of that default entry (420L) by the
end of first grade. Note that the levels of text complexity expected at
various grades are somewhat accelerated. While schools would typically
expect students at the end of the third-grade year to read at a fourth-grade
level, the CCSS staircase sets that level at about fifth or sixth grade.
Lest anyone think the staircase is merely suggestive, the language
in English Language Arts Standard 10 indicates otherwise. The phrasing
within Standard 10 for any grade level indicates that the text ranges are not
loose guidelines but concrete expectations. For instance, the language in
the grade 3 informational text standard reads, "Comprehend informational
texts…at the high end of the 2-3 band independently and proficiently"
(NGA Center & CCSSO, 2010a, p. 12, emphasis mine). Thus, although the
CCSS offer three ways to assess text complexity, the quantitative tools are
the most specific and the most translatable into classrooms.
Treating the terms "text complexity" and "text difficulty" as
interchangeable, however, as done in the CCSS, confuses causes with
effects. Mesmer, Cunningham, and Hiebert (2012) distinguished between
text complexity and text difficulty. Text complexity is simply the naturally
occurring textual elements in a passage or book that can be analyzed,
manipulated, or otherwise studied, and, as such, is an independent
variable. Text difficulty, on the other hand, is not a feature intrinsic to the
text but a numeric expression of a relationship between text and readers.
As Mesmer et al. stated, "The difficulty of a
text or text feature always implies a dependent or criterion variable: the
actual or predicted performance of multiple readers on a task based on that
text or feature" (p. 236).
Text difficulty estimates, such as those created by readability
formulas, connect the complexity of a text (e.g., word frequency and
sentence length) to reader performance (i.e., readers' comprehension of a
text) or predicted performance (e.g., a formula's estimate of difficulty or a
teacher's estimate of difficulty). Therefore, the estimate of text difficulty is
only as good as the relationship upon which the estimate is based, whereas
the complexity of a text is simply what is there. If we are to "stretch"
students in text, then, we must depend on the very best estimates of text
difficulty that exist, and we must better understand the impact of various
text complexity features on readers.
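The distinction can be made concrete in code: complexity features are measurable properties of the text alone, before any reader enters the picture. The sketch below computes two such features; the feature set, the long-word proxy for word frequency, and the sample sentences are illustrative assumptions:

```python
import re

def complexity_features(text: str) -> dict:
    """Measure two text-complexity features of the kind readability
    formulas combine: average sentence length and the share of long
    words (a rough stand-in for word frequency). These describe the
    text itself; difficulty is a separate, reader-dependent estimate."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_words = [w for w in words if len(w) >= 7]
    return {
        "mean_sentence_length": len(words) / len(sentences),
        "long_word_share": len(long_words) / len(words),
    }

feats = complexity_features("Elephants have ivory tusks. Ivory is valuable.")
# 7 words over 2 sentences; 2 of 7 words have 7+ letters
```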
texts," they would likely identify the instructional-level text as such.
Inadvertently, Betts's labels and reader–text matching standards
may have shaped the views of many text researchers and teachers. However,
while these boundaries for text difficulty have remained the essential
guidance through the present day, many questions have been asked across
the years about their empirical basis (Clay, 1985; Ehri, Dreyer, Flugman, &
Gross, 2007; Ekwall, Solis, & Solis, 1973; Halladay, 2012; Morgan, Wilcox,
& Eldredge, 2000; Pikulski & Shanahan, 1982; Stahl & Heubach, 2005). The
intense concern about avoiding frustration may not have been balanced
with the equally important message to encourage challenge and avoid
stagnation. It is indeed possible to build capacity for readers to handle
more difficult passages. Although the text complexity staircase introduces
many valid concerns, the theme of Standard 10, to embrace challenge,
is a message long overdue.
Unfortunately, just as the reader–text standards of the previous
decades lacked an empirical basis, so too does the stretch paradigm. We
simply do not have an empirically based paradigm for how to challenge
students in texts. We do not know exactly how far students can be
pushed before they break, reaching the point where reading becomes
incomprehensible and cognitively, psychologically, or emotionally
exasperating. We do not know which text features can be ramped up and
which must only be gently accelerated. We do not know at which points
students can be stretched developmentally and within which contexts. Of
course, all of this raises the question of why the text complexity standard
and surrounding verbiage were introduced to begin with. What exactly has
happened to cause standards writers to be concerned about the levels of
texts at which students are reading?
especially at the elementary levels, may not be strong, but the guidelines
and recommendations regarding challenging texts at all grade levels
promise to have important consequences for teachers and students.
The stretch text levels were one standard deviation above the targeted
on-level designation. We also separated students into two proficiency
groups: those whose targeted levels were on or above grade level, and those
whose targeted level was below the grade-level range. Below-level readers
were defined as those reading below 450L because the CCSS define the
range of text difficulty for the second-to-third-grade band as 450L to 790L
(NGA Center & CCSSO, 2010a). Students reading at or above 450L were
designated as on-level readers.
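The grouping rule just described can be written directly. The 450L threshold and 450L-790L band are the figures cited in this chapter; the function names and the standard-deviation value passed as a parameter are illustrative:

```python
GRADE_2_3_BAND = (450, 790)  # CCSS Lexile range for the grades 2-3 band

def reader_level(target_lexile: int) -> str:
    """Classify a reader as 'below' or 'on/above' level using the bottom
    of the grades 2-3 band (450L) as the threshold."""
    return "below" if target_lexile < GRADE_2_3_BAND[0] else "on/above"

def stretch_level(target_lexile: int, sd_lexile: int) -> int:
    """Stretch text level: one standard deviation above the reader's
    targeted on-level designation."""
    return target_lexile + sd_lexile
```

For example, `reader_level(430)` yields `"below"`, while a reader targeted at 500L with a 76L standard deviation would be assigned stretch texts at about 576L.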
Means for comprehension are provided in Table 4.2 by reader level
(below vs. on or above level). There were main effects for text difficulty
and reader level (F(3, 9,531) = 207.34, p < .001, and F(2, 9,532) = 10.55,
p < .001, respectively). The reader-by-text-difficulty interaction also was
significant (F(3, 9,532) = 15.03, p < .001). Pairwise comparisons were
significant at the .001 level for all text and reader combinations except for
the difficult texts. On average, students achieved 61% reading comprehension
in stretch texts that averaged 76L above their target levels. Below-level readers
comprehended at a lower level than did on-level readers at all text levels
except in the difficult texts, where all readers comprehended at about 53%.
Across reader levels, performance declined as text difficulty increased,
with comprehension levels dipping below 70% in the stretch texts.
Table 4.2¹: Comprehension of Below-Level vs. On- or Above-Level Third-Grade Students on Texts of Different Levels

Text Level (Lexile range            Reader Level   Comprehension: M (SD)
relative to reader proficiency)
Easy (101L to 250L below)           Below          80.65 (14.83)
                                    On/above       84.23 (15.75)
On-Level (100L below to 50L above)  Below          66.25 (19.56)
                                    On/above       71.88 (17.12)
Stretch (51L to 100L above)         Below          58.76 (19.81)
                                    On/above       63.95 (17.35)
Difficult (101L to 250L above)      Below          53.63 (19.77)
                                    On/above       53.70 (18.01)

¹ From Mesmer and Hiebert (2011); used with permission of the authors.
[Figure: Mean comprehension (%) of below-level and on/above-level readers on easy, on-level, stretch, and difficult texts.]
What happened was that below-level readers, even in on-level texts, were
still not performing well; in a sense, therefore, an on-level text was a
stretch text for them. These preliminary results suggested that students'
reading proficiencies determined the upper limits of their performances.
In particular, students who had been designated as performing below
grade level were not able to rise to the occasion to the same degree as
students designated as at or above grade level.
Both Hiebert (2008) and Mesmer (2008) write about how
readability formulas can artificially inflate the difficulty of expository
texts due to the repetition of infrequent words. When readability formulas
are used, each infrequent word is counted as an occurrence of a "hard"
word, whether or not it is repeated elsewhere in the passage. Thus, in the
expository passage example in Table 4.3, the word ivory would be counted
as a difficult word each time it occurred, despite the fact that repetition
of the word actually provides the student with support and practice. This
artifact of the formulas should especially prompt teachers to carefully review
expository texts before completely trusting the estimates delivered by the
formulas.
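The inflation can be demonstrated by contrasting per-token with per-type counting of "hard" words. In the sketch below, the tiny easy-word list stands in for a formula's frequency list; the passage and word list are illustrative assumptions:

```python
import re

def hard_word_counts(text, easy_words):
    """Contrast two ways of counting 'hard' words: per token (every
    occurrence, as classic readability formulas do) vs. per type
    (each unique word once)."""
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    hard_tokens = [w for w in tokens if w not in easy_words]
    return len(hard_tokens), len(set(hard_tokens))

easy = {"the", "is", "from", "of", "a", "an", "and", "comes", "tusk"}
passage = "Ivory comes from the tusk. Ivory is hard. Ivory is valuable."
per_token, per_type = hard_word_counts(passage, easy)
# per_token counts 'ivory' three times (plus 'hard' and 'valuable');
# per_type registers each hard word once
```

A formula counting per token treats ivory as three hard words; counting per type would register it once, reflecting the support that repetition gives a reader.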
As Mesmer et al. (2012) concluded in their review, a great deal
more research must be conducted to understand exactly how genre
operates within text complexity models, and this is true of models for
stretching or challenging students as well. Genre may be best represented
by multivariate approaches that characterize the many text features that
the label represents. In addition, the text features that present challenges
in each genre may apply differentially to various outcomes. For instance,
prior knowledge may operate more in the expository format than in the
narrative format. Clearly, a second generation of research is needed to
move the typical diet of text in elementary classrooms beyond simply
including various genres to challenge students appropriately.
¹ Boldfaced words illustrate repetition of words across sentences within texts.
Conclusions: Programmatically Addressing Challenge
The Common Core text complexity standard and overall focus on
challenging text (NGA Center & CCSSO, 2010a), along with the need for
students to "stretch their reading abilities," as outlined in Appendix A to the
standards (NGA Center & CCSSO, 2010b), have introduced a major shift
in reader–text matching paradigms that promises to balance the intense
focus on the avoidance of frustration with the importance of challenge.
Nonetheless, this introduction raises some important issues. Shanahan
(2011) expressed the following:

We have tended to overgeneralize from younger readers (for whom easier
text allows a more systematic focus on decoding) to older readers (who may
do better with more intellectually challenging texts). Now, I fear that the
Common Core is over-generalizing in the other direction. Harder beginning
reading books may stop many young readers in their tracks. (p. 21)
I have this same fear, especially in light of the fact that the rationale
for increasing text difficulty is based on studies of secondary students.
When carefully examined, patterns in the data that are frequently cited
to support claims of textbook simplification do not actually hold true for
elementary students (Chall, 1977; Hayes et al., 1996).
Existing research is scant and simply not sufficient to support the
increases in text levels required in the elementary grades by the CCSS's
staircase of text complexity. There are not enough empirical data to suggest
exactly how students should be stretched in text; however, in this piece I
identify text and other factors that may be considered in future research.
In the past, classroom reactions to inappropriate text standards have been
extreme. Either teachers (or, more likely, district supervisors) knuckle
down and insist that every student in a given grade read texts of a certain
level, or teachers abandon ship altogether and default to reading aloud
anything that might be considered challenging. But I caution schools and
teachers to resist what I call the "read-aloud solution"; instead, a blend of
scaffolded challenge reading with some reading aloud should characterize
stretching students in the elementary school.
At a basic level, teachers must know the reading levels of their
students and have estimates of the difficulty of the texts they wish to use.
Although this is a basic tenet of reader–text matching, the obvious is
frequently overlooked. While the reader–text matching standards of Betts
(1946) should indeed be questioned, I caution educators to remember
that stretch text should not cause frustration. Stretch texts, whatever
the research ultimately decides they may be, should represent optimal
challenge, not gut-wrenching exasperation. Shanahan (2011) notes the
94 Stretching and Complex Text
opposite response to challenge that might occur: "When the books get
hard, the usual responses have been to move kids to easier books, to stop
using textbooks, or to read the texts to the students" (p. 20). How ironic
it would be if the text standards designed to challenge students were
actually to water down their exposure to challenging texts.
As identified in this chapter, additional factors that may affect
students' abilities to be stretched include text levels, text length, genre, and
cohesion. All of these are malleable factors that can be manipulated and
designed into text. In presenting a framework for texts in the early grades,
Mesmer et al. (2012) proposed four elements: content (e.g., words, concepts,
sentences, ideas, genre), the sequence in which the content is presented, the
pace of presentation, and the repetition of content. As researchers develop a
theory of challenge that contributes to the important notion of stretching
students, each of these elements of a text program must be addressed. A
paradigm for understanding how to stretch students in text must move
beyond an isolated, drive-by approach to a more consistent, programmatic
one. Stretching students cannot and should not be relegated only to a Friday
afternoon read-aloud and discussion. It must be infused into the text
choices made over weeks, months, and years. Certainly, the arguments put
forth for challenge in the Common Core suggest that it is the accumulated
effects of text that resulted in lower ACT scores or grades in college (ACT,
2006). The approach to stretching students in text, then, must also be
longitudinal, across days, weeks, months, and years. How text length,
difficulty, genre, cohesion, and text levels are balanced and introduced
across a unit of study or a developmental period will support or inhibit
fruitful "stretching." Focused and consistent efforts at presenting students
with challenging texts that stretch their capacity will ultimately have the
kinds of effects intended by the Common Core writers.
"Are we going to lower the fences or teach kids to climb?" asks
Shanahan (2011) in the title of a recent Reading Today article. The message
is important. For too long we have been overly concerned about the height
of the fences and not concerned enough about teaching kids to climb. I
think that stretching students in texts might be like adjusting the uneven
bars in the gym. When gymnasts are at a certain level in their training,
they are expected to mount the bars using a springboard or other device
to begin their routines. This means that the bar is typically above their
heads and several feet ahead of them. They must run, bounce on the
springboard, and reach for the bar to begin the routine. Sometimes they
fall on the dense 12-inch mats beneath them, but eventually they can
consistently make it. Throughout a meet or workout, you will see coaches
raise and lower the bars to accommodate different heights because, even
though the mount is challenging, and in fact over the heads of the
gymnasts, there are still limits placed on the gymnast by factors such
as height and arm length. No one expects the bar to be set the same for a
gymnast who is four feet, three inches tall as for a gymnast who
is four feet, eight inches tall. The same is true with stretching students.
We want them to leap and grab, but we should set the bar relative to their
characteristics. As argued in another piece, stretching students in text is a
dynamic activity that cannot be dictated by static text difficulty standards
(Mesmer & Hiebert, 2013). The duty of researchers is to continue to create
knowledge to support teachers as they work to develop stronger readers in
elementary school.
Clay, M.M. (1985). The early detection of reading difficulties (3rd ed.). Portsmouth, NH: Heinemann.

Coleman, D., & Pimentel, S. (2012). Revised publishers' criteria for the Common Core State Standards in English Language Arts and Literacy, Grades 3–12. Washington, DC: National Governors Association Center for Best Practices & Council of Chief State School Officers. Retrieved from http://www.corestandards.org/assets/Publishers_Criteria_for_3-12.pdf

Cunningham, J.W., Spadorcia, S.A., Erickson, K.A., Koppenhaver, D.A., Sturm, J.M., & Yoder, D.E. (2005). Investigating the instructional supportiveness of leveled texts. Reading Research Quarterly, 40(4), 410–427.

Ehri, L.C., Dreyer, L.G., Flugman, B.,

Halladay, J.L. (2012). Revisiting key assumptions of the reading level framework. The Reading Teacher, 66(1), 53–62.

Harrison, C. (1980). Readability in the classroom. Cambridge, UK: Cambridge University Press.

Hatcher, P.J. (2000). Predictors of reading recovery book levels. Journal of Research in Reading, 23(1), 67–77.

Hayes, D.P., Wolfer, L.T., & Wolfe, M.F. (1996). Sourcebook simplification and its relation to the decline in SAT-Verbal scores. American Educational Research Journal, 33(2), 489–508.

Hiebert, E.H. (2005). In pursuit of an effective, efficient vocabulary curriculum for the elementary grades. In E.H. Hiebert & M. Kamil (Eds.), The teaching and learning of vocabulary: Bringing scientific research to practice (pp. 243–263). Mahwah, NJ: LEA.

Hiebert, E.H. (2008). The word zone fluency curriculum: An alternative approach. In M.R. Kuhn & P.J. Schwanenflugel (Eds.), Fluency in the classroom (pp. 154–170). New York, NY: Guilford.

Hiebert, E.H., & Martin, L.A. (2015). Changes in the texts of reading instruction during the past fifty years. In P.D. Pearson & E.H. Hiebert (Eds.), Research-based practices for teaching Common Core literacy. New York, NY: Teachers College Press.

Hiebert, E.H., & Mesmer, H.A.E. (2013). Upping the ante of text complexity in the Common Core state standards: Examining its potential impact on young readers. Educational Researcher, 42(1), 44–51.

Hiebert, E.H., Wilson, K.M., & Trainin, G. (2010). Are students really reading in independent reading contexts? An examination of comprehension-based silent reading rate. In E.H. Hiebert & D.R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 151–167). Newark, DE: International Reading Association.

Martin, M.O., Mullis, I.V.S., & Kennedy, A.M. (Eds.). (2007). Progress in international reading literacy study (PIRLS): PIRLS 2006 technical report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center.

Mesmer, H.A.E. (2008). Tools for matching readers to texts: Research-based practices. New York, NY: Guilford.

Mesmer, H.A., Cunningham, J.W., & Hiebert, E.H. (2012). Toward a theoretical model of text complexity for the early grades: Learning from the past, anticipating the future. Reading Research Quarterly, 47(3), 235–258.

Mesmer, H.A.E., & Hiebert, E.H. (2011, April 9). An examination of the effects of discrepancies in reader-text match on comprehension. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Mesmer, H.A.E., & Hiebert, E.H. (2013). How far can third graders be "stretched"? Exploring the influence of text difficulty and length. Unpublished manuscript.

Morgan, A., Wilcox, B.R., & Eldredge, J.L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. The Journal of Educational Research, 94(2), 113–119.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010a). Common Core state standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Author. Retrieved from www.corestandards.org/assets/CCSSI_ELA%20Standards.pdf

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010b). Common Core state standards for English language arts & literacy in history/social studies, science, and technical subjects: Appendix A: Research supporting key elements of the standards and glossary of key terms. Washington, DC: Author. Retrieved from http://www.corestandards.org/assets/Appendix_A.pdf

Nelson, J., Perfetti, C., Liben, D., & Liben, M. (2012). Measures of text difficulty: Testing their predictive value for grade levels and student performance. Washington, DC: Council of Chief State School Officers.

Pikulski, J.J., & Shanahan, T. (1982). Informal reading inventories: A critical analysis. In J.J. Pikulski & T. Shanahan (Eds.), Approaches to the informal evaluation of reading (pp. 94–116). Newark, DE: International Reading Association.

RAND Reading Study Group. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND.

Roskos, K., & Neuman, S.B. (2013). Common Core, commonplaces, and
CHAPTER 5
The Relationship Between a Silent Reading Fluency Instructional Protocol on Students' Reading Comprehension and Achievement in an Urban School Setting¹
Timothy V. Rasinski
Kent State University
S. Jay Samuels
University of Minnesota
Elfrieda H. Hiebert
TextProject & University of California, Santa Cruz
Yaacov Petscher
Florida Center for Reading Research
Karen Feller
Reading Plus
Reading fluency has been defined as the ability to simultaneously
process written texts accurately, automatically, and with appropriate
prosody and comprehension (NICHD, 2000; Rasinski, 2006,
2010). Although it has been relatively neglected in reading curricula and
instruction for years (Allington, 1983; Rasinski & Zutell, 1996), recent
reviews of empirical research have identified reading fluency as a critical
element in successful literacy instruction (Chard, Vaughn, & Tyler, 2002;
Kuhn & Stahl, 2003; NICHD, 2000; Rasinski & Hoffman, 2003).
Chall's (1996) model of reading development posits reading fluency
as a task to be mastered in the primary grades, and indeed most research
on fluency to date has focused on the primary grades. For example, several

¹ This chapter was previously published in Reading Psychology, 32(1), 75–97. The definitive publisher-authenticated version was published in 2011: http://www.tandfonline.com/doi/abs/10.1080/02702710903346873#.VYLxEOf7LDE. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
Background
This study was conducted in cooperation with Miami-Dade County
(Florida) Public Schools to determine the relationship between student
participation in a silent reading instructional program and overall student
reading achievement in grades 4 through 10, as measured by the Florida
Comprehensive Assessment Test (FCAT), in selected schools in Regions II
and III of the Miami-Dade County Public Schools. The experimental
treatment employed in the study was Reading Plus (RP), a computer-based
reading fluency and comprehension intervention system that is designed to
develop silent reading fluency and overall reading proficiency.
Method
Participants
A total of 16,143 students from grades 4 through 10 in 23 schools
in Regions II and III of the Miami-Dade County Public School System
participated in the study; 5,758 students made up the treatment group,
while the remaining 10,385 students constituted the control group. Both
regions of the district had significant populations of minority students:
34% African American and 56% Latino American.
Subpopulations in the sample included the following:
• Learning disabled (LD; 6% of total; 541 participating and 491
nonparticipating)
• English language learners (ELLs; 3% of total; 176 participating and
286 nonparticipating)
The 23 schools were distributed across elementary (11) and middle
and high schools (12). In a number of schools, only those students who
scored achievement level 1 or 2 (nonproficient) on the 2006 Reading
portion of the FCAT were assigned to RP. In other schools, students
from specific grade levels or subpopulations were assigned. Most
nonparticipating students who engaged in alternative interventions
were assigned to Scholastic's Read 180 and/or Renaissance Learning's
Procedures
At the beginning of the 2006-2007 school year, teachers in the two
regions of the school district were trained in the intent and use of the RP
program and were guided in identifying appropriate students from their
classes to participate in the intervention. Implementation began soon after
and continued until administration of the 2007 FCAT in early March of
2007.
Prior to the implementation of the intervention, students completed
the Reading Placement Appraisal assessment to establish their initial
placement level in RP. The 20-minute placement test assessed independent
reading rate, comprehension, and vocabulary to determine the most
appropriate starting level. The placement assessment consisted of three
parts. Part I presented students with 100-word selections followed by a set
of literal-recall questions. Content difficulty was automatically adjusted
by the program according to a student's reading rate and comprehension
to ascertain the independent reading level. Part II presented 300-word
selections followed by a set of diverse comprehension questions to confirm
the independent reading level. Part III assessed a student's vocabulary.
From these, an instructional reading level was established, and students
were placed at appropriate levels within each component of the program.
Students continued to be assessed on similar tasks throughout the
program, with appropriate adjustments made to the level of activities as a
result of their performances on these formative assessments.
The RP intervention involved students in a series of lessons that
were provided on a digitized network platform in individual computer
environments. A specific sequence of activities was followed during the
lesson period, and the difficulty level of the activities was adjusted as a
function of a student's progress. Each RP lesson required approximately
30 minutes to complete. Treatment schedules varied within the 23 schools,
but most schools followed a schedule of either two 45-minute sessions per
Assessments
The FCAT was part of a statewide initiative to raise academic
standards for students in the state of Florida. The FCAT consisted of two
kinds of tests. The first was a criterion-referenced test (CRT), which
measured how well students were meeting the Sunshine State Standards in
reading, writing, mathematics, and science. The second was a
norm-referenced test (NRT), which permitted a comparison of Florida
students' performance in reading and mathematics with the performance of
students nationwide. The NRT used during the time of this study was the
Stanford Achievement Test–10 (SAT–10). The reading section evaluated
students' ability to understand
the meaning of informational and literary passages. Both portions of the
FCAT were administered to all students in grades 3 through 10, and results
were reported publicly in summary form. Pretesting occurred during the
spring 2006 administration of the FCAT. Posttesting occurred during the
spring 2007 administration of the FCAT.
Results
Data Analysis
A 3 x 7 x 3 x 2 x 2 (Group x Grade x Minority x ELL x LD) analysis
of variance (ANOVA) was used to test whether differences existed in the
simple difference score (posttest minus pretest) among the groups.
Table 5.2: Gain Scores on the FCAT Reading (CRT) Developmental Scale Scores and SAT–10 (NRT) for All Students
Measure Grade No Lessons 1–39 Lessons 40+ Lessons Contrast Effect Size
N M SD N M SD N M SD F p-value d1 d2 d3
CRT 4 529 158.75 224.69 461 162.18 220.81 340 181.42 200.85 2.05 0.160 0.02 0.09 0.10
5 449 71.43 216.53 393 78.37 200.07 364 117.46 209.16 9.32 0.006 0.03 0.20 0.21
6 1423 48.03 216.19 563 80.45 237.77 217 130.06 240.08 28.60 0.002 0.15 0.21 0.38
7 1256 46.27 199.35 508 109.19 212.73 307 157.78 212.40 88.25 0.002 0.32 0.23 0.56
8 1546 44.76 180.50 502 128.45 195.77 403 137.20 185.51 113.58 0.002 0.46 0.04 0.51
9 2803 66.48 190.09 406 84.31 202.17 328 107.23 203.30 14.85 0.002 0.09 0.11 0.21
10 2379 33.78 215.55 521 22.70 207.14 445 20.39 182.12 2.16 0.160 -0.05 -0.01 -0.06
NRT 4 528 5.05 26.17 459 7.04 24.07 337 11.74 21.82 14.76 0.002 0.08 0.19 0.26
5 445 13.60 22.10 391 20.33 25.03 360 21.19 23.40 21.94 0.002 0.30 0.03 0.34
6 1416 11.36 24.35 560 11.77 23.05 217 17.60 23.62 8.78 0.006 0.02 0.25 0.26
7 1239 5.06 23.21 497 5.64 22.60 303 9.22 22.37 6.66 0.024 0.02 0.16 0.18
8 1530 7.46 25.13 482 10.20 25.68 393 11.97 22.57 12.25 0.002 0.11 0.07 0.18
9 2719 13.06 28.12 383 7.12 31.47 324 14.17 27.01 0.86 0.363 -0.21 0.22 0.04
10 2267 0.45 29.29 465 6.60 28.15 415 8.24 24.16 35.95 0.002 0.21 0.06 0.27
Note. p-values reflect Linear Step-Up adjustments.
Table 5.3: Gain Scores on the FCAT Reading (CRT) Developmental Scale Scores and SAT–10 (NRT) for African American Students Receiving 40+ Lessons of
the RP Intervention Versus Students Receiving No RP Lessons
Measure Grade No Lessons 1–39 Lessons 40+ Lessons ANOVA Effect Size
N M SD N M SD N M SD F p-value d1 d2 d3
CRT 4 236 147.01 243.59 234 133.40 224.47 162 176.31 211.79 1.19 0.310 -0.06 0.19 0.12
5 158 69.93 229.94 193 60.77 194.51 235 90.20 204.82 1.14 0.310 -0.04 0.15 0.09
6 480 12.77 203.35 267 38.50 223.58 113 89.80 229.47 11.87 0.003 0.13 0.23 0.38
7 310 34.55 160.09 234 100.80 199.30 167 143.91 211.69 40.30 0.003 0.41 0.22 0.68
8 447 28.85 172.17 211 95.86 201.78 208 126.10 180.07 45.38 0.003 0.39 0.15 0.56
9 760 52.08 182.64 110 50.77 235.27 113 85.08 217.77 2.22 0.200 -0.01 0.15 0.18
10 465 16.62 227.29 195 13.69 221.68 226 -4.59 186.62 1.33 0.310 -0.01 -0.08 -0.09
NRT 4 236 7.47 27.75 232 4.78 24.69 161 13.39 21.77 3.96 0.092 -0.10 0.35 0.21
5 155 14.92 22.40 193 20.77 25.26 232 19.43 24.11 1.62 0.172 0.26 -0.05 0.20
6 475 10.46 24.18 266 9.25 21.70 113 17.12 22.76 3.69 0.093 -0.05 0.36 0.28
7 311 1.64 22.63 228 6.25 22.11 165 10.20 21.58 16.84 0.030 0.20 0.18 0.38
8 439 7.37 24.22 200 7.70 23.96 205 14.61 21.35 11.42 0.003 0.01 0.29 0.30
9 740 13.53 25.38 108 8.58 30.08 110 14.54 24.47 0.10 >.500 -0.19 0.20 0.04
10 436 1.56 27.89 170 10.72 27.63 210 8.13 22.40 11.67 0.003 0.33 -0.09 0.24
Note. p-values reflect Linear Step-Up adjustments.
Table 5.4: Gain Scores on the FCAT Reading (CRT) Developmental Scale Scores and SAT–10 (NRT) for Latino American Students Receiving 40+ Lessons of
the RP Intervention Versus Students Receiving No RP Lessons
6 66 14.15 24.81 21 7.24 30.40 6 29.17 27.62 0.35 0.500 -0.28 0.72 0.61
7 130 2.93 21.40 33 3.61 21.67 17 1.82 22.28 0.01 0.500 0.03 -0.08 -0.05
8 119 7.01 23.67 32 6.78 23.02 16 7.69 25.38 0.01 0.500 -0.01 0.04 0.03
9 298 16.92 28.46 30 0.80 33.94 32 16.69 29.11 1.20 0.500 -0.57 0.47 -0.01
10 288 -4.87 28.03 27 -4.11 22.11 20 11.80 23.36 5.41 0.032 0.03 0.72 0.59
Note. p-values reflect Linear Step-Up adjustments.
Table 5.6: Gain Scores on the FCAT Reading (CRT) Developmental Scale Scores and SAT–10 (NRT) for Learning Disabled Students Receiving 40+ Lessons of
the RP Intervention Versus Students Receiving No RP Lessons
6 92 15.20 31.13 6 -6.33 20.87 16 10.06 28.65 0.23 0.500 -0.69 0.79 -0.16
7 270 5.45 23.36 115 1.80 21.94 11 17.18 16.35 0.57 0.500 -0.16 0.70 0.50
8 384 7.49 25.39 58 4.31 32.33 12 7.75 21.35 6.35 0.083 -0.13 0.11 0.01
9 414 20.30 30.30 20 9.85 22.93 5 17.20 28.01 0.02 0.500 -0.35 0.32 -0.10
10 445 -9.16 25.75 22 13.55 28.10 19 2.63 26.49 0.67 0.500 0.88 -0.39 0.46
Note. p-values reflect Linear Step-Up adjustments.
achievement gains than non-RP students at all grade levels on at least one
reading achievement measure (and at grades 5, 6, 7, and 8, significantly
greater achievement gains were found on both tests). Effect sizes by grade
level ranged from .03 to .34 (small to moderate in magnitude). None
of the gain score comparisons of all students (Table 5.2) demonstrated
significantly greater gain scores in favor of the non-RP students. Moreover,
the trends in gain scores are worth noting. Students receiving the
intermediate number of RP lessons (1 to 39) tended to have gains that were
greater than those of students receiving no lessons but less than those of
students receiving 40 or more lessons. This suggests that the effects of the
RP lessons are cumulative—more instruction using RP led to greater gains
in reading achievement.
Tables 5.3 through 5.7 report FCAT Reading (CRT) Developmental
Scale gain scores and SAT–10 (NRT) gain scores by grade level for students
who were African American (Table 5.3), Latino American (Table 5.4),
Caucasian (Table 5.5), LD (Table 5.6), and ELLs (Table 5.7). Aside from
the ELL students, the data indicate that students receiving RP instruction
made generally greater gains on the FCAT CRT and the NRT than
students not receiving RP.
Table 5.8 presents statewide and district mean developmental scale
scores for the CRT for grades 4 through 10 statewide and for the individual
school district from which the RP schools were drawn. Mean gain scores
for the statewide and district-level CRT are also presented. The mean gain
scores for students engaged in the RP intervention for 40 or more lessons
(Table 5.2) were greater than the statewide and district-level gains (Table
5.8) at every grade level for which a comparison was possible. Moreover,
mean gain scores for students engaged in the RP intervention for 1 to 39
lessons (Table 5.2) also were greater than the statewide and district-level
gains (Table 5.8) at every grade level except for grade 5.
Table 5.8: Dade County Reading CRT and Statewide Mean Developmental Scale Scores (DSS)
Grade Mean 2007 DSS Mean 2006 DSS Mean DSS Gain
4 1554 (1573) 1393 (1420) 161 (154)
5 1618 (1659) 1537 (1557) 81 (101)
6 1644 (1694) 1583 (1624) 61 (70)
7 1773 (1801) 1694 (1722) 79 (78)
8 1814 (1862) 1730 (1786) 84 (76)
9 1851 (1912) 1789 (1844) 62 (68)
10 1881 (1947) 1864 (1931) 17 (16)
Note. Values in parentheses are statewide mean reading developmental scale scores.
D. Ray Reutzel
Utah State University
Yaacov Petscher
Florida Center for Reading Research &
Florida State University
Alexandra N. Spichtig
Reading Plus
Reading research has produced an emerging consensus on several
essential elements of beginning reading instruction, and fluency is
widely agreed to be one of the key components, as reading fluency
creates the bridge between word recognition and reading comprehension
processes (National Institute of Child Health and Human Development
[NICHD], 2000; Rasinski, 1989; Reutzel & Hollingsworth, 1993; Samuels &
Farstrup, 2006). The initial stages of reading fluency occur when students
are able to automatically recognize words. As fluency develops, automatic
word recognition eventually leads to the achievement of the ultimate goal
of reading: comprehension (Torgesen & Hudson, 2006; Samuels, 2007).
Topping (2006) described this later stage of fluency development, when
word recognition bridges comprehension processes, as "the extraction of
maximum meaning at maximum speed in a relatively continuous flow,
leaving spare, simultaneous processing capacity for other higher order
processes" (p. 107).
In its report, the National Reading Panel (NRP; NICHD,
2000) reviewed 77 studies of guided repeated oral reading (GROR)
1 This chapter was previously published in The Journal of Educational Research (v105, n6, p404–415,
2012). The definitive publisher-authenticated version is available online at: http://www.tandfonline.
com/doi/abs/10.1080/00220671.2011.629693
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
Method
Research Design
This study used a matched quasi-experimental research design. The
study's quasi-experimental control and treatment groups were constructed
through a propensity score sampling and matching process. A propensity
score, as defined by Rosenbaum and Rubin (1983), is the conditional
probability of assignment to a particular treatment given a vector of observed
covariates. Put simply, a propensity score is the probability of being in the
treatment group, derived from a logistic regression that accounts for
important matching variables. The primary objective for researchers using
propensity scores is to select a series of variables that would be considered
important for matching students. In traditional reading research, these
variables might include race/ethnicity, socioeconomic status, English learner
status, primary exceptionality status, gender, and some type of baseline
measure of achievement (e.g., a pretest). The main effects and interactions
among these and other variables are then included in a logistic regression
to determine the probability of being in the treatment group when controlling
for these important matching covariates (Shadish, Cook, & Campbell, 2002).
The probabilities resulting from the logistic regression may then be used to
match students who actually received the intervention with those who did
not, creating matched treatment and control groups. In this way, students are
more probabilistically matched at the pretest, allowing for stronger causal
inferences regarding differences on the posttest or on gain scores than a
simple comparison of all available students in a sample.
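The two steps described above, estimating a propensity score by logistic regression and then pairing treated units with their nearest-score controls, can be sketched as follows. This is a minimal illustration, not the authors' actual procedure (which used archival district data and commercial software); the function names and the gradient-ascent fitting routine are assumptions for the sketch, and applied work would normally use an established statistics package.

```python
import numpy as np

def propensity_scores(X, treated):
    """Estimate each unit's probability of treatment with a logistic
    regression fitted by simple gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(5000):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += 0.1 * Xb.T @ (treated - p) / len(Xb)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def greedy_match(scores, treated):
    """Pair each treated unit with the not-yet-used control whose
    propensity score is nearest; returns (treated_idx, control_idx) pairs."""
    controls = [i for i, t in enumerate(treated) if not t]
    pairs = []
    for t_idx in (i for i, t in enumerate(treated) if t):
        if not controls:
            break
        c_idx = min(controls, key=lambda j: abs(scores[j] - scores[t_idx]))
        controls.remove(c_idx)
        pairs.append((t_idx, c_idx))
    return pairs
```

Matching without replacement, as here, mirrors the one-to-one matched groups (40 treatment, 40 control) used in this study; matching with replacement or caliper-based matching are common alternatives.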
There are several limitations that should be noted in regard to using
propensity scores to construct an experimental sample such as the one
used in this study: (a) propensity scores tend to be most practically used
with larger samples; (b) missing data can be problematic for propensity
analyses, as the techniques are still relatively new; and (c) propensity scores
assume that no further confounds exist that may predict the propensity.
Nevertheless, despite these acknowledged limitations, propensity scores are
now viewed as one of the strongest quasi-experimental methods for assessing
relationships between treatments and outcomes (Shadish et al., 2002).
Instrumentation
At the time of this study, the FCAT was a major component of
Florida's testing effort to assess student achievement in reading, writing,
math, and science as represented in Florida's Sunshine State Standards
(SSS; Florida Department of Education [FDE], 2007). The SSS reading
portion of the FCAT is a group-administered, criterion-referenced test
consisting of 6 to 8 narrative or informational reading passages, wherein
students respond to between 6 and 11 multiple-choice items per passage.
Embedded within these 6 to 11 multiple-choice questions are four content
clusters: (a) reference and research, (b) words and phrases in context, (c)
the main idea, and (d) comparison/cause and effect.
Based on their scores, students are placed into one of five
performance levels on a scaled score ranging from 100 to 500. Levels 1 and
2 reflect below grade-level performance in reading, with Level 1 being the
lowest indication of reading performance. Levels 3 and above represent
proficiency in reading comprehension at or above grade-level standards.
Students who score below Level 1 proficiency on the FCAT in
third grade must be retained for another year, according to Florida law.
If they can demonstrate the required reading level or proficiency through
the approved alternate test (the SAT-10) or through a student portfolio,
they can be granted an exemption and be promoted to fourth grade.
Thus, the students selected for this study represented the highest-risk
segment of the overall third-grade population. The internal-consistency
reliability for the FCAT-SSS has been shown to be 0.90 (Cronbach's alpha);
moreover, test score content and concurrent validity have been established
through a series of expert panel reviews and data analyses (Florida State
Department of Education, 2007). The construct validity of the FCAT-SSS
as a comprehensive assessment of reading outcomes recently received
strong support in an empirical analysis of its relationships with a variety
of other reading comprehension, language, and basic reading measures
(Schatschneider, Fletcher, Francis, Carlson, & Foorman, 2004).
The SAT-10 is approved for use by the U.S. Department of
Education and is constructed to determine whether students in kindergarten
through grade 12 are meeting national or state standards in reading,
mathematics, and language. The reading section of the SAT-10 has an
alpha reliability coefficient of 0.87, the math section 0.80 to 0.87, and the
language section 0.78 to 0.84. Alternate-form reliability coefficients
ranged in the low 0.90s for the total reading section. The SAT-10, by design,
evidences content and criterion-related validity, since its development is
tied very closely to assessing progress toward meeting state and national
standards in reading, mathematics, and language (Berk, 1998; Carney &
Morse, 2005).
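The internal-consistency (Cronbach's alpha) reliabilities cited above compare the variance of individual items with the variance of the total score. A minimal sketch of the computation (illustrative only; the published coefficients come from the test developers' own norming samples):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```

When items largely measure the same underlying skill, total-score variance dwarfs the summed item variances and alpha approaches 1; unrelated items drive alpha toward 0.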
Data Analysis
In order to assess the added value of this silent reading fluency
intervention with third-grade struggling readers, a propensity score
analysis was used in this study to match the 40 students from the sample
of 158 who did not receive this supplementary silent reading fluency
intervention to a group of 40 students who were similar with regard to
demographics, prior FCAT achievement, and performance on the SAT-
10. The 40 struggling students completed an average of 71 lesson units
during the study. The logistic regression used in this study to construct
the propensity scores predicted group membership with race/ethnicity,
limited English proficiency status, primary exceptionality status, and
reading performance on the previous year's FCAT-SSS and the SAT-10.
Prior technical reports have indicated that the correlation between FCAT
Results
Summaries of the demographics and descriptive statistics for the
FCAT and SAT-10 scores for the treatment and matched control groups
are reported in Tables 6.1 and 6.2. As can be seen from the reported indices,
the two groups were reasonably matched from the propensity analysis.
The mean pretest score for the matched control group on the FCAT-
SSS was 814.90 (SD=217.92), compared to the treatment group's mean of
845.50 (SD=117.69), corresponding to a standardized coefficient of g=0.17.
Similarly, the mean pretest score on the SAT-10 for the treatment group
was 575.75 (SD=16.04), compared to the control group's mean of 570.73
(SD=18.90), corresponding to a standardized coefficient of g=0.28.
Because students who participated in the program were from different
classes and schools, and the analysis was based on available archival
data, the ratio of students to classes was small, precluding a mixed-effects
modeling of the data to account for clustering at the classroom and school
levels.
Table 6.1: Demographic Comparison of Treatment and Matched Control Students
Demographics Treatment Control
(n=40) (n=40)
% Black 65 58
% White 0 8
% Latino 35 35
% ELL 15 13
% ESE 5 25
ANOVA was used to assess the extent to which the treatment and
matched control students were statistically differentiated on the posttest
scores for both the FCAT-SSS and the SAT-10. In order to control for the
false discovery rate (FDR), a linear step-up procedure was used for any
statistically significant finding.
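The linear step-up procedure referred to here and in the table notes of Chapter 5 is the Benjamini–Hochberg method for controlling the false discovery rate: the m p-values are ordered from smallest to largest, the largest rank k satisfying p(k) <= (k/m)q is found, and the k hypotheses with the smallest p-values are rejected. A minimal sketch (the function name is an assumption for illustration):

```python
def linear_step_up(p_values, q=0.05):
    """Benjamini-Hochberg linear step-up: order the m p-values, find the
    largest rank k with p_(k) <= (k/m)*q, and reject hypotheses 1..k.
    Returns a list of booleans aligned with the input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]
```

Unlike a Bonferroni correction, which controls the chance of any false positive, this procedure controls the expected proportion of false positives among the rejected tests, making it less conservative when many comparisons (here, many grade-by-measure contrasts) are run at once.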
FCAT-SSS results indicated that a significant effect existed for
treatment (F[1,79]=24.52, p < 0.001), suggesting that treatment students'
scores on the posttest were significantly higher than those of the matched
controls. The mean posttest score for the silent reading fluency intervention
students was 1,322.63 (SD=171.24), compared to the matched controls' mean
of 1,012.33 (SD=357.46). A more appropriate way to contextualize these
results is to calculate an effect size, which communicates, in standard
deviation units, how large the differences were between the means of the
two groups, regardless of sample size. A standardized effect size value of
g=1.09 was estimated, indicating that the mean for the students who
received the silent reading fluency intervention was more than one full
standard deviation above the mean for the matched controls. For context,
Cohen (1988) provided guidelines stating that an effect size of 0.80 would
be considered large. In practical terms, 80% of the treatment students who
received the supplementary guided, silent reading fluency intervention
in this study achieved reading proficiency as measured by the FCAT
(achievement level of 3 or higher) and were promoted to the next grade
level, compared to 32% of the matched control students.
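The standardized effect size g used throughout these results is a pooled-SD standardized mean difference with Hedges' small-sample correction. A minimal sketch (assuming this standard formula; the authors do not spell out their exact computation):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled SD, with Hedges'
    small-sample correction factor 1 - 3/(4N - 9)."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))
```

Applied to the posttest values reported above (treatment M=1,322.63, SD=171.24, n=40; control M=1,012.33, SD=357.46, n=40), this yields g of roughly 1.10, in line with the reported g=1.09; small discrepancies can arise from rounding or a slightly different correction formula.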
Conversely, no statistically significant findings were observed for
the SAT-10 differences in the ANOVA (F[1,79]=2.59, p=0.11), despite a
higher posttest SAT-10 score for students receiving the supplementary
guided, silent reading intervention (M=608.53, SD=23.43) compared to
the matched controls (M=597.83, SD=34.95). Two important considerations
in these seemingly conflicting findings are the issues of power
and baseline equivalence. Given the present total sample size in the
design (n=80), a potential reason for the lack of statistical significance in
Discussion
Providing the highest quality of reading instruction for all students
is a central focus of current educational reforms and practices. Such an
emphasis is particularly critical in the era of the Common Core State
Standards (National Governors Association Center for Best Practices &
Council of Chief State School Officers, 2010). Much of the past research
on silent reading has focused on comparing the results obtained from
silent independent reading versus oral, guided reading practice, and
such studies have typically found that guided oral reading practice is
more effective for students and is also preferred by teachers. However, we
believe this is largely because guided oral reading provides a
check on whether students are actually reading and how well they do so,
a check that is not possible when silent reading is conducted by students
independently. Therefore, prior to the turn of the millennium, these studies
comparing guided, oral reading versus independent silent reading practice
contributed little to an understanding of how silent reading practice might
become more effective.
Instead of providing yet one more comparison of independent
silent reading versus a largely guided oral approach to reading practice,
this study examined how changing silent reading practice conditions from
silent, independent reading to silent, guided reading affected the reading
Limitations
The results of this study, comparing a matched sample of struggling
third-grade readers who were retained in grade level for poor reading
performance, were limited by the total sample size (n=80). The criteria
used to select struggling readers for this study were poor performance
Implications
This study provided emerging evidence supporting the use of a
guided silent reading intervention known as Reading Plus for improving
the reading comprehension and achievement scores of struggling third-
grade readers on the FCAT. It did not provide similar evidence for the
use of this guided silent reading intervention for improving the reading
comprehension and achievement scores of struggling third-grade readers
on the SAT-10. Future researchers may want to broaden the criteria used
O'Connor, R.E., Bell, K.M., Harty, K.R., Larkin, L.K., Sackor, S.M., & Zigmond, N. (2002). Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94(3), 474–485.

Pikulski, J.J. (2006). Fluency: A developmental and language perspective. In S.J. Samuels & A.E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 70–93). Newark, DE: International Reading Association.

Reutzel, D.R., Jones, C.D., & Newman, T.H. (2010). Scaffolded silent reading: Improving the conditions of silent reading practice in classrooms. In E.H. Hiebert & D.R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 129–150). Newark, DE: International Reading Association.

Rosenbaum, P.R., & Rubin, D.B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.
CHAPTER 7
Elfrieda H. Hiebert
TextProject, Inc. & University of California, Santa Cruz
S. Jay Samuels
University of Minnesota
Timothy V. Rasinski
Kent State University
As has been the case with many aspects of reading instruction,
an emphasis on oral versus silent reading activities has varied in
an emphasis on oral versus silent reading activities has varied in
particular educational eras (Allington, 1984). During the whole
language period of the 1990s, silent reading experiences were emphasized
(Hagerty, 1999). Some oral reading occurred during guided reading and
for obtaining running records, but occasions for monitored, repeated oral
reading were few, even for beginning and struggling readers. However, in
2000, when the National Reading Panel (NRP; NICHD, 2000) concluded
that guided, repeated oral reading but not sustained silent reading (SSR)
facilitated fluency, comprehension, and vocabulary, the pendulum swung
to an almost-exclusive emphasis on oral reading. An emphasis on oral
reading went beyond the primary grades, since the NRP had concluded
that the fluency of all students through the fourth grade, and of struggling
readers through high school, was enhanced with guided, repeated oral
reading. Evidence of the dominant role of oral reading can be seen in the
prominence of the Dynamic Indicators of Basic Early Literacy Skills
(DIBELS)—a test of oral reading tasks—in the implementation of Reading
First (Gamse, Jacob, Horst, Boulay, & Unlu, 2009).
Oral reading serves many critical functions, especially during
the early stages of reading development. However, when the reading diet
is no longer a balanced one, with oral reading dominating the menu, as
1 This chapter was previously published in Literacy Research and Instruction (v51, n2, p110–124,
2012). The definitive publisher-authenticated version is available online at: http://www.tandfonline.
com/doi/abs/10.1080/19388071.2010.531887
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
Conclusion
The need for efficient silent reading habits for success in the digital-
global age is unarguable. There is emerging evidence that these habits can
be enhanced through scaffolding, both on the part of teachers and from
digital supports. These supports look quite different from the SSR that
Hunt (1970) advocated. This structuring can begin when students are in
the early stages of reading (Reutzel et al., 2008). Further, it is highly likely
that the process is an ongoing endeavor, extending through the elementary
grades and into middle and high school as students encounter new genres
and content. At least for the students who depend on schools to become
literate, good silent reading does not just happen as a result of an emphasis
on oral reading fluency training. For many students, good silent reading
habits require that they participate in structured silent reading experiences
that model efficient reading.
Pearson, P.D. (2006). Foreword. In K.S. Goodman (Ed.), The truth about DIBELS: What it is, what it does (pp. v–xix). Portsmouth, NH: Heinemann.

Pinnell, G.S., Pikulski, J.J., Wixson, K.K., Campbell, J.R., Gough, P.B., & Beatty, A.S. (1995). Listening to children read aloud: Data from NAEP's integrated reading performance record (IRPR) at grade 4 (Rep. No. 23-FR-04). Washington, DC: Office of Educational Research and Improvement, U.S. Department of Education.

Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of end-grade-3 Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Speed reading without comprehension, predicting little. East Lansing, MI: Michigan State University, College of Education, Literacy Achievement Research Center.

Radach, R., Vorstius, C., & Reilly, R. (2010, July 10). The science of speed reading: Exploring the impact of speed on visuomotor control and comprehension. Paper presented at the annual meeting of the Society for the Scientific Study of Reading, Berlin, Germany.

Rasinski, T. (2006). Reading fluency instruction: Moving beyond accuracy, automaticity, and prosody. The Reading Teacher, 59(7), 704–706.

Rasinski, T., Samuels, S.J., Hiebert, E., Petscher, Y., & Feller, K. (2011). The relationship between a silent reading fluency instructional protocol on students' reading comprehension and achievement in an urban school setting.

Samuels, S.J. (2007). The DIBELS tests: Is speed of barking at print what we mean by reading fluency? Reading Research Quarterly, 42(4), 563–566.

Samuels, S.J., Hiebert, E.H., & Rasinski, T.V. (2010). Eye movements make reading possible. In E.H. Hiebert & D.R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 24–44). Newark, DE: International Reading Association.

Schatschneider, C., Buck, J., Torgesen, J.K., Wagner, R.K., Hassler, L., Hecht, S., & Powell-Smith, K. (2004). A multivariate study of factors that contribute to individual differences in performance on the Florida Comprehensive Reading Assessment Test (Technical Report #5). Tallahassee, FL: Florida Center for Reading Research.

Seuss, Dr. (1978). I can read with my eyes shut! New York, NY: Random House.

Stanovich, K.E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360–407.

Taylor, B.M., Frye, B.J., & Maruyama, G.M. (1990). Time spent reading and reading growth. American Educational Research Journal, 27(2), 351–362.

Taylor, S.E., Frackenpohl, H., & Pettee, J.L. (1960). Grade level norms for the components of the fundamental reading skills (EDL Research and Information Bulletin #3). New York, NY: Educational Developmental Laboratories/McGraw-Hill.
Taylor, S.E. (1965). Eye movements in
reading: Facts and fallacies. American
Educational Research Journal, 2(4),
187–202.
Torgesen, J.K., Myers, D., Schirm, A.,
Stuart, E., Vartivarian, S., Mansfield,
W., ... Haan, C. (2007). Closing the
achievement gap: Second year findings
from a randomized trial of four reading
interventions for striving readers.
Washington, DC: The Corporation for the
Advancement of Policy Evaluation.
Valencia, S.W., Smith, A.T., Reece, A.M.,
Li, M., Wixson, K.K., & Newman, H.
(2010). Oral reading luency assessment:
Issues of construct, criterion, and
consequential validity. Reading Research
Quarterly, 45(3), 270–291.
Wexler, J., Vaughn, S., Edmonds, M.,
& Reutebuch, C.K. (2008). A synthesis
of fluency interventions for secondary
struggling readers. Reading and Writing,
21(4), 317–347.
Wiley, H.I., & Deno, S.L. (2005). Oral
reading and maze measures as predictors
of success for English learners on a state
standards assessment. Remedial and
Special Education, 26(4), 207–214.
Wise, B.W., Ring, J., & Olson, R.K.
(1999). Training phonological awareness
with and without explicit attention to
articulation. Journal of Experimental
Child Psychology, 72(4), 271–304.
Elfrieda H. Hiebert
TextProject, & University of California, Santa Cruz
D. Ray Reutzel
Utah State University
As the title of this book indicates, our interest lies in addressing
how the current knowledge base about silent reading practices can
provide a foundation for future instruction and research. Based on
what we know in 2015, we might ask the question, what changes in reading
instruction and practice need to be made now to positively influence
students' literacy proficiencies five years from now? We have chosen the
year 2020 not only because it directs us into the future but also because it is
the year that President Obama (Dillon, 2010) targeted as the point when
the majority of high school graduates should have the literacy skills that
successfully prepare them for college and a later career.
This goal is ambitious, but if even modest movement is to be made
toward achieving it, increased attention needs to be directed toward the use
of effective silent reading in classrooms. In the digital-global world of the
21st century, accessing, organizing, creating, sharing, and using knowledge
are critical commodities. The acquisition and use of knowledge require
that students and employees develop the ability to read silently with skill
and stamina in a variety of texts for a variety of purposes, because these
texts are increasingly presented to the reader using a variety of traditional
and digital media. For the necessary shift from oral repeated reading with
feedback to effective silent reading to occur, literacy educators need to be
reflective and strategic going forward. If the researchers who revisit the
topic of silent reading in 2020 are to see movement toward greater literacy
capacity among elementary students and high school graduates, literacy
educators will need to recognize the unique contributions and roles of both
oral and silent reading in developing proficient lifelong readers.
1 This chapter was previously published in Revisiting Silent Reading: New Directions for Teachers and
Researchers. The definitive publisher-authenticated version, published in 2010 and in 2014, is available online
at: http://www.reading.org/general/Publications/Books.aspx & http://textproject.org/library/books/revisiting-
silent-reading/
Teacher Support
Change of any kind takes time and information. Fundamental
changes in silent reading practices in classrooms can be expected to
require substantial amounts of support for the teachers who will be asked
to make them. As the teachers’ questions—which provided the basis
for Hiebert et al.’s (Chapter 3) development of comprehension-based
silent reading rate—illustrate, teachers ask many important questions.
Often, these are questions for which researchers have few solid answers.
Conversations between researchers and teachers are urgently needed on
issues associated with independent silent reading so that the questions that
teachers ask are addressed by future research.
Digital Contexts
In the digital-global world of the 21st century, proficient silent
reading is essential to meeting the challenge of ensuring that more high
school graduates are ready for the increasing demands of college and
career-related literacy tasks. Literacy proficiencies in traditional print
contexts do not necessarily extend seamlessly to those practiced in digital
contexts. Effective silent reading in online contexts requires that students
adopt a problem-solving stance, where an initial task involves searching for
and selecting from available information and a second involves evaluating
whether the accessed information is valid and valuable to read. The texts
in these tasks are almost always informational in nature, whereas much of
past conventional print-based reading instruction has focused heavily on
traditional print versions of narrative texts.
Informational and narrative texts differ in structure, conceptual
density, and physical features such as diagrams, photo inserts, headings
and subheadings, and a table of contents (Duke & Bennett-Armistead,
2003). Readers often skim sections of an informational text, but closely
read and reread those sections that provide the precise content they are
seeking. In contrast, narrative texts are typically written to be read from
beginning to end with a relatively uniform amount of focused attention.
Despite the fact that digital contexts have made the demands
for processing informational texts more critical, there is evidence that
opportunities for content area learning in elementary schools have
decreased rather than increased. In a recent survey, elementary teachers
reported devoting around an hour of time weekly to science instruction
(Dorph et al., 2007). This amount of time is half of what was reported in a
survey conducted in 2000 (Fulp, 2002).
Final Thoughts
If we are to be successful in promoting efficacious silent reading
over the next decade, educators need to be more strategic and thoughtful.
Unexamined assumptions associated with past independent silent reading
practices have led to results that, in the long run, have not supported
students in becoming more proficient independent, silent readers.
Furthermore, privileging oral reading over silent reading in instruction
has not resulted in students transferring oral reading skills to silent
reading.
Oral and silent reading both have critical roles in the development
of proficient reading. Failing to view oral and silent reading as having
complementary rather than competing functions in the development of
proficient literacy could jeopardize the futures of our students. Teachers
and researchers need to work together to solve the conundrums around
how best to support all readers through appropriate uses of both oral and
silent reading at different points in students’ literacy development.
References

Brenner, D., & Hiebert, E.H. (2010). The impact of professional development on students’ opportunity to read. In E.H. Hiebert & D.R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 198–217). Newark, DE: IRA.

Clay, M.M. (1985). The early detection of reading difficulties (3rd ed.). Portsmouth, NH: Heinemann.

Coles, G. (2000). Misreading reading: The bad science that hurts children. Portsmouth, NH: Heinemann.

Deno, S.L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.

Deno, S.L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37(3), 184–192.

Dewitz, P., Jones, J., & Leahy, S. (2009). Comprehension strategy instruction in core reading programs. Reading Research Quarterly, 44(2), 102–126. doi:10.1598/RRQ.44.2.1

Dillon, S. (2010, March 13). Obama calls for major change in education law. The New York Times. Retrieved from www.nytimes.com/2010/03/14/education/14child.html

Dorph, R., Goldstein, D., Lee, S., Lepori, K., Schneider, S., & Venkatesan, S. (2007). The status of science education in the Bay Area. Berkeley, CA: Lawrence Hall of Science, UC-Berkeley.

Hagerty, P.J. (1999). Readers’ workshop: Real reading. New York, NY: Scholastic.

Hunt, L.C. (1970). The effect of self-selection, interest, and motivation upon independent, instructional, and frustration levels. The Reading Teacher, 24(2), 146–151.

Krashen, S. (2001). More smoke and mirrors: A critique of the National Reading Panel report on fluency. Phi Delta Kappan, 83(2), 119–123.

Krashen, S. (2005). A special section on reading research: Is in-school free reading good for children? Why the National Reading Panel Report is (still) wrong. Phi Delta Kappan, 86(6), 444–447.

Lee-Daniels, S.L., & Murray, B.A. (2000). DEAR me: What does it take to get children reading? The Reading Teacher, 54(2), 154–155.

Lewis, M. (2002). Read more-read better: A meta-analysis of the literature on the relationship between exposure to reading and reading achievement (Unpublished doctoral dissertation). University of Minnesota, Minneapolis/St. Paul, MN.

Manning, G.L., & Manning, M. (1984). What models of recreational reading make a difference? Reading World, 23(4), 375–380.

McKeown, M.G., Beck, I.L., & Blake, R.G.K. (2009). Rethinking reading comprehension instruction: A comparison of instruction for strategies and content approaches. Reading Research Quarterly.