How To Create and Use Rubrics For Formative Assessment and Grading
Gene R. Carter, Executive Director; Mary Catherine (MC) Desrosiers, Chief Program Development
Officer; Richard Papale, Publisher; Genny Ostertag, Acquisitions Editor; Julie Houtz, Director, Book
Editing & Production; Deborah Siegel, Editor; Louise Bova, Senior Graphic Designer; Mike Kalyan,
Production Manager; Keith Demmons, Desktop Publishing Specialist; Kyle Steichen, Production
Specialist
Copyright © 2013 ASCD. All rights reserved. It is illegal to reproduce copies of this work in print or
electronic format (including reproductions displayed on a secure intranet or stored in a retrieval sys‑
tem or other electronic storage device from which copies can be made or displayed) without the prior
written permission of the publisher. By purchasing only authorized electronic or print editions and
not participating in or encouraging piracy of copyrighted materials, you support the rights of authors
and publishers. Readers who wish to duplicate material copyrighted by ASCD may do so for a small
fee by contacting the Copyright Clearance Center (CCC), 222 Rosewood Dr., Danvers, MA 01923,
USA (phone: 978-750-8400; fax: 978-646-8600; web: www.copyright.com). For requests to reprint or
to inquire about site licensing options, contact ASCD Permissions at www.ascd.org/permissions, or
[email protected], or 703-575-5749. For a list of vendors authorized to license ASCD e-books to
institutions, see www.ascd.org/epubs. Send translation inquiries to [email protected].
Printed in the United States of America. Cover art © 2013 by ASCD. ASCD publications present a
variety of viewpoints. The views expressed or implied in this book should not be interpreted as official
positions of the Association.
All web links in this book are correct as of the publication date below but may have become inactive or
otherwise modified since that time. If you notice a deactivated or changed link, please e-mail books@
ascd.org with the words “Link Update” in the subject line. In your message, please specify the web
link, the book title, and the page number on which the link appears.
ASCD Member Book, No. FY13-4 (Jan. 2013, PSI+). ASCD Member Books mail to Premium (P),
Select (S), and Institutional Plus (I+) members on this schedule: Jan., PSI+; Feb., P; Apr., PSI+; May,
P; July, PSI+; Aug., P; Sept., PSI+; Nov., PSI+; Dec., P. Select membership was formerly known as
Comprehensive membership.
Brookhart, Susan M.
How to create and use rubrics for formative assessment and grading / Susan M. Brookhart.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-4166-1507-1 (pbk. : alk. paper)
1. Grading and marking (Students) 2. Educational evaluation. I. Title.
LB3051.B7285 2013
371.26--dc23
2012037286
To my daughter Rachel Brookhart,
with love and thanks for all her help and support.
Contents
Preface
Acknowledgments
8 More Examples
Afterword
Appendix B: Illustrated Six-Point 6+1 Trait Writing Rubrics, Grades K–2
References

Preface
The purpose of this book, as the title suggests, is to help you use rubrics in the class‑
room. To do that, two criteria must be met. First, the rubrics themselves must be well
designed. Second, the rubrics should be used for learning as well as for grading.
Many of you are already familiar with rubrics, and you will read this book through
the lens of what you already know. For some, the book will be an affirmation of your
current understanding of rubrics and provide (I hope) some additional suggestions and
examples. But for others, the book may challenge your currently held views and prac‑
tices regarding rubrics and call for some change.
So I wrote this book with some apprehension. It’s always a challenge to “come in in
the middle” of something. Teachers do that all the time, however. I ask all of you to keep
an open mind and to constantly ask yourself, “What do I think about this?” To that end,
I have included self-reflection questions along the way. I encourage you to think about
them, perhaps keeping a journal of these reflections so you can review and consolidate
your own learning at the end.
In some ways, this book is two books in one, and for that reason it is divided into
Part I and Part II. Part I is about rubrics themselves: what they are, how to write them,
and some examples of different kinds of rubrics. Part II is about how to use rubrics in
your teaching.
The big ideas in Part I concern the two must-have aspects of rubrics. First, rubrics
must have clear and appropriate criteria about the learning students will be demonstrat‑
ing (not about the task). Second, rubrics must have clear descriptions of performance
over a continuum of quality. If the rubrics are analytic, each criterion will have separate
descriptions of performance. If the rubrics are holistic, the descriptions of performance
for each level will consider all the criteria simultaneously.
The big idea in Part II is that rubrics should assist with learning as well as assess
it. The strategies in Part II are grouped according to purpose: sharing learning targets
with students, formative assessment in terms of feedback and student self-evaluation,
and grading. Actually, sharing learning targets with students is the foundational forma‑
tive assessment strategy. Without clear learning targets, from the students’ point of view
there is nothing to assess.
Acknowledgments
I am grateful for the support, help, and assistance of many people. Thanks to the amaz‑
ing Bev Long and the educators in Armstrong School District, to the incredible Connie
Moss and the Center for Advancing the Study of Teaching and Learning in the School of
Education at Duquesne University, to wonderful colleagues Judy Arter and Jan Chap‑
puis, and to all the dedicated educators over the years with whom I’ve been fortunate
to have conversations about rubrics and about student learning. I have learned from
you all. Thanks to the talented editorial and production staff at ASCD, especially Genny
Ostertag and Deborah Siegel. Thanks to my family, especially my husband Frank for his
love and support, to my daughter Rachel for help especially with the Rubric for Laugh‑
ing, and to my daughter Carol for hanging in there. This work has been inspired by all of
you. Of course, any errors or omissions are mine alone.
Part I
All Kinds of Rubrics
1
What Are Rubrics and
Why Are They Important?
The word rubric comes from the Latin word for red. The online Merriam-Webster
dictionary lists the first meaning of rubric as “an authoritative rule” and the fourth
meaning as “a guide listing specific criteria for grading or scoring academic papers,
projects, or tests.” How did the name for a color come to mean a rule or guide? At least
as far back as the Middle Ages, the rules for the conduct of liturgical services—as
opposed to the actual spoken words of the liturgy—were often printed in red, so the
rules were “the red things” on the page.
In this book, I will show that rubrics for classroom use are both more and less than the dictionary definition suggests. They are more because rubrics are good for much more than just grading or scoring. They are less because not just any set of rules or guides for student work are rubrics. This first chapter lays out some basic concepts about rubrics. Chapter 2 illustrates common misconceptions about rubrics, and Chapter 3 describes how to write or select effective rubrics.

Self-reflection
What is your current view of rubrics? Write down what you know about them and what experiences you have had using them. Save this reflection to compare with a similar reflection after you have read this book.
What is a rubric?
A rubric is a coherent set of criteria for students’ work that includes descriptions of levels
of performance quality on the criteria. Sounds simple enough, right? Unfortunately, this
definition of rubric is rarely demonstrated in practice. The Internet, for example, offers
many rubrics that do not, in fact, describe performance. I think I know why that might
be and will explain that in Chapter 2, but for now let’s start with the positive. It should be
clear from the definition that rubrics have two major aspects: coherent sets of criteria and
descriptions of levels of performance for these criteria.
The genius of rubrics is that they are descriptive and not evaluative. Of course,
rubrics can be used to evaluate, but the operating principle is you match the perfor‑
mance to the description rather than “judge” it. Thus rubrics are as good or bad as the
criteria selected and the descriptions of the levels of performance under each. Effective
rubrics have appropriate criteria and well-written descriptions of performance.
Many kinds of student work and performances are suitable for using rubrics, when they are appropriate indicators of your goals for student
learning.
About the only kinds of schoolwork that do not function well with rubrics are ques‑
tions with right or wrong answers. Test items or oral questions in class that have one
clear correct answer are best assessed as right or wrong. However, even test items that
have degrees of quality of performance, where you want to observe how appropriately,
how completely, or how well a question was answered, can be assessed with rubrics.
Rubrics give structure to observations. Matching your observations of a student’s
work to the descriptions in the rubric averts the rush to judgment that can occur in
classroom evaluation situations. Instead of judging the performance, the rubric describes
the performance. The resulting judgment of quality based on a rubric therefore also
contains within it a description of performance that can be used for feedback and
teaching. This is different from a judgment of quality from a score or a grade arrived at
without a rubric. Judgments without descriptions stop the action in a classroom.
Figure 1.2 Advantages and Disadvantages of Different Types of Rubrics

Analytic
• Definition: Each criterion (dimension, trait) is evaluated separately.
• Advantages: Gives diagnostic information to teacher. Gives formative feedback to students. Easier to link to instruction than holistic rubrics. Good for formative assessment; adaptable for summative assessment; if you need an overall score for grading, you can combine the scores.
• Disadvantages: Takes more time to score than holistic rubrics. Takes more time to achieve inter-rater reliability than with holistic rubrics.

Holistic
• Definition: All criteria (dimensions, traits) are evaluated simultaneously.
• Advantages: Scoring is faster than with analytic rubrics. Requires less time to achieve inter-rater reliability.
• Disadvantages: Single overall score does not communicate information about what to do to improve.

General
• Definition: Description of work gives characteristics that apply to a whole family of tasks (e.g., writing, problem solving).
• Advantages: Can share with students, explicitly linking assessment and instruction. Reuse same rubrics with several tasks or assignments. Supports learning by helping students see "good work" as bigger than one task. Supports student self-evaluation. Students can help construct general rubrics.
• Disadvantages: Lower reliability at first than with task-specific rubrics. Requires practice to apply well.

Task-Specific
• Definition: Description of work refers to the specific content of a particular task (e.g., gives an answer, specifies a conclusion).
• Advantages: Teachers sometimes say using these makes scoring "easier." Requires less time to achieve inter-rater reliability.
• Disadvantages: Cannot share with students (would give away answers). Need to write new rubrics for each task. For open-ended tasks, good answers not listed in rubrics may be evaluated poorly.

Source: From Assessment and Grading in Classrooms (p. 201), by Susan M. Brookhart and Anthony J. Nitko, 2008, Upper Saddle River, NJ: Pearson Education. Copyright 2008 by Pearson Education. Reprinted with permission.
Why use general rubrics? General rubrics:
• Can be shared with students at the beginning of an assignment, to help them plan
and monitor their own work.
• Can be used with many different tasks, focusing the students on the knowledge and
skills they are developing over time.
• Describe student performance in terms that allow for many different paths to
success.
• Focus the teacher on developing students’ learning of skills instead of task
completion.
• Do not need to be rewritten for every assignment.
General rubrics show students how to approach the assignment (for example, in solving the problem posed,
I should make sure to explicitly focus on why I made the choices I did and be able to
explain that). Therefore, over time general rubrics help students build up a concept of
what it means to perform a skill well (for example, effective problem solving requires
clear reasoning that I can explain and support).
Can be used with many different tasks. Because general rubrics focus students on the
knowledge and skills they are learning rather than the particular task they are complet‑
ing, they offer the best method I know for preventing the problem of “empty rubrics”
that will be described in Chapter 2. Good general rubrics will, by definition, not be task
directions in disguise, or counts of surface features, or evaluative rating scales.
Because general rubrics focus students on the knowledge and skills they are sup‑
posed to be acquiring, they can and should be used with any task that belongs to the
whole domain of learning for those learning outcomes. Of course, you never have an
opportunity to give students all of the potential tasks in a domain—you can’t ask them to
write every possible essay about characterization, solve every possible problem involv‑
ing slope, design experiments involving every possible chemical solvent, or describe
every political takeover that was the result of a power vacuum.
These sets of tasks all indicate important knowledge and skills, however, and they
develop over time and with practice. Essay writing, problem solving, experimental design,
and the analysis of political systems are each important skills in their respective disci‑
plines. If the rubrics are the same each time a student does the same kind of work, the stu‑
dent will learn general qualities of good essay writing, problem solving, and so on. If the
rubrics are different each time the student does the same kind of work, the student will
not have an opportunity to see past the specific essay or problem. The general approach
encourages students to think about building up general knowledge and skills rather than
thinking about school learning in terms of getting individual assignments done.
Why use task-specific rubrics? Task-specific rubrics function as “scoring direc‑
tions” for the person who is grading the work. Because they detail the elements to look
for in a student’s answer to a particular task, scoring students’ responses with task-
specific rubrics is lower-inference work than scoring students’ responses with general
rubrics. For this reason, it is faster to train raters to reach acceptable levels of scoring
reliability using task-specific rubrics for large-scale assessment. Similarly, it is easier for
teachers to apply task-specific rubrics consistently with a minimum of practice. General
rubrics take longer to learn to apply well.
However, the reliability advantage is temporary (one can learn to apply general
rubrics well), and it comes with a big downside. Obviously, task-specific rubrics are use‑
ful only for scoring. Because they would give away answers, you can't share them with students ahead of time, and therefore task-specific rubrics are not useful for formative assessment. That in itself is one good reason not to use them except for special purposes. Task-
specific rubrics do not take advantage of the most powerful aspects of rubrics—their
usefulness in helping students to conceptualize their learning targets and to monitor
their own progress.
In an earlier study, students who used rubrics wrote better on every criterion except sentences and conventions, presumably areas of much previous drill for all young writ‑
ers. Andrade, Du, and Mycek (2010) replicated these findings with students in 5th, 6th,
and 7th grade, except that the rubric group’s writing was evaluated as having higher
quality on all six criteria.
Ross, Hogaboam-Gray, and Rolheiser (2002) taught 5th and 6th grade students self-
evaluation skills in mathematics, also using a method based on criteria. Their self-evalua‑
tion instruction involved four strategies: involving students in defining criteria, teaching
them how to apply the criteria, giving them feedback on these self-evaluations against
criteria, and helping them develop action plans based on the self-evaluations. Controlling
for previous problem-solving ability, students who self-assessed using criteria outscored
a comparison group at solving mathematics problems.
Ross and Starling (2008) used the same four-component self-assessment training,
based on criteria, with secondary students in a 9th grade geography class. Students
were learning to solve geography problems using geographic information systems (GIS)
software, so the learning goals were about both accurate use of the software and apply‑
ing it to real-world geography problems, including being able to explain their problem-
solving strategies. Controlling for pretest computer self-efficacy (known to be important
in technology learning), the treatment group outscored a comparison group on three
different measures: production of a map using the software, a report explaining their
problem-solving strategies, and an exam measuring knowledge of the mapping pro‑
gram. The largest difference was for the problem-solving explanations.
Self-reflection
What evidence would it take to convince you that using rubrics with learning-based criteria in your classroom would enhance learning of content outcomes and improve students' learning skills as well? How can you get that evidence in your own classroom?

Hafner and Hafner (2003) investigated college biology students' use of rubrics for peer assessment and teacher assessment of a collaborative oral presentation. There were five criteria: organization and research, persuasiveness and logic of argument, collaboration, delivery and grammar, and creativity and originality. Originally the rubric was developed and then modified with discussion and involvement of students. For the study, the same rubric was used for a required course assignment three years in a row. The instructors were interested in finding out whether the information students gained from peer evaluation was accurate, whether it matched teacher input, and whether this accuracy was consistent across different years and classes. The short answer was yes. Students were able to accurately
give feedback to their peers, their information matched that of their instructor, and this
was the case for each class.
Summing up
This chapter has defined rubrics in terms of their two main components: criteria
and descriptions of levels of performance. The main point about criteria is that they
should be about learning outcomes, not aspects of the task itself. The main point about
descriptions of levels of performance is that they should be descriptions, not evaluative
statements. The “evaluation” aspect of assessment is accomplished by matching student
work with the description, not by making immediate judgments. Finally, the chapter has
presented some evidence that using this kind of rubric helps teachers teach and stu‑
dents learn, and it has invited you to pursue your own evidence, in your specific class‑
room and school context.
2
Common Misconceptions About Rubrics
This chapter starts with misconceptions about rubrics and then shows how the prin‑
ciples for writing or selecting effective rubrics overcome these problems. A couple of
good counterexamples will, I think, show clearly how the principles for writing effective
rubrics work.
I think it is likely that many misconceptions about rubrics stem from teachers’ need
to grab a tool—rubrics—and integrate the tool with what they already know and do
about assessment, which is related mostly to grading. They may already have miscon‑
ceptions about grading (Brookhart, 2011; O’Connor, 2011). Many well-meaning teachers
use rubrics in ways that undermine students’ learning. Many rubrics available on the
Internet also exhibit these problems.
Goldberg and Roswell (1999–2000) give a good example of what they mean by
scoring products rather than outcomes. A social studies teacher intended to teach, and
assess, students’ understanding of two Maryland learning outcomes: “to examine or
describe the processes people use for making and changing rules within the family,
school, and community, . . . to propose rules that promote order and fairness in various
situations” (p. 277). The teacher created a multipart performance task. First, students
read the novel Jumanji, in which a board game goes out of control, and answered both
literal and inferential questions. Then, in groups, the students brainstormed a list of
other board games they were familiar with, invented a new board game, and participated
in a tournament. Finally, they identified problems with the various games and revised
them, and then wrote an advertisement to market their game. However, as Goldberg
and Roswell point out, none of the questions or activities was about how and why people
make rules for games.
Without a close analysis, this looks like a wonderful activity. It is cross-disciplinary
(encompassing English language arts and social studies), engaging, and fun. It could,
with some modification, actually teach and assess the intended social studies concepts.
As it stands, however, it teaches and assesses reading comprehension (reading and
answering questions about the novel, although not about the concept of people making
rules), cooperative group skills (devising the games and the tournament), some
problem-solving skills (diagnosing and revising the games), and communication skills
(designing the advertisement).
These sorts of near-miss activities are often accompanied by miss-the-mark rubrics
that assess the task, not the outcome. Using task-related criteria (Comprehension,
Board Game, Tournament Participation, and Advertisement) would have resulted in
a grade, but not one that gave any information about the social studies outcomes the
grade was supposed to indicate. Had the task been modified so that the questions
addressed the social studies concepts and the board game activity included a reflection
or brainstorming session about the process of making rules, outcome-related criteria
such as the following could have been used: Clear Explanation of the Rule-Making Pro‑
cess, Support for Explanation from Both the Novel and the Activity, and Demonstration
of Order and Fairness of Rules in Revised Games.
The problem of focusing on the task or instructional activity and not on learning
goals is not limited to performance assessment and selection of rubric criteria. It is
common in teacher planning and assessment in general (Chappuis, Stiggins, Chappuis,
& Arter, 2012). This problem of confusing a task with a learning goal is highlighted in
the selection of rubric criteria, however, because of the huge temptation to align the
criteria to the task instead of the learning goal and because of the existence of so many
near-miss—engaging but “empty” (Goldberg & Roswell, 1999–2000, p. 276)—classroom
performance tasks. In fact, many performance tasks and their associated rubrics are a
lot more empty than the board game example. I have chosen this “near-miss” example to
make the point about rubrics indicating learning, not task completion, precisely because
it looks so good. It is not a straw man to knock down.
Not focusing beyond tasks to intended learning outcomes is an error on two levels.
First, students really will think that what you ask them to do exemplifies what you want
them to learn. Therefore, the task should be a “performance of understanding” (Moss
& Brookhart, 2012) and not a near-miss. Near-miss tasks cheat students out of learning
opportunities and out of opportunities to conceptualize what it is that they are supposed
to be learning. Second, task-based, as opposed to learning-based, criteria do not yield
the kind of information you and your students need to support future learning. Instead,
they yield information about what was done, and they stop the action—the task, after all,
is completed. The resulting information is more about work habits, following directions,
and being a “good student” than it is about learning. The opportunity to foster and then
gauge learning is missed.
Figure 2.1 My State Poster

Facts
• 4: The poster includes at least 6 facts about the state and is interesting to read.
• 3: The poster includes 4–5 facts about the state and is interesting to read.
• 2: The poster includes at least 2–3 facts about the state.
• 1: Several facts are missing.

Grammar
• 4: There are no mistakes in grammar, punctuation, or spelling.
• 3: There are 1–2 mistakes in grammar, punctuation, or spelling.
• 2: There are 3–4 mistakes in grammar, punctuation, or spelling.
• 1: There are more than 4 mistakes in grammar, punctuation, or spelling.
This is a good example of an “empty” task that does not give students
opportunities to demonstrate the intended learning outcomes.
The best way to assess recall of facts is with a simple test or quiz. Making a poster
might be an instructional activity to help students get ready for the test. Or perhaps
there are more important uses of instructional and assessment time for a unit on the
states than memorizing sets of facts about them. That depends on the district curricu‑
lum and state standards. At any rate, I am going to use the rubric for this common task
to illustrate what not to do. I have met many teachers who really do think rubrics like the
one in Figure 2.1 are good for students. Not so!
With these “rubrics,” the assignment really doesn’t need any more directions except
perhaps “Work with a partner, and pick a state.” These rubrics are really more like a
checklist for students to use, listing desired attributes of the task, not the learning it is
designed to represent. The posters should have six facts, each illustrated with a graphic,
and they should be neat and use correct grammar. There is nothing wrong with check‑
ing for this, and the teacher could create a tool if she wished. The resulting checklist
could be used for self-assessment of the completeness of the poster activity:
My state poster
_______ Has six facts.
_______ Has a picture related to each fact.
_______ Is neat.
_______ Uses correct grammar.
Whether the students could recall the facts they were supposed to know would be
assessed separately, with a quiz.
The My State Poster rubric illustrates another common misconception about the
descriptions of performance along the continuum of quality for each criterion. Rarely
is a count the best way to distinguish levels of quality of criteria, and if it is, the criteria
are likely related to work habits (for example, counting how often a student completes
homework). Chapter 7 discusses how to build rating scales with frequency levels as
indicators of work habits and other learning skills.
Occasionally an academic learning goal is best measured with counts (for example,
counting the number of errors in a keyboarding passage). But most of the time, the best
way to describe levels of quality is with substantive descriptions. The poster rubric has
a glimmer of that in the Level 4 description for graphics: “All graphics are related to the
topic and make it easier to understand.” The quality of an illustration making something
easier to understand is a substantive one. But this aspect of the graphics is not carried
through in parallel form for the other levels (for example, “Graphics are included but
do not add to understanding,” “Graphics are included but are confusing,” and so on).
Instead, the descriptions turn into counts. Counts are used for the criteria of facts and
grammar as well. The only criterion with substantive descriptions of performance at
each level is neatness.
I have also seen versions of the poster assignment that have “criteria” for each of the
intended facts. For example, a class was assigned to make posters about a chosen Native
American group, and the criteria on the rubric were Name of the Group, Type of Dwell‑
ing, Location, Dress, Food, and Neatness/Mechanics/Creativity.
Once again, let me be clear that I have nothing against posters and nothing against
facts. What is at issue here is the use of task-based (rather than learning-based) rubrics
that count or enumerate aspects of the directions students are expected to follow. The
resulting “grade” is an evaluation of compliance, not of learning. Students could “score”
top points on these rubrics and, in fact, understand nothing except how to make a neat
poster. Students can also “score” top points on these rubrics and understand a lot. You
don’t know, and your rubrics can’t tell you. That’s a problem.
In summary, rubrics with criteria that are about the task—with descriptions of per‑
formance that amount to checklists for directions—assess compliance and not learning.
Rubrics with counts instead of quality descriptions assess the existence of something
and not its quality. Most of the time this also means the intended learning outcome is not
assessed.
Summing up
This chapter took a brief look at some common misconceptions about rubrics to
sharpen your “radar” so that you can avoid these pitfalls in rubrics you write yourself
or with your students. In the next chapter you will learn how to write or select effective
rubrics for use in your classroom.
3
Writing or Selecting Effective Rubrics
One purpose of this chapter is to help you write—alone, with colleagues, or with your
students—rubrics that will support learning in your classroom. Another purpose is to
help you become a savvy consumer of the rubric resources that abound. If you know
how to write effective rubrics, you can sometimes save time by finding and using exist‑
ing ones. You may find useful rubrics that you can use as is, or fairly good ones that you
can revise and adopt for your purposes. And of course, if you know how effective rubrics
are written, you can dismiss the numerous ineffective ones you will find. Whether you
are writing your own rubrics or selecting rubrics written by others to adapt for your own
use, focus on their two main defining aspects: the criteria and the descriptions of levels
of performance.
Select as criteria the most appropriate and important aspects of the work given what
the task is supposed to assess. These should not, generally, be characteristics of the task
itself (for example, Cover, Report on Famous Person, Visuals, References), but rather
characteristics of the learning outcome the task is supposed to indicate (for example,
Selection of Subject, Analysis of Famous Person’s Contribution to History, Support with
Appropriate Historical Facts and Reasoning). Such criteria support learning because
they describe qualities that you and the students should look for as evidence of students’
learning.
Appropriateness is the most important “criterion for criteria,” if you will; that is, it is
the most important property or characteristic that criteria for effective rubrics should
possess. But it’s not the only one. To be useful and effective for rubrics, the criteria you
choose also need to be definable and observable. They should also be different from
one another, so that they can be appraised separately, and yet as a group define a set of
characteristics that, taken together, describe performance in a complete enough manner
to match the description of learning in the standard or instructional goal. Finally, criteria
should be characteristics that can vary along a quality continuum from high to low, so
you can write meaningful performance-level descriptions. Figure 3.1 summarizes the
characteristics you want in a set of criteria for rubrics for a performance.
There will be additional characteristics “in the background”—Sadler (1989) called
these “latent criteria”—that students have already mastered or that are not the main
focus of an assignment. For example, in a high school science laboratory report, stu‑
dents will use sentencing skills that they learned in early elementary school. “Sentenc‑
ing skills” operate in the background, are important in an overall sense for writing
good laboratory reports, but are not likely to be part of the rubric used to evaluate the
reports. In most cases, appropriate criteria for a high school laboratory report would
have to do with understanding the science content, understanding the inquiry process
and scientific reasoning, and skillfully communicating findings via a conventional labora‑
tory report. Effective rubrics do not list all possible criteria; they list the right criteria for
the assessment’s purpose.
Figure 3.1 Characteristics of criteria for effective rubrics

• Distinct from one another: Each criterion identifies a separate aspect of the learning outcomes the performance is intended to assess.
• Complete: All the criteria together describe the whole of the learning outcomes the performance is intended to assess.
• Able to support descriptions along a continuum of quality: Each criterion can be described over a range of performance levels.

To choose criteria, start with your intended learning outcome, as stated in the standard or instructional goal you are intending to assess. Ask yourself what characteristics of student work would give evidence of that learning.
For most standards and instructional goals, the answers to this question will be charac‑
teristics that could be elements of student work on more than one task. For example,
if students are supposed to be able to “cite textual evidence to support analysis of what
the text says explicitly as well as inferences drawn from the text” (CCSSI ELA Standard
RL.6.1), then they should be able to do that in a variety of different tasks. Students might
read a passage and then answer a question or set of questions in writing. They might
read a passage and participate in a discussion with peers. They might read a passage
and explain what it meant to a fictional younger student. They might read a passage and
make a list of literal and inferential conclusions they could draw from the reading. They
might use this skill in a more complex task, like comparing and contrasting two texts. In
addition, any of these kinds of tasks might be based on different passages.
The result is a huge number of potential tasks, and you want the characteristics of
performance that give evidence applicable to all potential tasks by which students could
demonstrate how well they have learned this skill. In other words, you want criteria that
are appropriate to the learning common to all the tasks.
One common way to write performance-level descriptions is to start with the expected level of achievement (for example, Proficient), describe that, and then adjust the remaining descriptions from there—backing off (for example, for Basic and Below Basic) or building up (for example, for Advanced). Another common way is to start with the top category (for example, A), describe that, and then back off (for example, for B, C, D, F). These methods illustrate two different approaches to assessment. In a standards-based grading context, Advanced is supposed to be described by achievement above and beyond what is expected. In a traditional grading context, often the A is what students are aiming for. Either way, ask yourself what student work looks like at each level of quality for the criterion.
Whether you begin with the Proficient category or the top category for a criterion,
you don’t write four or five completely different descriptions for the different levels of
performance. You describe a continuum of levels of performance quality. These levels
should be distinguishable. You should be able to describe what is different from one
level to the next and to illustrate those descriptions with examples of students’ work.
Figure 3.2 summarizes desired characteristics for descriptions of levels of performance.
Describe student performance in terms that allow for many different paths to
success. Good general rubrics do not overly constrain or stifle students. Chapman and
Inman (2009) tell a story about a 5th grader and use it to argue that rubrics constrain creativity and metacognitive development. I disagree. Rather, bad rubrics constrain creativity and metacognitive development. The rubrics in their story were the “directions” type described in Chapter 2.
The authors described them as a chart in which each cell “includes specific elements
that are either present or absent” (p. 198). In terms of my definition of rubrics, there
were no descriptions of levels of performance quality on the criteria. These were, in fact,
checklists dressed up as rubrics.
Figure 3.2 Characteristics of descriptions of levels of performance

• Cover the whole range of performance: Performance is described from one extreme of the continuum of quality to another for each criterion.
• Distinguish among levels: Performance descriptions are different enough from level to level that work can be categorized unambiguously. It should be possible to match examples of work to performance descriptions at each level.
• Center the target performance (acceptable, mastery, passing) at the appropriate level: The description of performance at the level expected by the standard, curriculum goal, or lesson objective is placed at the intended level on the rubric.
• Feature parallel descriptions from level to level: Performance descriptions at each level of the continuum for a given standard describe different quality levels for the same aspects of the work.

Choose the words in your performance-level descriptions carefully. Performance-level descriptions should, as the name implies, describe student performance at all levels
of a continuum of performance. Evaluative terms (excellent, good, fair, poor, and the like)
are not used. The continuum should represent realistic expectations for the content and
grade level. Within that limit, descriptions should include all possible levels, including,
for example, a bottom level that is completely off target, even if no student is expected
to produce work at that level. The descriptions should be appropriate for the level they
are describing. For example, the description of performance at the Proficient level in
standards-based rubrics should match the level of intended accomplishment written in
the standard, goal, or objective.
The descriptions should be clear and based on the same elements of performance
from level to level. For example, consider the criterion Identifies the Problem in a math‑
ematics problem-solving rubric. If part of the description of proficiency is that a student
“states the problem in terms of its mathematical requirements,” then each level of that
criterion should have a description of the way students do that. Lesser instances of this
aspect of performance might be described like this: “States the problem but does not
use mathematical language” and “Does not state the problem.”
Top-down approach
A top-down approach is deductive. It starts with a conceptual framework that
describes the content and performance you will be assessing. Use the top-down
approach when your curriculum or standards have clearly defined the intended content
and performance. Here are the steps in the top-down approach:
1. Create (or adapt from an existing source) a conceptual framework for achieve-
ment. This should include a description of the intended achievement (e.g.,
what is good narrative writing?) and an outline of the qualities that you
intend to teach and to ask students to demonstrate (the achievement dimen‑
sions or criteria). The outline should describe the continuum of perfor‑
mance for each criterion.
2. Write general scoring rubrics using these dimensions and performance levels.
To do this, organize the criteria either analytically (one scale for each crite‑
rion) or holistically (one scale considering all criteria simultaneously) and
write descriptions for performance at each level. The general rubrics can
and should be shared with students. For example, if you are constructing
mathematical problem-solving rubrics and one of the criteria is “mathemati‑
cal content knowledge,” the general rubrics may say “problem solution
shows understanding of major mathematical concepts and principles.”
Having students recognize the mathematical concepts and principles (e.g.,
“I know this problem involves the relationships among distance, rate, and
time, and those are the major concepts”) is part of the learning.
3. For teacher scoring, you may adapt the general scoring rubrics for the specific
learning goal for the performance you will be scoring. For example, if the
general rubrics say, “Problem solution shows understanding of major
mathematical concepts and principles,” to focus your scoring you might
say, “Problem solution shows understanding of the relationships among
distance, rate, and time.”
4. In either case (whether the rubrics remain general or are adapted to more
specific learning goals), use the rubrics to assess several students’ perfor-
mances, and adapt them as needed for final use. (Nitko & Brookhart, 2011,
pp. 267–268)
Bottom-up approach
A bottom-up approach is inductive. It starts with samples of student work and uses
them to create a framework for assessment. Use the bottom-up approach when you are
still defining the descriptions of content and performance or when you want to involve
students in creating the means of their own assessment. Here are the steps in the
bottom-up approach:
1. Get a dozen or more copies of students’ work. This student work should all
be relevant to the kind of performance for which you are building rubrics
(e.g., mathematics problem solving). However, if possible they should be
from several different tasks (Arter & Chappuis, 2006). The reason for this
is that you want the rubrics to reflect the content and performance descrip‑
tions for the general learning outcomes, not any particular task (e.g., not
any one particular mathematics problem).
2. Sort, or have students sort, the work into three piles: high, medium, and low
quality work. This is the reason that students need to be somewhat familiar
with the concepts and skills. If they are not, their sorting may resort to
surface-level skills like neatness and format rather than the quality of the
thinking and demonstration of skills.
3. Write, or have students write, specific descriptions of why each piece of work
is categorized as it is. Be specific; for example, instead of saying that the
problem was solved incorrectly, say what was done and why: the solution
used irrelevant information, or the problem was approached as a volume
problem when it was an area problem, or whatever.
4. Compare and contrast the descriptions of work and extract criteria or dimen-
sions. For example, if there are several descriptions of students using
relevant and irrelevant information, identifying relevant information in the
problem may emerge as a dimension.
5. For each of the criteria identified in step 4, write descriptions of quality along
the dimensions, for as many levels as needed. You may use three categories as
you did for the sorting, or you may use four, five, or six, depending on how
many distinctions are useful to make and/or how many levels you need for
grading or other purposes. (Nitko & Brookhart, 2011, p. 268)
One criterion from a rubric for laughing, Body Involvement, with levels from highest to lowest:

• The whole body is involved, which may include (but is not limited to) shoulder rolls, head bobbling, whole-body shaking, doubling over, or falling down.
• Cheeks scrunch up. At least one body part besides the face moves; perhaps the shoulders roll or the head is thrown back.
• Lips open, face smiles.
• Lips may open or may stay closed.
Some of the descriptions in this rubric are low-inference, which means that the
observer does not have to draw a conclusion or make any surmises about what the
observation might mean. “Lips open” is a low-inference description. Most people observ‑
ing the same person laughing would agree on whether the person’s lips were open or
not. Notice that even this description is not totally objective: How far apart do lips have
to be before they are described as “open”? Silly, sure, but it’s easier to make this point
with laughing lips than with aspects of student work about which a teacher may hold
longstanding opinions. The point of a description is that it makes you look and report what
you see, not your opinion about it.
Some of the descriptions in this rubric are high-inference, which means that the
observer has to draw a conclusion or make a surmise about what is observed. For
example, “laughter is loud” is fairly low-inference, but “verging on impolite” is high-
inference. Different people might draw different conclusions about how loud laughter
has to be before it verges on impolite.
It would be easy to say, “Just don’t use descriptions that require inferences,” but unfortunately that advice is too simple. Aim for the lowest-inference descriptors that you can use and still
accomplish your purpose of assessing important qualities. As you do this, you will find that
most descriptions you use will require some level of inference, even when they appear
to be objective. For example, a common description of the Proficient level for a Gram‑
mar and Usage criterion for written reports would read something like “Few errors in
grammar and usage, and errors do not interfere with meaning.” There are inferences to
be made. How few is few? How muddled does a sentence have to be before its meaning
is unclear to a reader?
The important point here is that leaving descriptions open to professional judgment
—to making some inferences—is better than locking things down with overly rigid
descriptions. Don’t be tempted to make everything so low-inference (for example,
“three errors in grammar”) that you don’t leave room for good judgment. The role of the
descriptions is interpreting the criteria along a continuum of quality. Three small errors
in grammar may characterize an essay that exhibits much more complex and sophisti‑
cated English communication than an essay that has only one but that doesn’t attempt
much beyond short, simple sentences. Aha! I hope you are thinking already that declar‑
ing writing “complex” and “sophisticated” also requires making inferences. If so, point
made. There is no way to make critical thinking about students’ demonstration of what
they know and can do completely inference-free. If you try, you end up with rubrics that
are pretty trivial, as we explored in Chapter 2.
Excerpt from the teacher’s original Life-Cycle Project rubric (Figure 3.4):

Order of Life-Cycle Stages
• All the stages of the life cycle are in the correct order. Stages are correctly labeled.
• One or more stages of the life cycle are in the wrong order.
• Not included.

Illustrations of Life-Cycle Stages
• Illustrations of each stage are evident.
• One or two illustrations of the life-cycle stages are missing.
• More than 2 illustrations of the life-cycle stages are missing.
• Not included.

Overall Appearance of Poster
• Poster is very neat and organized. Title and all sentences have correct spelling, capitalization, and punctuation.
• Poster is somewhat neat and organized. Some correct spelling, punctuation, and capitalization. Poster shows signs of little effort.
• Poster is messy, many errors, not colored, or unfinished. Poster shows no signs of effort.

Source: Used with permission from Courtney Kovatch, 3rd grade teacher, West Hills Primary School, Kittanning, PA.
The teacher made Descriptions and Illustrations of Life-Cycle Stages more important than the other criteria by allocating more points to them. Given her intended learning outcome, these were the appropriate criteria to weight more heavily.
Revising the rubric. This rubric would be more effective if it were edited to
address the following points:
• Remove format criteria from the rubric and deal with them as work-habits issues.
• Replace points with proficiency levels.
• Edit performance-level descriptions to include fewer counts and more substantive
statements.
Excerpt from the revised rubric (Figure 3.5):

Order of Life-Cycle Stages
• Proficient: All the stages of the life cycle are in the correct order and correctly labeled.
• Nearing Proficient: One or more stages of the life cycle are in the wrong order.
• Novice: No order is specified, or order is incorrect.

Illustrations of Life-Cycle Stages
• Advanced: Each stage has an illustration that gives an especially clear or detailed view about what happens to the animal then.
• Proficient: Each stage has an illustration that helps show what happens to the animal then.
• Nearing Proficient: Some stage illustrations do not show what happens to the animal then.
• Novice: Illustrations do not help show what happens to the animal during its life cycle.
Performance levels. The teacher’s original intent was to add up the points and take
a percentage grade, a common approach used in her school. For this intention, weight‑
ing the title, order, and overall appearance criteria 4 points instead of 6 made sense. For
reasons that are discussed more thoroughly in Chapter 10, using points and percentages
for grading with rubrics is not recommended. Doing so removes some of the observa‑
tion and judgment of work that is a strength of rubrics, and often the results do not
match actual student performance and achievement. The revised rubrics use profi‑
ciency-level descriptions (Advanced, Proficient, Nearing Proficient, and Novice) instead
of points. Then the descriptions can be written to those levels. Notice that one of the
criteria, Order of Life-Cycle Stages, does not have an Advanced level. Knowing the order
of an organism’s life-cycle stages is a characteristic of proficiency. Any advanced under‑
standing about the life cycle would be expressed in the descriptions and illustrations.
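To see why points and percentages can misrepresent what rubrics tell you, consider a simple hypothetical case. Suppose a student's project earns the Proficient level, worth 3 of 4 points, on each of four criteria. The total is 12 of 16 points, or 75 percent, which on many traditional grading scales is a C. The work was judged proficient on every criterion, yet the converted percentage suggests mediocre achievement. Matching the work to the level descriptions, rather than converting points, keeps the grade anchored to what the student actually demonstrated.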
Descriptions of performance at each level. Several “wordsmithing” revisions have
been made in the descriptions of performance at each level from Figure 3.4 to Figure
3.5. First, numerical counts (“one stage,” “one detail”) are replaced with substantive
judgments. This revision actually makes the assessment more accurate, not less accu‑
rate, as you might think. Different animals have different life cycles, and some stages
and details are more important than others. The revised descriptions require figuring
out how clearly the students’ descriptions show their understanding of the research they
have done, rather than the number of facts they copied. This, in turn, will make for a
more accurate assessment of students’ understandings of animal life cycles. And it will
discourage students from copying facts instead of interpreting what they learned by
reading the facts.
Second, there is space for describing performance at an Advanced level—that is, beyond that necessary for simply doing what was required. Because students have the rubrics ahead of time, they know that they can include extra-detailed, more complex descriptions if they are able. The first draft of the rubric provided no reason for students to do anything above or beyond listing stages in their chosen animal's life cycle, copying two facts about each stage, and using some sort of illustration. The revised rubric allows teachers and students to judge how deeply students delve into the subject, and it encourages delving.

Self-reflection
Do you sometimes use rubrics that are more about assignment directions than evidence of learning? If you do, try to revise your rubrics in a similar manner to the way we revised the Life-Cycle Project rubric. Even better, work with a colleague, so you can discuss the issues raised in this chapter as you revise.
Summing up
The chapter provided suggestions for choosing criteria and writing descriptions of
levels of performance, intended to help you write rubrics or adapt rubrics that you find
on the Internet or in other resources. Chapters 4, 5, and 6 discuss three kinds of rubrics
that are effective for teaching and learning, depending on your purpose.
4
General Rubrics for Fundamental Skills
General rubrics are particularly useful for fundamental skills that develop over time.
Writing and mathematics problem solving are two examples. This chapter begins by
describing general rubrics for these skills. In both cases, the disciplines have agreed
on the skills involved. In writing, the 6+1 Trait Writing rubrics have become widely
accepted as clear statements of what good writing looks like. More recently, agree‑
ment has begun to converge in the field of mathematics on what good problem solving
looks like, and although there are many math problem-solving rubrics, they tend to be
more alike than different. Generally accepted criteria for mathematics problem solv‑
ing have included strategic knowledge and mathematical communication since at least
1989, when the National Council of Teachers of Mathematics standards (NCTM, 1989)
emphasized these skills as being on a par with mathematical knowledge.
The chapter ends by describing general rubrics for report writing and creativity that
I have developed. These are important school-based skills, and I have noticed that often
rubrics for these skills are wanting. For example, “creativity” rubrics are often about
artistic presentation rather than true creative accomplishment. I welcome comments,
suggestions, and additional examples from readers.
Evidence of effectiveness
A theme of this chapter is that when rubrics clearly characterize what student work
should look like, instruction, assessment, and learning improve. For the 6+1 Trait Writ‑
ing rubrics, expert opinion and research bear this out. These rubrics have changed the
teaching and learning of writing all across the country.
I asked Judy Arter, a professional developer, author, and researcher who has done
extensive work with the 6+1 Trait Writing rubrics, to comment on this notion that clear
rubrics help with teaching and learning and that the 6+1 Trait Writing rubrics are per‑
haps the most widely known example of that. Here is her reply (personal communica‑
tion, November 21, 2011):
I agree that 6+1 Traits transformed not only the way we think about writ‑
ing, but also the way we think about classroom assessment. It certainly
changed the way I have viewed assessment. In 1980 people were saying,
“We can’t assess writing, it’s too individualistic.” Then it was math problem
solving. Now people are saying, “Of course we can assess writing and math
problem solving, but we can’t assess critical thinking.” All it takes is a group
of people that try to define, in writing, what good ________ looks like, try it
out repeatedly, revise it repeatedly, get examples, etc. The more we do that,
especially with learning objectives that are slippery and hard to define, the
better off we’ll be.
Jan Chappuis, director of the Pearson Assessment Training Institute, says she felt
she learned how to teach writing when she went through the Puget Sound Writing
Program in the early 1980s. Although she found the writing process transformational for
her, there were still problems when it came to conferencing with students and guiding
their revisions (personal communication, December 19, 2011). She says:
What I believe the 6+1 Trait rubrics did was take what individual teachers did
and put it all together, not just responding to one piece here and one piece
there, and put it together to define the domain of writing. It was the first time
I’d seen everything I was trying to teach about writing [in one place]. As a
guide for teaching, and as a guide for students as they are responding to
other students’ writing—what do I want feedback on? The 6+1 Trait rubrics
really did a wonderful job of filling all those needs, in a way that felt more
rigorous, and less idiosyncratic.
Research bears out the experience of Judy Arter, Jan Chappuis, and many teachers,
schools, and districts: The 6+1 Trait Writing rubrics clarify the qualities of writing and
make it easier to teach and learn. A recent federally funded study (Coe, Hanita, Nish‑
ioka, & Smiley, 2011) included 196 teachers and more than 4,000 students in 74 schools
in Oregon. The researchers compared posttest essay scores of students of teachers who
did and did not have professional development in the 6+1 Trait Writing model. They
controlled for students’ previous writing performance and school characteristics (pov‑
erty level, average hours of writing practice, along with teacher experience in general
and in teaching writing) and used a statistical model that acknowledged students were
clustered within schools.
This state-of-the-art statistical analysis indicated that using the 6+1 Trait Writing
model significantly increased student writing scores, with an estimated effect size of
0.109, a small but stable effect. Students improved significantly in three of the six traits
(Organization, Voice, and Word Choice). In the other three traits (Ideas, Sentence Flu‑
ency, and Conventions), performance improved but not enough to be statistically signifi‑
cant. This study used more sophisticated research methods than two previous studies of
the 6+1 Trait Writing model, one of which found improvement (Arter, Spandel, Culham,
& Pollard, 1994) and one of which did not (Kozlow & Bellamy, 2004).
The 6+1 Trait Writing rubrics are organized around the following traits:

• Ideas
• Organization
• Voice
• Word Choice
• Sentence Fluency
• Conventions
• (Presentation)
The “plus one” criterion, Presentation, is used when presenting a polished, published
written product is part of what students are intended to learn.
Originally, the criteria had five performance levels. A more recent version with six
performance levels, which also can be divided according to Proficient/Not Proficient,
has been developed. Appendix A shows the six-point rubrics for grades 3 through 12.
Notice that each element in the performance description is listed in its own lettered
row, making it easy to see the parallels in the descriptions across levels and also making
it easier for students to see how to engineer improvement on a particular trait in their
writing.
Education Northwest has also prepared a version of the six-point 6+1 Trait Writing
rubrics for K–2 students. The K–2 version includes examples of student work as part of
the performance-level descriptions. Appendix B presents this version of the rubrics.
Chappuis continues:

When they are giving kids 5 points for this and 5 points for that, often the
teachers’ vision of quality [writing] is not clear enough. This problem is the
one the 6 Traits solve so beautifully. . . . The two things going on in. . . Ideas
are focus and details. How do I teach kids lessons on what is a narrow focus?
No amount of teaching how to write a topic sentence and supporting details
will get you there. When you teach main idea, this is what you’re getting
at, but a better way is to work on focus and support. . . . Give them tightly
focused topics first, teach them how to select details that are interesting and
important, give them mini-lessons. . . . What are the main ideas inside each
trait, then you teach to those pieces, you ask students to self-assess on those
pieces. When a student’s piece is an organizational mess, often the problem
is focus. I’ve found the 6 Traits not only good for assessing and teaching . . .
but also how to diagnose problems and where to start.
Notice that each of the six traits (seven, if you count Presentation) employs a key
question to focus students and teachers on the meaning of the trait, and then four to six
elements identifying characteristics to look for in the writing. For clarity, the elements
are lettered. The elements are not the traits (or criteria, in the language I have been
using in this book). The elements are “look-fors”: indicators or pointers toward the
criteria.
For example, the key question for the Ideas trait is “Does the writer stay focused
and share original and fresh information or perspective on the topic?” The rubric identi‑
fies six elements to look for in the writing: (a) a narrow topic, (b) strong support for that
topic, (c) relevant details, (d) original ideas based on the author’s own experience, (e)
readers’ questions being answered, and (f) author helping readers make connections
with the writing. Each of these elements could support generating examples, targeted
instruction, student practice using the writing process, self-assessment, peer assess‑
ment, and teacher feedback.
Probably my favorite example of that is how these rubrics show teachers and other
educators another way to evaluate grammar besides counting errors. In the Conventions
trait, grammar per se is not the issue. Rather, as the key question shows, the issue is
how much editing would be needed for readers to be able to understand the meaning
the writer is trying to communicate. Even for Exceptional (Level 6) performance,
“Author uses standard writing conventions effectively to enhance readability; errors are
few and only minor editing is needed to publish.” The desired quality is not “zero
errors,” but rather “readable.”
Another favorite example of mine is how the Organization trait allows for multiple routes to quality work. Many elementary teachers instruct their students in paragraph writing with a formulaic approach. Students start with a topic sentence, list three supporting details, and end with a concluding sentence. This is not a bad protocol, but it is also not the only way to write an organized paragraph. Similarly, some high school writing instruction teaches a five-paragraph essay format. Again, this is not a bad protocol, but it is not the only approach. The key question for the Organization trait again is reading for meaning: "Does the organizational structure enhance the ideas and make the piece easier to understand?"

Self-reflection
Do you use the 6+1 Trait Writing rubrics in your teaching? Did you learn to write using the 6+1 Trait Writing rubrics when you were in school? What has been your experience with them?
Renee Parker (Parker & Breyfogle, 2011) found the same elements were important
in teaching 3rd graders how to solve problems and write about them in ways that would
prepare them to do well on the Pennsylvania System of School Assessment (PSSA).
Using problems based on released items from the PSSA and a student-friendly rubric
she found on the Illinois State Board of Education website, she developed a prob‑
lem set and associated rubric. Ms. Parker adapted the Illinois rubric to be even more
appropriate for 3rd graders. She used nouns instead of pronouns (for example, “the
problem” instead of “it”), made sure all verbs were simple and active, and changed some
words to match the language of elementary mathematics instruction. Parker and Brey‑
fogle’s student-friendly rubric for elementary mathematics problem solving is shown
in Figure 4.1. This rubric assessed the same problem-solving elements—mathematics
concepts, planning and using strategies, and explaining mathematics work in writing—
as did Lane and her colleagues; as did California, Pennsylvania, and Illinois; and, in fact,
as do many other schools, districts, and states too numerous to cite here.
Parker and Breyfogle titled their project “Learning to Write About Mathematics.”
Ms. Parker had embarked on her project because, although her students could solve
problems, they had trouble explaining their reasoning. Mathematical communication was
the area in which her students needed to improve the most, and, in fact, they did just that.
By the end of five weeks, average and below-average students were able to explain their
reasoning as well as her above-average students. The rubric itself didn’t do the trick. What
did it was using the rubric in a series of class activities and individual conferences, helping
the students talk about the criteria and how their work and others’ work met them.
We’ll talk more about how Ms. Parker used the rubric in Chapter 10. The purpose
for showing it in this chapter is to analyze its construction. As noted, this rubric is stu‑
dent friendly. It is written from the students’ point of view, using first person, in language
the students can understand. It is a great example of how “student-friendly language”
does not mean simply easy vocabulary. It means that the descriptions are expressed in the manner that students would think about their work. Thus student-friendly language is not simply a matter of writing style; it's also about students' ways of thinking.

Figure 4.1 Math Problem-Solving Rubric
[The rubric itself is not reproduced here; its criteria are Showing Math Knowledge, Using Problem-Solving Strategies, and Writing an Explanation.]
Probably the most important illustration in this rubric of expressing thinking from
the students’ point of view is in the descriptions of levels of performance for the Showing
Math Knowledge criterion. Mathematics problem-solving rubrics written for adults
describe students’ work in terms like “shows understanding of mathematical concepts
and principles,” “uses appropriate terms and notations,” and “executes algorithms
completely and correctly.” But you can’t ask students to evaluate their own “understand‑
ing of mathematical concepts and principles.” That is a judgment that must be made by
an external observer. In this student-friendly rubric, the concept of understanding has
been flipped over, from what the adult would observe to what the student would do. So
the language became “I figure out . . . .” Student understanding of mathematical concepts
and principles is exhibited in the course of “figuring out” the solution to the problem.
The other two criteria, Using Problem-Solving Strategies and Writing an Explanation, similarly use this flipping principle, describing not what an adult would observe but what a student would do. For example, "I use all the important information . . ." is what a student does when an adult would conclude that the student identified all the important elements of a problem. In these two criteria, incorporating how students would think, as well as speak, about their work into student-friendly language is not quite as obvious as for the knowledge criterion, but it's there nonetheless.

Self-reflection
If you are an elementary school teacher, how can you envision using the Math Problem-Solving Rubric in your classroom? If you teach secondary school mathematics, how might you adapt this rubric for your students?
Writing reports
Written reports are important assignments in many different subject areas. Typi‑
cally the teacher’s intention is for the students to learn some facts and concepts about
the topic, analyze or process the material so that it answers a question or in some way
becomes a property of the student and not just a regurgitation of sources, and com‑
municate the results in the format of a term paper or report. That means the content,
the thinking, and the report writing are all important criteria. The rubric in Figure 4.2
reflects these criteria.
Figure 4.2 General Rubric for Written Projects (may be adapted for specific projects)

4
• The thesis is clear. A large amount and variety of material and evidence support the thesis. All material is relevant. This material includes details. Information is accurate. Appropriate sources were consulted.
• Information is clearly and explicitly related to the point(s) the material is intended to support. Information is organized in a logical manner and is presented concisely. Flow is good. Introductions, transitions, and other connecting material take the listener/reader along.
• Few errors of grammar and usage; any minor errors do not interfere with meaning. Language style and word choice are highly effective and enhance meaning. Style and word choice are appropriate to the project.

3
• The thesis is clear. An adequate amount of material and evidence supports the thesis. Most material is relevant. This material includes details. Information is mostly accurate; any inaccuracies are minor and do not interfere with the points made.
• Information is clearly related to the point(s) the material is intended to support, although not all connections may be explained. Information is organized in a logical manner. Flow is adequate. Introductions, transitions, and other connecting . . .
• Some errors of grammar and usage; errors do not interfere with meaning. Language style and word choice are for the most part effective and appropriate to the project.

2
• The thesis may be somewhat unclear. Some material and evidence support the thesis. Some of the material is relevant, and some is not. Details are lacking. Information may include some inaccuracies. At least some sources were appropriate.
• Some of the information is related to the point(s) the material is intended to support, but connections are not explained. Information is not entirely organized in a logical manner, although some structure is apparent. Flow is choppy. Introductions, transitions, and other connecting material may be lacking or unsuccessful.
• Major errors of grammar and usage begin to interfere with meaning. Language style and word choice are simple, bland, otherwise not very effective or not entirely appropriate.

1
• The thesis is not clear. Much of the material may be irrelevant to the overall topic or inaccurate. Details are lacking. Appropriate sources were not consulted.
• Information is not related to the point(s) the material is intended to support. Information is not organized in a logical manner. Material does not flow. Information is presented as a sequence of unrelated material.
• Major errors of grammar and usage make meaning unclear. Language style and word choice are ineffective and/or inappropriate.

Source: From How to give effective feedback to your students (pp. 63–64), by S. M. Brookhart, 2008, Alexandria, VA: ASCD. Copyright 2008 by ASCD. Reprinted with permission.
This rubric also reflects changes in my own thinking about assessing term papers
and written reports (Brookhart, 1993). I have been persuaded in my own work with
teachers and students, and by advances in the field shown in the work of colleagues
(Arter & Chappuis, 2006), that general rubrics, used repeatedly for assessing similar
skills, help students learn.
General analytic rubrics that define for students what the criteria are for good report
writing as an overall skill, and that focus students on descriptions of quality of work for
those criteria, are useful not only for grading but also for learning. As students use these
rubrics on several different reports, they learn to focus on the elements of content (Do
I have a thesis? Do I support it with detailed, accurate, relevant material? Did I get the
material from appropriate sources?), reasoning and evidence (Did I write logically? Is it
clear how my details support my main points? Can a reader follow my reasoning?), and
clarity (Did I write clearly?).
Strategies for getting students to use rubrics to learn and to monitor their learning
are shared in Chapters 9 and 10. Strategies for using rubrics for grading are presented
in Chapter 11, although here I would foreshadow that discussion by noting that for some
written reports the Content criterion might count double. For now, it is sufficient to see
how the descriptions in these rubrics are general and would bear up under repeated use
for a fundamental skill such as report writing.
Creativity
Creativity is a general skill that is often incorporated as one of the criteria in task-
based rubrics for all sorts of written, oral, and graphic student products. “Wait!” you say.
“How can you assess creativity? Isn’t creativity some ineffable quality, some inspiration
that just springs from the mind in a flash of insight?” Actually, not so. Creative people do
have flashes of insight, but their creative processes are not different in kind from “nor‑
mal” thinking. Creativity is the exceptional use of “familiar mental operations such as
remembering, understanding, and recognizing” (Perkins, 1981, p. 274). If we can name
the sorts of things that creative students do, we can teach creativity and assess it. And
we need to do a better job of that than often happens.
Creativity is sometimes misinterpreted as a description of student work that is visu‑
ally interesting or persuasive or exciting (Brookhart, 2010). If this is the case, it is much
better to call the criterion what it is—visual attractiveness, persuasiveness, or whatever.
A pretty cover on a report may be “creative,” but it is much more likely to be simply a
good use of media (hand lettering and coloring or computer clip art, perhaps), more
akin to a visual arts skill than creativity. Once the criterion is appropriately named, it
may drop off the list because it becomes clear that it is not really related to the learning
outcomes of interest.
I have seen creativity criteria in rubrics that were intended to assess originality, and that's
closer to the mark. The top category for a Creativity/Originality criterion describes
work as very original, creative, inventive, imaginative, unique, and so on. The levels
below devolve from that, with work described as using other people’s ideas, like every‑
one else’s, not very imaginative, and the like. Such rubrics work for me, and they can
work for students and teachers to the degree that they have good examples to show
what “original” means. These would be examples not for students to emulate the con‑
tent, but for them to emulate the way in which the content stands out from others.
However, there is more to creativity than just originality, and as we have seen in the
6+1 Trait Writing rubrics, the more clearly you define the criteria, the more helpful you
will be to students. If you ask, “What do creative students do?” the answer can be sum‑
marized into four categories describing what creative students do.
If these are the characteristics of creative students, then these characteristics should
be evident in their work. Excluding the last one—which is more of a personal trait than
something that would result in evidence in any one specific piece of work—we can
derive four criteria for creative work: the depth and quality of the ideas, the variety of sources, the organization and combination of the ideas, and the originality of the contribution.
Figure 4.3 organizes these criteria into an analytic rubric. I have written the descrip‑
tions of performance along a continuum that could be labeled 4, 3, 2, 1, with 3 being the
Proficient level. Because “proficient at creativity” doesn’t sound right, I have labeled the
levels Very Creative, Creative, Ordinary/Routine, and Imitative. Although no one wants
to be “imitative,” there are times when ordinary work is appropriate. For assignments
and assessments where this is the case, my advice is don’t ask for creative work and
don’t use a rubric (or any other means) to assess it.
Many major assignments already have analytic rubrics associated with them. In that situation, adding four more rubric scales to the assessment might be a bit much. Figure 4.4 organizes the same four criteria for creativity—Ideas, Sources, Organization/Combination, and Originality—into one holistic rubric for creativity. Note that there are still four criteria; it's just that they are considered simultaneously. So although the rubric in Figure 4.4 looks one-dimensional, it's very different from a creativity scale that lists, for example, "very creative, creative, not creative," or something like that. And although you might use the holistic rubric in Figure 4.4 for grading, the analytic version in Figure 4.3 would be better for teaching and learning.

Self-reflection
Do you use rubrics for written reports or for creativity in your teaching? What has been your experience with them? How does that experience help you interpret the information about rubrics for written reports and for creativity in this chapter?
Summing up
This chapter had two main purposes. The first was to make the case for using
general, analytic rubrics for fundamental skills. General, analytic rubrics that are worth
students’ time and effort are the antithesis of the task-based, “directions”-style rubrics
that count things rather than evaluate quality. General, analytic rubrics are good for
learning as well as for grading.
The second purpose was to show several wonderful examples. Each of them illus‑
trates the two defining characteristics of rubrics: appropriate criteria and, for each cri‑
terion, descriptions of performance along a continuum of quality. Their use of language
and their treatment of both the criteria and performance-level descriptions will help you
as you prepare your own criteria and performance-level descriptions. Most important, the way the rubrics in this chapter use criteria and performance-level descriptions should help you get a better sense of the nature of those two defining characteristics of rubrics, another main theme of the book.
Figure 4.3 Analytic Rubric for Creativity

Depth and Quality of Ideas
• Very Creative: Ideas represent a startling variety of important concepts from different contexts or disciplines.
• Creative: Ideas represent important concepts from different contexts or disciplines.
• Ordinary/Routine: Ideas represent important concepts from the same or similar contexts or disciplines.
• Imitative: Ideas do not represent important concepts.

Variety of Sources
• Very Creative: Created product draws on a wide-ranging variety of sources, including different texts, media, resource persons, and/or personal experiences.
• Creative: Created product draws on a variety of sources, including different texts, media, resource persons, and/or personal experiences.
• Ordinary/Routine: Created product draws on a limited set of sources and media.
• Imitative: Created product draws on only one source, and/or sources are not trustworthy or appropriate.

Organization and Combination of Ideas
• Very Creative: Ideas are combined in original and surprising ways to solve a problem, address an issue, or make something new.
• Creative: Ideas are combined in original ways to solve a problem, address an issue, or make something new.
• Ordinary/Routine: Ideas are combined in ways that are derived from the thinking of others (for example, of the authors in sources consulted).
• Imitative: Ideas are copied or restated from the source(s) consulted.

Originality of Contribution
• Very Creative: Created product is interesting, new, and/or helpful, making an original contribution that includes identifying a previously unknown problem, issue, or purpose.
• Creative: Created product is interesting, new, and/or helpful, making an original contribution for its intended purpose (e.g., solving a problem or addressing an issue).
• Ordinary/Routine: Created product serves its intended purpose (e.g., solving a problem or addressing an issue).
• Imitative: Created product does not serve its intended purpose (e.g., solving a problem or addressing an issue).
Figure 4.4 Holistic Rubric for Creativity

Very Creative: Ideas represent a startling variety of important concepts from different contexts or disciplines. Created product draws on a wide-ranging variety of sources, including different texts, media, resource persons, and/or personal experiences. Ideas are combined in original and surprising ways to solve a problem, address an issue, or make something new. Created product is interesting, new, and/or helpful, making an original contribution that includes identifying a previously unknown problem, issue, or purpose.

Creative: Ideas represent important concepts from different contexts or disciplines. Created product draws on a variety of sources, including different texts, media, resource persons, and/or personal experiences. Ideas are combined in original ways to solve a problem, address an issue, or make something new. Created product is interesting, new, and/or helpful, making an original contribution for its intended purpose (e.g., solving a problem or addressing an issue).

Ordinary/Routine: Ideas represent important concepts from the same or similar contexts or disciplines. Created product draws on a limited set of sources and media. Ideas are combined in ways that are derived from the thinking of others (e.g., of the authors in sources consulted). Created product serves its intended purpose (e.g., solving a problem or addressing an issue).

Imitative: Ideas do not represent important concepts. Created product draws on only one source, and/or sources are not trustworthy or appropriate. Ideas are copied or restated from the source(s) consulted. Created product does not serve its intended purpose (e.g., solving a problem or addressing an issue).
There are some occasions when task-specific rubrics are useful. The next chapter
considers task-specific rubrics and how to use them.
5
Task-Specific Rubrics and Scoring Schemes
for Special Purposes
For me and for others who work with teachers and rubrics (Arter & Chappuis, 2006;
Arter & McTighe, 2001; Chappuis, 2009), the advantages that come with using rubrics
to support student learning are so significant that we more or less recommend you
always use general rubrics, except in special cases. In this chapter we explore those
special cases. Don’t be fooled, however, into thinking that means you should use task-
specific rubrics if general rubrics are more appropriate.
For tests whose main purpose is grading rather than formative assessment and learning, task-specific rubrics for individual test questions make for quick, reliable grading. Figure 5.1 gives an example of a task-specific rubric for a 4th grade mathematics
problem that requires students to solve a multistep problem and explain their reasoning.
The Problem
Extended
24 games and 12 shows with correct explanation or work
Satisfactory
Has subtraction error but has games and shows in correct ratio (2:1)
OR
Has 12 games and 24 shows with work
OR
Has 24 games and 12 shows with no work
Partial
Finds 36, and has ratio of 2 to 1 (but not 24 to 12) and sum of games and shows is less than 36
OR
Has 36 games and 18 shows with or without work
OR
Has 72 games and 36 shows with or without work
OR
Shows a process that reflects understanding of the question, but does not find the correct ratio
Minimal
Finds 36 by subtraction or adding on to 34 to get 70
OR
Number of games plus number of shows is 36
OR
Has games and shows in a two to one ratio but nothing else correct
Incorrect
Incorrect response
Source: National Assessment of Educational Progress released items: 2011, grade 4, block M8, question #19. Available: http://nces.
ed.gov/nationsreportcard/itmrlsx/
Even a brief point-based (2-1-0 or 3-2-1) scoring scheme needs descriptive information so you know how to decide what level a student's response exemplifies.
Brief essay questions on tests often use multipoint scoring as well. Figure 5.2 pres‑
ents an example of an essay question and a task-specific rubric a teacher would use to
score it.
Figure 5.2 A Science Essay Test Question Scored with a Task-Specific Rubric
Question
Lightning and thunder happen at the same time, but you see the lightning before you hear the thunder.
Explain why this is so.
________________________________________________________________________
Complete
Student responds that although the thunder and lightning occur at the same time, light travels faster
than sound so the light gets to your eye before the sound reaches your ear.
Partial
Student response addresses speed and uses terminology such as thunder for sound and lightning for
light, or makes a general statement about speed but does not tell which is faster.
Unsatisfactory/Incorrect
Student response does not relate the speeds at which light and sound travel.
Source: National Assessment of Educational Progress released items: 2005, grade 4, block S13, question #10. Available: http://nces.
ed.gov/nationsreportcard/itmrlsx/
As you look at the examples in Figures 5.1 and 5.2, you are probably noticing an
important point—namely, that they are holistic (as opposed to analytic) rubrics. The
criteria for good work are all considered together. In the mathematics problem-solving
example, identifying the operations required for the problem, selecting and using the
right numbers, calculating correctly, and communicating the explanation by showing all
work are all assessed at once. In the science essay example, identifying the issue as one
of relative speed of travel and communicating that clearly are assessed together. This
approach is appropriate for questions on a test, where the score for individual questions
will be combined with scores for other questions to make a total test score. The advan‑
tage of analytic rubrics, which allow students to receive feedback on the criteria indi‑
vidually and use it for improvement, makes little difference in this case.
Multipoint test questions can also assess student understanding of a body of knowledge; for
example, a question might ask students to list and explain steps in a scientific process.
I hope these are not the only, or even the main, type of constructed-response test
questions you pose for your students (Brookhart, 2010). However, for questions like
this, a point-based scoring scheme works well—often better than task-specific rubrics
would. There are at least two reasons this is so. One, in a point-based scoring scheme,
points can be allocated to the various facts and concepts in the body of knowledge you
intend to assess in a manner that weights knowledge of the various elements according
to their importance. Two, a point-based scoring scheme enumerates the elements—the
facts and concepts. Thus if recalling specific information is what the question intends to
assess, this enumeration allows you to check for each one. Figure 5.3 shows an example
of a point-based scoring scheme for an elementary social studies test question.
Figure 5.3 A Social Studies Test Question Scored with a Point Scheme
Question
Fill in the chart below with the name of each of the three branches of government and the main pur-
pose of each branch.
________________________________________________________________________
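To make the mechanics of a point-based scheme concrete, here is a minimal sketch in Python. The actual point allocations for this question are not reproduced above, so the one-point-per-branch and one-point-per-purpose weights below are assumptions for illustration only, not the scheme in Figure 5.3.

# Hypothetical point-based scoring scheme for the three-branches question.
# The weights are assumptions for illustration, not the allocations in Figure 5.3.
POINTS_PER_BRANCH_NAMED = 1      # naming legislative, executive, judicial
POINTS_PER_PURPOSE_CORRECT = 1   # correctly stating each branch's main purpose

def score_response(branches_named: int, purposes_correct: int) -> int:
    """Total points earned, out of a 6-point maximum."""
    branches_named = min(max(branches_named, 0), 3)
    purposes_correct = min(max(purposes_correct, 0), 3)
    return (branches_named * POINTS_PER_BRANCH_NAMED
            + purposes_correct * POINTS_PER_PURPOSE_CORRECT)

# A student who names all three branches but explains only two purposes earns 5 of 6.
print(score_response(branches_named=3, purposes_correct=2))  # prints 5

Because each element is enumerated separately, a scheme like this lets you check for every fact or concept the question intends to assess and weight the elements differently if some matter more than others.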
Summing up
Task-specific rubrics serve a purpose—namely, grading. A book about rubrics
wouldn’t be complete without discussing task-specific rubrics, and that has been the pur‑
pose of this chapter. This chapter also considered point-based scoring schemes that are
not rubrics, again for the sake of completeness. Task-specific rubrics and point-based
scoring schemes are two methods you should have in your scoring repertoire, even if
they aren’t the most important ones.
General rubrics are much more flexible and serve at least two purposes: learning
and grading. And general rubrics can be used with students, which is why I consider
them more important than task-specific rubrics. A special case of using general rubrics
occurs when schools and teachers adopt standards-based grading policies based on
demonstrating proficiency levels and coordinate all their rubrics. This situation requires
consensus from all the teachers in a grade or department that teach the same standards.
If the consensus exists, however, then assessment is simplified in some ways. Chapter 6
discusses proficiency-based rubrics that assist in standards-based grading.
6
Proficiency-Based Rubrics
for Standards-Based Grading
whose answers required advanced thinking? That 100 percent really indicates that the
student is “Proficient.” You would need a different assessment, one that has questions
or tasks that would allow advanced students to demonstrate extended thinking, to know
whether students could do that.
The example in Figure 6.1 is too general to use as is and is not meant to be used for
formative assessment or grading. It is a general framework or template for your thinking
as you make more specific rubrics for individual assignments.
Figure 6.2 shows a general proficiency-based rubric for the standard "Understands the concept of area and relates area to multiplication and to addition" (CCSSI Standards for Mathematics 3.MD), based on the general framework in Figure 6.1.
One important thing to notice is that the general rubric describes performance in
terms that are still too general to use for any particular assessment. The general rubric
begs a question at each level—for example, at the Proficient level: What does it look like
when a student shows a complete and correct understanding of the concept of area and
the ability to relate this concept to multiplication and addition? This is what you will work
out for each assessment.
Note that these are not fully designed assessments. The list is intended to show that
many different assessments could be indicators of the standard.
Notice also that some of these assessments would not provide evidence of
Advanced-level understanding because such understanding entails the following:
“Shows a thorough understanding of the concept of area and the ability to relate this
concept to multiplication and addition, and extends understanding by relating area to other concepts or by offering new ideas or by solving extended problems."
There is no Advanced (4) level because stating the explanation in your own words
does not match the performance expectations for Advanced. The description of Pro‑
ficient performance matches the description of Proficient in the general rubrics. The
teacher would need additional assessments to give evidence of Advanced performance
(extending understanding by relating area to other concepts or by offering new ideas or
by solving extended problems).
This specific proficiency-based rubric, then, describes what performance at each
level looks like on the specific assessment. It is still a general rubric, as opposed to a
task-specific rubric, because performance is described in general enough terms that
you can share the rubric with students. (A task-specific version of the rubric would
include the explanation of area itself, and that is not what is needed here.)
Suppose further that the teacher also designs a performance assessment in which
she asks students to write a real-life problem scenario whose solution requires finding
area, and then to solve the problem and explain their reasoning. (The performance
assessment would need more complete directions than that; I don't mean to imply that one sentence alone would constitute a complete assignment for students. Because here we are concerned only with the proficiency-based rubrics, we can proceed without them.) The teacher might use the following specific
proficiency-based rubric:
Notice that, like the rubric for explaining area in their own words, this rubric is general
enough that it may be shared with students at the time the assignment is made. This
rubric lends itself to looking at exemplars, supporting student self- and peer assessment,
and focusing teacher feedback on work in progress. And an important point is that this
specific proficiency-based rubric matches the general proficiency-based rubric for the
standard shown in Figure 6.2. A good way to think about it is that each proficiency-based
rubric is an instance or special case of the general one.
You can also use proficiency-based rubrics for describing performance on a unit
test. Sometimes—but not usually—you can simply define a percentage range for each
of the proficiency-based levels described by the general rubric for the standard. I say
“not usually” because you can only do that meaningfully when the test covers that stan‑
dard and no other, and when all the questions are answerable at all levels, including
Advanced. This is not usually the case. Most tests cover more than one standard or
include questions that do not allow Advanced performance.
Suppose the teacher had designed a unit test that included a set of proficiency-level
questions about area and its relationship to multiplication and addition, and an open-
ended question that allowed students to show extended insights and connections about
the concept of area (or not, depending on how the student answered). The teacher
might use the following specific proficiency-based rubric:
Notice that for proficiency-based rubrics, the percentage correct for the total test
may not be the appropriate percentage to use. Proficiency-based rubrics require consid‑
ering the question “percentage of what?” The test might have included questions about
other standards as well, and those would not figure in to the assessment of student
proficiency on this standard.
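A minimal sketch of the "percentage of what?" idea follows. The item tags, point values, and scores are invented for illustration; the point is simply that the percentage is computed over the questions aligned to the standard of interest, not over the whole test.

# Hypothetical unit-test items, each tagged with the standard it assesses.
items = [
    {"standard": "area (3.MD)",           "possible": 2, "earned": 2},
    {"standard": "area (3.MD)",           "possible": 2, "earned": 1},
    {"standard": "multiplication (3.OA)", "possible": 2, "earned": 2},
    {"standard": "area (3.MD)",           "possible": 4, "earned": 3},
]

def percent_on_standard(items, standard):
    """Percentage correct using only the items aligned to one standard."""
    relevant = [item for item in items if item["standard"] == standard]
    possible = sum(item["possible"] for item in relevant)
    earned = sum(item["earned"] for item in relevant)
    return 100 * earned / possible

# 6 of 8 points on the area items: 75 percent, regardless of the total test score.
print(percent_on_standard(items, "area (3.MD)"))  # prints 75.0

Even then, as noted above, a percentage maps onto proficiency levels only when the items for that standard allow performance at every level, including Advanced.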
[Figure: a student chart for tracking the learning target "I can map the beginning, middle, and end of a story and tell how these parts work together to tell the story" across five stories (The Wonderful Cat, What the Moon Said, One Hundred Apples, Dalton and His Dog, The Great Mistake).]
Summing up
This chapter has described proficiency-based rubrics, which are coordinated with
definitions of various proficiency levels, standard by standard. Their common frame‑
work allows students to set goals and track progress. The common framework for
proficiency-based rubrics also simplifies teachers’ evaluation of students’ progress and
achievement.
Figure 6.4 Examples of Arriving at a Final Proficiency Grade on One Standard

The chart lists each student's proficiency scores across a series of assessment dates (9/9 through 10/7), with columns for a final grade on each standard (Std. 1, Std. 2, Std. 3); add sections for standards and assessments as needed. The scores recorded for Standard 1, followed by the final Standard 1 grade, are:
• Andrew: 2, 1, 2, 3, 3, 3; final grade 3
• Bailey: 2, 2, 4, 3, 4, 4; final grade 4
• Cort: 3, 1, 3, 2, 3, 1; final grade 2
Andrew: Andrew’s performance on Standard 1 shows the pattern of a learning curve, with a beginning practice period followed by a leveling off of achieve-
ment. After beginning at the level of Nearing Proficiency, Andrew’s performance on Standard 1 leveled out at a reliable 3, or Proficient, level. The
median of his performance after this leveling out is a 3 (median of 3, 3, and 3 = 3).
Bailey: Bailey’s performance on Standard 1 shows the pattern of a learning curve. After beginning at the level of Nearing Proficiency, Bailey’s performance on
Standard 1 leveled out at around 4, or Advanced. The median of her performance after this leveling out is a 4 (median of 4, 3, 4, and 4 = 4).
Cort: Cort’s performance does not form the pattern of a learning curve, with a beginning practice period followed by a leveling off of achievement. There is
no discernible improvement or decline in his performance on Standard 1 over time. The teacher should try to find out why this is the case. Unless the
teacher’s investigation finds some reason to revise the proficiency ratings over time, the best summary of Cort’s performance is the median of what he
has demonstrated, which is a 2, or the Nearing Proficiency level (median of 3, 1, 3, 2, 3, 1 = 2).
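For readers who want to see the arithmetic in Figure 6.4 spelled out, here is a minimal sketch. It assumes the teacher has already judged where each student's performance leveled off (passed in as an index); the score lists are the ones shown in the figure.

from statistics import median

def final_proficiency(scores, leveled_off_at):
    """Median of the proficiency scores recorded after performance leveled off."""
    return median(scores[leveled_off_at:])

# Andrew's Standard 1 scores level off at the fourth assessment (index 3): median of 3, 3, 3.
print(final_proficiency([2, 1, 2, 3, 3, 3], leveled_off_at=3))  # prints 3
# Bailey's scores level off at the third assessment (index 2): median of 4, 3, 4, 4.
print(final_proficiency([2, 2, 4, 3, 4, 4], leveled_off_at=2))  # prints 4.0

For a pattern like Cort's, where there is no leveling off, the teacher would investigate first and, absent a reason to revise the ratings, summarize with the median of all the scores he has demonstrated, as the figure describes.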
7
Checklists and Rating Scales:
Not Rubrics, but in the Family
This chapter has two goals. First, I want to distinguish checklists and rating scales from
rubrics, with which they are often confused. Don’t use checklists and rating scales in
situations when rubrics are more appropriate. Second, I want to describe some situa‑
tions when checklists and rating scales can be useful.
Sometimes people use the term rubric—incorrectly—to mean any list-like evalua‑
tion tool, and therefore checklists and rating scales are sometimes confused with
rubrics. The most important difference between checklists and rating scales on the one
hand and rubrics on the other is that checklists and rating scales lack descriptions of
performance quality. As we have seen, rubrics are defined by two characteristics:
criteria for students’ work and descriptions of performance levels. Because checklists
and rating scales lack one of these two pieces, they are not rubrics.
Checklists and rating scales do have criteria. The criteria are the "list" of things that you check or rate. Checklists and rating scales are great when you don't need descriptions of performance quality, but rather just need to know whether something has been done (checklist) or how often or how well it has been done (rating scale).

Self-reflection
Do you use checklists or rating scales in your teaching? For what purposes do you use them? How do you involve students in their use?
Checklists
A checklist is a list of specific characteristics with a place for marking whether that char-
acteristic is present or absent. Checklists by definition break an assignment down into
discrete bits (the “list”). This clarifies what is required for the assignment—namely, to
do this list of things. Most checklists are easier to use than rubrics because they require
low-inference decisions—is something there or isn’t it?
Checklists are particularly useful in two kinds of situations. First, checklists are
great for both teachers and students to use for situations in which the learning out‑
comes are defined by the existence of an attribute, not its quality. Some simple learning
outcomes are like this. For example, many elementary teachers I have worked with
use some version of a student checklist for sentence-writing skills like the example in Figure
7.1. Putting a period at the end of a sentence is a yes-or-no outcome; either the period
is there or it isn’t. Checklists for writing can be simpler—for example, for kindergarten
students the list might include just capital letter, period, and complete idea. Or they can
be more complicated, such as including in a checklist for older students the elements of
spelling, grammar and usage, and so on.
[Figure 7.1: a student checklist for sentences, with items that each begin "My sentence . . ."]
Second, checklists are helpful for students to use to make sure they have followed
directions for an assignment, that they have all the required parts of some project, or
that they have followed format requirements for a report. Wiliam (2011) calls these
“preflight checklists” (p. 141) if they are used before work is turned in. He recommends
a technique in which a partner uses the checklist to ascertain that an assignment is ready to
turn in and becomes accountable for the completeness of the partner’s work.
Notice that in this second use the checklist is not the evaluation of the quality of
the project. The checklist is used to make sure directions have been followed and all
required elements are present. These elements are what get “checked.” The teacher
will use rubrics based on the criteria for good work—which usually will not be the same
as the assignment’s required elements. For example, a report checklist might include
entries like “Has an introduction,” “Has a thesis sentence or research question,” “Has
at least three sources,” “Includes a chart or diagram,” and so on, essentially listing the
elements required by the directions for the report. The rubrics the teacher and students
use to evaluate the quality of the report would include criteria for the understanding and
analysis of the topic of the report, clear communication of reasoning and supporting
evidence, and so on.
Rating scales
A rating scale is a list of specific characteristics with a place for marking the degree to
which each characteristic is displayed. I think of rating scales as two-dimensional check‑
lists. Like checklists, they break down assignments into discrete bits. However, instead of
a yes-no or present-absent decision, rating scales use either frequency or quality ratings
—hence the name “rating scale.”
Frequency ratings are, unsurprisingly, scales that list the frequency with which some
characteristic is observed, from always (or very often) to never (or very seldom), or
something like that. They are great to use when you want to serve a purpose similar
to that of a checklist—evaluating whether various attributes exist in some work—but
the decision is not an all-or-nothing one. Frequency scales are excellent for assessing
performance skills (for example, in public speaking, “Makes eye contact” frequently,
occasionally, seldom, or never). Frequency scales are also excellent for assessing behav‑
iors, work habits, and other learning skills. Many behaviors are well described by noting
whether they occur always, frequently, sometimes, or never, for example.
Figure 7.2 lists several different kinds of frequency scales. To create a rating scale,
list the characteristics you wish to assess, as you would for a checklist, and then select
the frequency scale that best matches these characteristics. Show the frequency scale as
multiple-choice options, as boxes in a table, or as points on a line.
Figure 7.2 Kinds of Frequency Scales and Their Uses
• Scales: Always, frequently, sometimes, never; Consistently, often, sometimes, rarely. Use: to rate how often students exhibit behaviors or learning skills (e.g., works independently; follows directions; completes homework).
• Scales: Always, usually, sometimes, never; Almost always, usually, often, occasionally, almost never. Use: to rate how often students have certain feelings or attitudes about their work (e.g., I am confident in my work).
• Scale: All, most, some, none. [Sometimes used with a noun—for example, all problems, most problems, some problems, none of the problems; all sentences, most sentences, some sentences, none of the sentences.] Uses: to rate how often problems or exercises exhibit certain characteristics (e.g., labeled the answer; showed all work), and to rate how often a certain kind of work exhibits desired characteristics (e.g., instead of a checklist for each sentence, students might make an overall rating: My sentences . . . begin with capital letters, have proper punctuation, and so on).

Figure 7.3 shows a frequency scale a high school chemistry teacher used to assist her students in checking their work on volume and temperature problems. The list includes six skills the students needed to use in their work. The frequency scale indicates on how many problems (all, most, some, or none) the skill was demonstrated. The student in this example could easily see he should recheck his problems, especially to make sure he had written the Charles's Law equation.

[Figure 7.3, not reproduced here, listed each skill with a rating of whether it was demonstrated on all problems, most problems, some problems, or none of the problems.]
Quality ratings are scales that list judgments of quality—for example, excellent,
good, fair, poor. A big problem with quality rating scales is that they are often mistaken
for rubrics and used in place of rubrics. Quality ratings are almost never helpful for learn‑
ing. There are at least three reasons for this.
One, quality ratings constitute a rush to judgment in that they skip a step: they pro‑
nounce the verdict without describing the evidence. Quality ratings, in effect, declare,
“This is excellent because I rated it excellent,” and so on. There are “performance lev‑
els,” but they are not descriptions. I have seen many examples of what were titled
Two, performance-level statements such as "Solves problems at an excellent level," "Solves problems at a proficient level," and so on, are just rating scales dolled up into sentences. These sentences do not contain any descriptive information about performance that will move learning forward.
Three, quality ratings often lure teachers into using task-based criteria because quality ratings are easy to apply to such criteria. For example, for a written report, task-based criteria might be Introduction, Text, Illustrations, and References. You just judge the quality—in effect, assign a grade—to each part of the task. In fact, quality rating scales used with schoolwork amount to the same thing as assigning grades without comments. Rubrics began to be popular in the 1980s as an antidote to this very thing. As educators began to see performance assessment as a solution to the problem of too much minimum-competency testing, rubrics became the solution to the problem of the "just a number" results of such tests. To co-opt rubrics into quality rating scales does violence, in my mind, to the whole point and purpose of using rubrics in the first place.

Self-reflection
Can you identify any checklists or rating scales you use that you want to revise to become rubrics? Can you identify any rubrics you use that might be more effective if revised into checklists (for example, to lay out the requirements for an assignment)? How would you proceed with these revisions?
Summing up
Why include a chapter on checklists and rating scales in a book about rubrics? I
hope that after reading the chapter several reasons are clear. First, distinguishing check‑
lists and rating scales from rubrics should make the characteristics of rubrics clearer.
Rating scales often masquerade as rubrics, and I hope you can identify those and avoid
them or revise them. Second, checklists and frequency rating scales have some impor‑
tant uses, on their own or in conjunction with rubrics. Checklists are great for helping
students see whether they have followed directions, included all required elements of an
assignment, adhered to format requirements, and the like. Frequency rating scales are
good for assessing certain kinds of performance skills and for assessing behavior, work
habits, and other learning skills. Finally, this chapter identified and defined quality rat‑
ing scales, which are often mistaken for rubrics. Be on the lookout for those and stamp
out their use whenever possible. They are Trojan horses that will allow old-fashioned
grading judgments to slip in where rubrics were intended.
8
More Examples
This chapter contains more examples of rubrics in several different content areas and
grade levels: elementary reading, middle school science, and high school technology
education. I encourage you to read all the examples, even if the content or grade level is
not one you currently teach.
[Figure 8.1: an oral reading fluency rubric with a student self-assessment. Source: Used with permission from Katrina D. Kimmell, West Hills Primary School, Kittanning, PA.]

The rubric in Figure 8.1 presented students simultaneously with the learning target (oral reading fluency) and the criteria
(Expression, Phrasing, and Speed). Before using the rubrics for self-assessment, Ms.
Kimmell shared the learning target with students, using the rubrics but also using mod‑
eling and demonstration. Here is what she did.
First, she explained the context. She told students they were going to practice read‑
ing. After practicing a few times, students taped their oral reading with a tape recorder.
Then they evaluated their own performances using the Oral Reading Fluency Rubric.
Second, Ms. Kimmell explained the learning target and criteria for success. She did
this by using the rubric, but also by modeling and demonstration. She showed students
that the rubric has three different sections. Each section is something needed for stu‑
dents to become really good readers. The first one is Expression. She asked students,
“What do you think that means? What does reading sound like when it has good expres‑
sion?” Then she paused for class discussion.
The second criterion is Phrasing. The teacher explained that phrasing means
that you don’t read a sentence word by word like this (and modeled “robot reading”).
Instead, you read a few words at a time—just like we sound when we are talking. Then
she paused for class discussion and demonstrations of “robot reading” versus “reading
like we talk.”
The last criterion is Speed. Good reading is not so fast that no one can understand
what you are reading, but not so slow that it’s boring or you lose your place. Then the
teacher went over the descriptions of top-level (4) performance in each category. The
students practiced reading, then taped themselves after practice. Finally, the students
used the Oral Reading Fluency Rubric to assess their own performance.
Before reading, Daniel, the boy whose example is shown, looked at a chart to see
how many correct words per minute he had read the previous week and found that it
was 51. Ms. Kimmell told him that she wanted him to try to read at least 53 words this
time, but he said he wanted to try for 61. She said, “That would be great, but anything
over 53 would be a good job.” He and the teacher discussed strategies he could use, like
finger tracking and sounding out words. Then when he read his passage, he read 61 cor‑
rect words per minute—and said, “I told you I would.”
Daniel’s self-assessment in Figure 8.1, coupled with the questions he asked and his
response to his success, suggest that learning how to read fluently was a target that he
understood. The rubric helped him engage with the target according to specific criteria
and helped him interpret his success as more multidimensional than just a words-per-
minute score.
Figure 8.2 Science Laboratory Report Rubric

Introduction—Stating Research Questions and Hypotheses
• 4: States a hypothesis that is based on research and/or sound reasoning and is testable. Report title reflects question or hypothesis.
• 3: States a hypothesis that is based on research and/or sound reasoning and is testable. Report title may not reflect the question or hypothesis.
• 2: States a hypothesis, although basis for the hypothesis is not clear or hypothesis is not testable. Report title may not reflect the question or hypothesis.
• 1: Does not state a hypothesis. Introduction may be a general statement of the topic or the assignment, or may be missing or unclear.

Results—Collecting Data
• 4: Results and data are accurately recorded, organized so it is easy for the reader to see trends. All appropriate labels are included.
• 3: Results are clear and labeled. Trends are not obvious.
• 2: Results are unclear, missing labels, and trends are not obvious at all.
• 1: Results may be present, but too disorganized or poorly recorded to make sense of.

Analyzing Data
• 4: The data and observations are analyzed accurately. Trends are noted. Enough data were taken to establish conclusion.
• 3: Analysis is somewhat lacking in insight. There is enough data, although additional data would be more powerful.
• 2: Analysis is lacking in insight. Not enough data were gathered to establish trends, or analysis does not follow the data.
• 1: Analysis is inaccurate and based on insufficient data.

Interpreting Results and Drawing Conclusions
• 4: Summarizes data used to draw logical conclusions about hypothesis. Discusses real-world applications of findings.
• 3: Summarizes data used to draw conclusions about hypothesis. Some logic or real-world application may be unclear.
• 2: Conclusions about hypothesis are not derived from data. Some logic or real-world application may be unclear.
• 1: No conclusions about hypothesis are evident. Logic and application of findings are missing.
After using the rubric, the special education teacher reflected on the experience.
Most of his students, he said, “had a greater understanding of what constituted good-
quality work” and “a clear picture of what was expected.” Students who did this were
able to compare their work with the criteria and performance descriptions in the rubric
and, based on that comparison, make decisions about how to improve their work. In
addition, they took greater responsibility for their own learning.
One group did not meet expectations on every criterion, but even for that group the
rubric was helpful. The rubric allowed both teacher and students to identify the one area
(drawing and expressing conclusions that follow from the data) to work on. From this
perspective, the rubric was helpful even for unsuccessful work because it furnished the
information needed for the students’ next steps.
Welding
Technology education is an important content area that is outside my own teaching
background and experience. Andrew Rohwedder is a technology education teacher at
Richardton-Taylor High School in Richardton, North Dakota. Figure 8.3 presents Mr.
Rohwedder’s welding rubric.
The welding rubric is an excellent example of a well-constructed rubric. It is clear
and descriptive. It can be shared with students and would support learning and forma‑
tive assessment, especially student self-assessment, as well as grading.
Because technology education is not a content area I know anything about, I was
able to read this rubric as a new learner would. If that is the case for you, consider how
well-constructed rubrics clarify the learning target. When I first read the rubric, I was
able to envision what a good weld would look like.
I also had a question. Two of the criteria seemed to be about appearance (Weld Width
and Height, and Appearance). And yet, given how well the rubric was designed, I doubted
that Mr. Rohwedder had simply written the same thing twice. So I asked him what the dif‑
ference was between those two criteria, and I learned some more about welding.
He said, “The width of a weld will depend on many factors controlled by the welder.
Usually the width of a weld is proportional to the thickness of the metal and how the
joint is prepared. The height of the weld will depend on the heat and amount of filler
material laid down by the welder. Once again this is determined by the parent material
and joint preparation and type of joint. The appearance of a weld should be smooth,
uniform, and chipped free of slag.”
Figure 8.3 Welding Rubric
Levels: Advanced (4 points), Proficient (3 points), Basic (2 points), Below Basic (1 point).

Slag removed (100%: All slag chipped. Weld bead is clean.)
• Advanced: Bead is clean, has been chipped and wire-brushed.
• Proficient: Bead is somewhat clean. Minimum slag at the edges of the bead.
• Basic: Bead needs major chipping and brushing.
• Below Basic: Shows little care about quality.

Weld width and height (100%: Uniform width and thickness throughout the entire length of each weld.)
• Advanced: Bead is uniform width all along the length of each weld. Has a smooth appearance.
• Proficient: Bead maintains width and length. Shows some small blemishes along the weld.
• Basic: Not a uniform thickness throughout the weld. Thickness goes to extremes.
• Below Basic: Weld is cut off in places, not uniform along the weld. Shows bare spots.

Appearance (100%: Smooth, with uniform dense ripples; doesn't show the bead traveling too fast or slow.)
• Advanced: Weld shows a constant speed and uniformity the entire length.
• Proficient: Weld shows a constant speed with some blemishes that are minimal.
• Basic: Weld shows definite areas of speeding up and slowing down. Ripples tend to be coarse.
• Below Basic: Weld has been done too fast or too slow. Weld is not complete. Trapped impurities in the weld.

Face of bead (100%: Convex, free of voids and high spots, shows uniformity throughout the bead.)
• Advanced: Has a nice rounded look. Is not overly high, or low. Bead covers a wide area of each weld.
• Proficient: Bead is well rounded, mostly uniform over the length of the weld. Shows some high spots and low spots.
• Basic: Bead shows many high and low areas. Total lack of uniformity throughout the weld.
• Below Basic: Weld does not blend into one single bead.

Edge of bead (100%: Good fusion, no overlapping or undercutting.)
• Advanced: Sides and edges are smooth, blending into each weld. Undercutting is kept to a minimum. Weld does not float on surface.
• Proficient: Moderately smooth blending. Undercutting and float are present. Strength of the weld is still strong.
• Basic: Float and undercut are very apparent. Weld lacks strength and flow.
• Below Basic: Metal is burned through. Weld has no connection to metal.

Beginning and ending full size (100%: Crater well filled.)
• Advanced: End of each weld is complete; the line doesn't taper off.
• Proficient: Weld ending is full but shows some tapering and a crater present.
• Basic: Crater distinctly present at the end of the bead.
• Below Basic: Metal is burned through at the end.

Surrounding plate (100%: Welding surface free of spatter.)
• Advanced: Spatter is kept to a minimum.
• Proficient: Some spatter is present but not displeasing.
• Basic: Spatter is in large amounts.
• Below Basic: Spatter takes away from the integrity of the weld.

Penetration (100%: Complete without burn-through.)
• Advanced: Weld penetrates deep into the metal and adds strength and fusion to the edges and depth.
• Proficient: Weld penetrates deep but does not resurface through the bottom of a jointed weld.
• Basic: Weld is uneven in depth, lacks uniformity along weld length.
• Below Basic: Weld floats on top of the metal. Has no strength.

Source: Used with permission from Andrew Rohwedder, Technology Educator, Richardton-Taylor High School, Richardton, ND.
|
89
|
90 How to Create and Use Rubrics
For me, this interchange was an object lesson in one of the main points this book
is trying to make. Good rubrics help clarify the learning target for students (or anyone
else who does not yet have a clear vision of it, like me with welding). Good rubrics
become a foundation for learning and formative assessment as well as for grading. Most
important, good rubrics are tools the students can use to help themselves learn.
Summing up
Self-reflection: What is your current thinking about rubrics after reading Part I of this book? How does it compare with your thinking from the first self-reflection, before you began to read?

This book is full of examples, but I think you almost can't have too many! Chapter 2 included the counterexample "My State Poster" rubric. Chapter 3 showed the silly example of a rubric for laughing and the example of the life-cycle project rubric, a work-in-progress that illustrated how you might approach revising and improving rubrics. Chapter 4 included examples of general rubrics for foundational skills: the 6+1 Trait Writing rubrics, student-friendly rubrics for mathematics
problem solving, and rubrics for report writing and for creativity. Chapter 5 presented
some examples of task-specific rubrics, and Chapter 6 contained examples of general
and specific proficiency-based rubrics for understanding the concept of area and its
relationship to multiplication and division. Chapter 7 contained examples of checklists
and rating scales, both to demonstrate their usefulness in their own right and to show by
contrast how they are not rubrics. This chapter added three more examples to the mix,
in elementary reading, middle school science, and high school technology education.
My hope is that from this collection of examples, you can by induction generalize
the characteristics of good rubrics yourself. Lay your conclusions beside what I have
listed as the characteristics of good rubrics in the book and—I hope!—see that they
match. At this point, then, you should have a firm idea of what effective rubrics look like.
This chapter concludes Part 1 of the book, which was about the various types of
rubrics (and, in Chapter 7, the related tools—checklists and rating scales) and how to
write them. Part 2 explains how to use rubrics. I hope that as you explore the different
uses of rubrics, you will see more and more why it is important to emphasize the two
defining factors of appropriate criteria and descriptions of performance along a con‑
tinuum of quality. These elements are the genius of rubrics because they are the “active
ingredients” in all of the uses described in Part 2.
Part 2
How to Use Rubrics
9
Rubrics and Formative Assessment:
Sharing Learning Targets with Students
Learning targets describe what the student is going to learn, in language that the
student can understand and aim for during today’s lesson (Moss & Brookhart, 2012).
Learning targets include criteria that students can use to judge how close they are to the
target, and that is why rubrics (or parts of rubrics, depending on the focus of the lesson)
are good vehicles for sharing learning targets with students.
The idea that students will learn better if they know what they are supposed to learn
is so important! Most teacher preparation programs emphasize instructional objectives,
which are a great planning tool for teachers. However, instructional objectives are writ‑
ten in teacher language (“The student will be able to . . .”). Not only are the students
referred to in third person, but the statements about what they will be able to do are in
terms of evidence for teachers. In contrast, learning targets must imply the evidence
that students should be looking for. Sometimes, for simple targets, instructional objec‑
tives can be turned into learning targets by simply making them first-person (“I will
know I have learned this when I can . . .”). More often, however, the language of the
evidentiary part of the learning target—what students will look for—also needs to be
written and demonstrated in terms students will understand. After all, if most of your
students understand what your instructional objective means, you probably don’t need
to teach the lesson.
The most powerful way to share with students a vision of what they are supposed to
be learning is to make sure your instructional activities and formative assessments (and,
later, your summative assessments) are performances of understanding. A performance
of understanding embodies the learning target in what you ask students to actually do.
To use a simple, concrete example, if you want students to be able to use their new
science content vocabulary to explain meiosis, design an activity in which students have
to use the terms in explanations. That would be a performance of understanding. A
word-search activity would not be a performance of understanding for that learning
target because what the students would actually be doing is recognizing the words.
Self-reflection: How do you share learning targets with your students? Do you ever use rubrics as part of this communication? Besides giving the students the rubrics, what do you do? What have you learned from doing this?

Performances of understanding show students, by what they ask of them, what it is they are supposed to be learning. Performances of understanding develop that learning through the students' experience doing the work. Finally, performances of understanding give evidence of students' learning by providing work that is available for inspection by both teacher and student. Not every performance of understanding uses rubrics. For those that do, however, rubrics support all three functions (showing, developing, and giving evidence of learning).
• Give students copies of the rubrics. Ask them, in pairs, to discuss what the rubrics
mean, proceeding one criterion at a time.
• As they talk, have them write down questions. These should be questions the pairs
are not able to resolve themselves.
• Try to resolve the questions with peers. Put two or three pairs together for groups of
four or six. Again, students write down any questions they still can’t resolve.
• Collect the final list of questions and discuss them as a whole group. Sometimes these
questions will illuminate unfamiliar terms or concepts, or unfamiliar attributes of
work. Sometimes the questions will illuminate a lack of clarity in the rubrics and
result in editing the rubrics.
Source: From Formative Assessment Strategies for Every Classroom, 2nd ed. (p. 90), by Susan M. Brookhart, 2010, Alexandria, VA:
ASCD. Copyright 2010 by ASCD. Reprinted with permission.
and have the students write in the bottom row. You will need one diagram like this for
each criterion.
Have students discuss each criterion in turn, using these questions or other similar
questions appropriate to the rubrics under consideration:
• How many criteria are there? This question ensures students can find the criteria on
the rubric.
• What are the names of each criterion, and what do these words mean? This question
focuses the students on the meaning of the criteria as traits or qualities before they
begin writing.
• For each criterion in turn, going one at a time, read the descriptions of work along the
whole range of progress. Discuss what elements are described and how they change
from level to level. For each criterion, students should do this for the range of
performance-level descriptions before they start to write.
• Put the level descriptions in your own words, varying the same elements from level
to level as the teacher’s rubrics did. Students should discuss the wording with their
partners until they agree on what to write.
• If work samples have been provided, do the new “translated” rubrics still match the
work at the intended levels? This is a check on the rephrasing, so that students can
make sure their translations preserved the original meaning.
• Give a rubric to students before you give them an assignment. The assignment should
be a performance of understanding; that is, it should be a clear instance of demon‑
strating the knowledge and skills that you intend for students to learn.
• In pairs, students take turns explaining the rubric to their partners. This step lasts
until the students think they understand how the rubric applies to their work on the
assignment they are about to do.
• Students begin the assignment. Students do not need to remain in their “rubric pairs”
to work but should work on the assignment however it is designed—individually, in
other groups, or whatever.
• Halfway through the assignment, students return to their rubric partners and explain
how what they are doing meets the criteria and performance levels they discussed
at the beginning. Students may question each other about their work and their
explanations.
• Students finish the assignment. Students go back to work individually or in their
work groups, however the assignment is designed.
• When students have finished the assignment, they return to their rubric partners and
explain how what they have done meets the criteria and desired performance level.
After partners are satisfied with each other’s explanations, students turn in the
work. They may turn in the results of this final peer evaluation as well.
• Identify the content knowledge and skills the rubrics will be assessing. For co-con‑
structed rubrics, the knowledge and skills should be something students are
already somewhat familiar with—for example, writing an effective term paper that
requires library and Internet research.
• Give students some sample work. Have students review the work. For short pieces of
writing—for example, a brief essay—the work can be read aloud. Or students can
look over the examples in pairs or small groups.
• Students brainstorm their responses to the work in terms of strengths and weak-
nesses. The more specific the responses are, the better. For example, “The report
answered all my questions about stars and raised some new ones I hadn’t thought
of” is more specific than “It was a good report”; or “I didn’t understand the explana‑
tion of how stable stars burn” is more specific than “The report wasn’t clear.”
• Students categorize the strengths and weaknesses in terms of the attributes they
describe. The teacher may have to guide students here so the attributes are not
attributes of the task (for example, cover, introduction, body, references) but rather
aspects of the learning that was supposed to occur (for example, understanding of
the content, communication of the content, clarity and completeness of explanation,
and so on).
• Further discussion and wordsmithing of the attribute categories continue until there is
agreement on criteria for the rubric. Attributes can be grouped and ungrouped until
they express criteria at the appropriate level of generality. Attributes that are not
important for the learning to be assessed may be removed from the list of criteria.
For example, handwriting may be an attribute that was noted but, upon discussion,
found to be unrelated to the content and skills the rubrics are concentrating on.
• For each criterion, students discuss what elements should be described and how they
might change from performance level to performance level. As this discussion pro‑
ceeds, record the results as drafts of descriptions of performance at each level. One
effective way of drafting performance-level descriptions is to start with the descrip‑
tion of ideal work and “back down” the quality for each level below it. The “rubric
machine” in Figure 9.2, or something like it, may help with this. A separate template
of this sort is needed for each criterion.
• When students arrive at a draft rubric, they apply it to the original work samples. Addi‑
tional work samples may be used here as well. Students note where questions arise
and use these observations to revise the rubric.
Source: From Formative Assessment Strategies for Every Classroom, 2nd ed. (p. 86), by Susan M. Brookhart, 2010, Alexandria, VA:
ASCD. Copyright 2010 by ASCD. Reprinted with permission.
pieces of work were difficult to evaluate or identify with a particular level of a given crite‑
rion (“cloudy”). Then discuss, again in pairs, small groups, or whole group, the reasons
for these designations. The “clear” and “cloudy” designations and discussion will illumi‑
nate what the criteria mean and how students are understanding them.
Highlighters or colored pencils. Students use highlighters or colored pencils to
mark qualities described in the rubrics and on sample work. For example, if the rubric
says “Identifies the author’s purpose and supports this conclusion with details from the
text,” the student would highlight this statement in the rubrics and at the location in
his paper identifying the author’s purpose and supporting details. Students learn what
the criteria and performance-level descriptions mean by locating and reviewing specific
instances in the work. A version of this activity can also be used with the students’ own
papers for formative assessment (see Chapter 10). When used with sample work before
students have begun their own work, students can talk about what they highlighted and
why in pairs, small groups, or as a whole class. Comments from these discussions can
be used as an introduction to the knowledge and skills students will be learning.
Summing up
This chapter has explored ways to use rubrics for sharing learning targets and cri‑
teria for success with students. This is the first, and foundational, strategy for formative
assessment. It is also a foundational strategy for effective instruction. Although rubrics
are not the only way to communicate to students what they are about to learn, they are
an excellent resource for doing so. Rubrics make especially good vehicles for sharing
learning targets when the target is complex and not just a matter of recall of informa‑
tion. The reason is that rubrics bring together sets of relevant criteria. The nature of a
complex understanding or skill is that several qualities must operate at one time.
10
Rubrics and Formative Assessment:
Feedback and Student Self-Assessment
Formative assessment is an active and intentional learning process that partners the
teacher and students to continuously and systematically gather evidence of learning
with the express goal of improving student achievement (Moss & Brookhart, 2009, p. 6).
Formative assessment is about forming learning—that is, it is assessment that gives
information that moves students forward. If no further learning occurred, then whatever
the intention, an assessment was not formative.
Self-reflection: How do you use rubrics for feedback and student self-assessment in your classroom?

Chapter 9 described how to use rubrics to help clarify learning targets for students—the foundational strategy of formative assessment. This chapter covers the use of rubrics for giving feedback that feeds forward, for supporting student self-assessment and goal setting, and for helping students ask effective questions about their work.
presents several strategies for using rubrics as the basis for teacher and peer feedback.
Use one of them or design similar strategies that work in your context.
teachers use blue highlighters. Where there is agreement on what constitutes evidence
for performance as described in the rubric, the resulting highlights will be green.
This is not just a coloring-book exercise, however. Important information comes
with the comparison. If most of the highlighted area is green, both the student and the
teacher are interpreting the work in the same way and more or less agreeing on its
quality. If most of the highlighted area is yellow, the student is seeing evidence that the
teacher is not. It may be that the student is not clear on the meaning of the criterion, or
the student may be overvaluing the work. If most of the highlighted area is blue, the
teacher is seeing evidence that the student is not. The student may be not clear on the
meaning of the criterion or undervaluing the work.
Any place where teacher and student perspectives vary on the worth of the stu‑
dent’s work relative to criteria can be fertile ground for written feedback from the
teacher, student questioning, or conferencing. The feedback, questions, or conferences
should address more than just understanding the highlighting or the description of
current work. What should come next? Provide feedback on what the student can do to
improve the work.
Paired-peer feedback
Peers can use rubrics to give each other feedback. The rubrics provide structure
for peer discussions, making it easier for the students to focus on the criteria rather than
personal reactions to the work. The rubrics also aid dialogue. As the students use the
language of the rubrics to discuss each other’s work, they are developing their own con‑
ceptions of the meaning of the criteria while they are giving information to their peers.
The simplest form of peer feedback involves students working in pairs. The teacher
should assign peers that are well matched in terms of interest, ability, or compatibility,
depending on the particular assignment.
Peer feedback works best in a classroom where constructive criticism is viewed as
an important part of learning. In a classroom characterized by a grading-focused or eval‑
uative culture (“Whad-ja-get?”), peer feedback may not work well; students may hesitate
to criticize their peers so as not to imply there is anything “wrong.” Try peer feedback
only when you are sure that your students value opportunities to learn. If you try peer
feedback and it doesn’t work very well, even after careful preparation, be prepared to
ask yourself whether your students are telling you they are more focused on getting a
good grade than improving their work.
Assuming that you have a learning-focused classroom culture, you still need to
prepare students for peer feedback. Make sure that the students understand the rubrics
they will be using and that they can apply them to anonymous work samples accurately.
Make sure the students understand the assignments on which they will be using the
rubrics for the peer feedback. Set a few important ground rules and have students
explain, and even role-play, what they mean. Use rules that make sense for your grade
level, students, and content area. Here are examples of some common peer-feedback
ground rules:
Finally, peer feedback gets better with practice. When you use paired-peer feedback,
observe the pairs and give them feedback on their feedback, as it were. Look for, and
comment on, how students use the rubrics, how clearly they describe the work, how use‑
ful their suggestions for improvement are, how supportive they are, and so on. Just as for
any skill, giving and receiving peer feedback can (and should) be taught and learned.
This section presents some examples of how that might be done, and I encourage you to
devise others that fit the students, content, and grade level you teach.
For example, consider a student in a writing class that was using the 6+1 Trait Writing
rubrics. One student’s strategy for improving his performance (and learning) under the
Word Choice criterion was “I will check a thesaurus any time a word isn’t as powerful,
precise, or engaging as I would like.” Thus the student would ask himself, “Did I use a
thesaurus in my writing today? Did it help me choose more powerful, precise, or engag‑
ing words?” Students can make their own charts to record these checks, or they can
make a mark next to where they have written their strategies on their rubrics.
Journaling. In classes where regular journaling is part of student self-reflection,
students can record their strategies and their reflections upon the use of those strategies
as part of their regular self-reflection. Here the questions are similar—Did I actually use
the strategy I planned to and did it help me improve my work?—but there is room for
reflecting on what specifically the strategy helped (or did not help) the student do and
why that might be the case. Teachers may or may not read these reflections. The intent
is for students to exercise metacognition, to think about their thinking.
Think-pair-don’t share. Give students five minutes at the end of a work session
to work in pairs. Each partner will describe the strategies that were planned, whether
the strategies were used, to what degree the strategies helped, and why this might be
so. This activity is a sort of debriefing of the work session, strategy use, and perceptions
of learning. Similar to the conventional think-pair-share activity, students work with
partners for this self-reflection session. Unlike a conventional think-pair-share activity,
however, students do not share the results of their paired conversations with the whole
class. The teacher may speak with one or more of the pairs as they are reflecting, to help
them focus, as needed.
Charting progress
Charting progress means two different things. Students think of their progress
toward completing individual assignments or projects, and they think of progress more
broadly as learning. It is a good idea to have students chart at least the latter (learning
progress) and sometimes the former.
Charting progress on an individual assignment, with rubrics. Give students
the rubrics. Midway through the work, ask them to mark the rubrics at the level where
they are for each criterion. Students can place a vertical line or a large dot at the appro‑
priate level on each criterion. This can be done individually or in pairs. When the assign‑
ment or project is finished but before students turn it in, ask them to self-assess their
finished product with the rubrics. Then have them draw an arrow from the first dot or
line to the second, right on the rubrics, to make a graphic illustration of their progress.
Charting longer-term learning progress. General rubrics that are used across
tasks can be used for longer-term charting of progress during a report period, a semes‑
ter, or even a year. Depending on the purpose, students can use general rubrics for
foundational skills (Chapter 4) or standards-based grading rubrics (Chapter 6) to keep
track of their learning of those skills or standards. Have students construct a histogram
with time on the horizontal axis and performance levels on the vertical axis.
Figure 10.1 gives an example of one student tracking her progress on the criterion
“Writing an Explanation” in the Math Problem-Solving Rubric in Figure 4.1. Additional
charts would be needed for the other criteria in the rubric. Each performance is listed,
and then the student colors the bars in the graph to the height corresponding to her
developing ability to show mathematical knowledge.
I want to make several very important points right away, because such a chart is
prone to misinterpretation in classrooms that are grade oriented rather than learning
oriented. First, this chart is for formative assessment and represents the student’s
practice and learning. It does not represent final outcomes, except perhaps that the last
entry recorded shows the answer to the question “Where am I now?” The entries would
not be averaged or otherwise summarized into a grade. This chart is the student’s way
of keeping track of her progress as she is learning. She will eventually receive a grade
from a summative assessment of mathematical knowledge.
Figure 10.1 (bar graph)

WRITING AN EXPLANATION

Vertical axis (performance levels):
5: I write what I did and why I did it. I explain each step of my work. I use math words and strategy names. I write the answer in a complete sentence at the end of my explanation.
4: I write what I did and a little about why I did it. I explain most of my work.
3: I write a little about what I did or why I did it, but not both. I explain some of my work.
2: I write something that doesn't make sense. I write an unclear answer.
1: I don't write anything to explain how I solved the problem.

Horizontal axis (assessments): Oct. 7 Problem set #1; Oct. 14 Problem set #2; Oct. 21 Problem set #3; Oct. 28 Problem set #4; Nov. 4 Problem set #5; Nov. 11 Problem set #6. The student colors each bar to the performance level she reached on that assessment.

Note: This example uses the Math Problem-Solving Rubric shown in Figure 4.1.
Second, the assessments or learning opportunities themselves are not “equal,” and
it is therefore mathematically inappropriate to summarize this chart by averaging. What
is constant is the existence of the descriptions of performance for the various levels of
Writing an Explanation, which are shown on the vertical axis. These performance levels
describe the student’s “steps” in learning. The assessments are simply opportunities for
the student to practice, learn, and show what she knows. The purpose of the chart is for
the student to see a learning curve. Bars that rise indicate progress. Bars that stay the
same or fall indicate lack of progress. The graphic representation helps students focus
on the performance levels and plan their next steps.
on the top category (or the category they are shooting for, if not the top one), and turn
the elements into questions:
For rubrics that do not use such straightforward, student-friendly terms, you can
do one of two things. You can construct student-friendly rubrics with the students, with
the constraint that the descriptions have to be “I” statements that can be turned into “Do
I” questions. That exercise in itself is good for helping students learn exactly what their
target is (see Chapter 9).
Alternatively, you can have students use the criteria and descriptive elements in
rubrics to pose their own questions. For example, Level 4 of the Content category in the
rubric for written projects in Figure 4.2 says:
The thesis is clear. A large amount and variety of material and evidence
support the thesis. All material is relevant. This material includes details.
Information is accurate. Appropriate sources were consulted.
Brainstorm with students how questions can be written from these statements. If you
start with “The thesis is clear,” students might suggest questions such as these:
• Is my thesis clear?
• How clear is my thesis?
• How do I know my thesis is clear?
• How can I make my thesis more clear?
You can continue with each element in the description until you have a set of reflection
questions for students to use. Or if your students get the hang of this quickly, you can
use a few rounds of question generating to demonstrate how to turn descriptions into
questions about work, and then have students write their own questions as they reflect.
Summing up
One of the advantages of rubrics is their usefulness for formative assessment.
Chapter 9 explored ways to use rubrics to share learning targets and criteria for success
with students. Chapter 10 explored ways to use rubrics to develop student work and to
give evidence of learning that students can use for further improvement. When students
have written, drafted, practiced, honed, and polished, eventually it is time for a grade to
certify the level of achievement or accomplishment the students have reached. Chapter
11 discusses ways to use rubrics in grading.
11
How to Use Rubrics for Grading
What is grading?
We commonly use the term grading to mean two different things. We say, “I graded
that assignment” or “I graded that test,” meaning a grade was assigned to an individual
assessment. We also use grading to refer to the process of summarizing a set of individ‑
ual grades to arrive at a grade for a report card. Report card grades are usually assigned
either to a standard or to a subject area, depending on whether the report cards are
standards based or traditional. In this chapter, I talk about using rubrics for grading
individual assessments and also about summarizing a set of grades that includes rubrics.
example, one rubric score might be recorded under the standard for science content,
one for inquiry skills, and one for communication.
If you do need one overall grade (for example, “Science”) and must summarize an
assessment with one overall score, use the median or mode, not the mean, of the scores
for each criterion. Figure 11.1 summarizes how to calculate mean, median, and mode,
the three most common ways to combine more than one score into a typical score. The
figure summarizes all three, even though the median is recommended, so you can see
how all three summarize “typical” performance but do so in different ways.
Figure 11.1 Three Ways to Summarize a Set of Scores: Mean, Median, and Mode
(Example: one student's scores of 6, 5, 3, and 3 on the four criteria of a six-point analytic rubric)

Mean
• The sum of the scores divided by the number of scores; also known as the average
• Example: (6 + 5 + 3 + 3) ÷ 4 = 4.25, so Mean = 4.25

Median
• The score that has half of the scores above it and half below it, even if it falls between two scores (line the scores up in order first)
• Also known as the 50th percentile
• Example: 6 5 3 3; the middle falls between the 5 and the first 3, so Median = 4

Mode
• The most frequently occurring score in the set of scores
• Sometimes helpful to think of it as the "most popular" score
• Example: 6 5 3 3, so Mode = 3
Figure 11.1 uses as an example a performance that was scored with a six-point
analytic rubric with four criteria, on which one student scored 6, 5, 3, and 3, respectively.
The example assumes all four criteria were of equal weight, which will not always be
the case. To weight a criterion more heavily than others to calculate the mean, multiply
the weight times the score. For example, to double the weight of the criterion on which
a student scored 6, for the mean, use 12 instead of 6, changing the mean to 5.75. To
weight a criterion more heavily than others to calculate the median, repeat it. That is,
use two 6s in the lineup, changing the median to 5.
I recommend the median for most summarizing purposes. The median is less prone
to being pulled by extreme scores than is the mean, as the examples in Figure 11.1
show. And the median is more stable than the mode, as the examples also show. Sup‑
pose one of the 3s in the example had been a 5? One change in one criterion, probably
not representing a hugely different performance overall, would change the overall score
by two points—a lot on a six-point scale. Plus the median is easy to calculate—for most
analytical rubrics you can just count in your head. In the next section, when I will recom‑
mend using the median for summarizing sets of grades on individual assessments, you
can let a spreadsheet do your median calculations.
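If you do hand the arithmetic off to a spreadsheet or a short script, the calculation is simple. Here is a minimal sketch in Python, purely for illustration, that reproduces the Figure 11.1 example of scores 6, 5, 3, and 3 and the weighting trick of repeating a criterion's score in the lineup; it uses only the standard statistics module.

```python
from statistics import mean, median, mode

# Criterion scores from the Figure 11.1 example:
# a six-point analytic rubric with four criteria.
scores = [6, 5, 3, 3]

print(mean(scores))    # 4.25
print(median(scores))  # 4   (falls between 5 and 3 once the scores are lined up)
print(mode(scores))    # 3   (the most frequently occurring score)

# To weight one criterion more heavily for the median, repeat its score in the lineup.
# Doubling the weight of the criterion scored 6 changes the median to 5.
print(median([6, 6, 5, 3, 3]))  # 5
```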
• What kinds of individual grades am I going to summarize for the report card grade?
Are all your individual grades on scales from rubrics, or are your individual grades
a mixture of rubrics and percentages? If all your grades are from rubrics, are they
all on the same scale? Or were some four-point rubrics, some six-point, and so on?
This makes a difference in how you combine them. It’s the familiar “apples and
oranges” logic. Before you combine numbers meaningfully, they should be on the
same scale.
• How must I report the students’ grades on the report card? Does your report card
use letter grades (for example, A, B, C, D, F) or percentages or standards-based
performance categories? This distinction makes a difference in how you combine
individual grades as well.
• What is my report card grade supposed to mean? I’ll take it as a given that your report
card grade is supposed to reflect achievement (as opposed to effort, attendance,
and so on). Is achievement reported by subject or by standard on your report
cards? The reason this makes a difference for combining grades is that if achieve‑
ment is separated by standard, you can privilege the most recent evidence; as
the student improves on the standard, the grade will go up, even if it started low,
because it represents learning in the same domain. If achievement is reported
by subject, then the order of the evidence makes less difference, because differ‑
ent standards are covered in different units. Doing poorly on one standard at the
beginning of the report period is not subject to revision because of doing well on a
different standard toward the end of the report period.
You can use Figure 11.2 to help you decide on a method to use for summarizing
your students’ individual grades into their report card grades. You will notice in Figure
11.2 that the methods follow three general steps. Each of these steps is worked out in
different ways depending on the answers you gave to the three questions (what kinds of
grades, how must you report, and what the reported grade is supposed to mean).
The figure lists each question in turn, and then displays two flow charts that start
with the answer to the first question. All the recommended methods accomplish these
objectives:
• Identify the set of individual grades you are going to summarize, based on what the
report card grade is supposed to mean.
• Make sure the individual grades to be summarized are on the same scale.
• Use a summarizing method that expresses in one grade the “typical” achievement
level shown in the set of individual grades.
Figure 11.2 (decision chart for summarizing individual grades into report card grades)

[The chart walks through the three questions discussed in the text. What kinds of individual grades are you going to summarize? How must you report the students' grades on the report card: letter grades (e.g., A, B, C, D, F), percentages (e.g., 98%, 93%), performance categories on the same proficiency scale (e.g., Advanced, Proficient, Basic, Below Basic), or other performance categories (e.g., Advanced, Proficient, Basic, Below Basic; or Outstanding, Satisfactory, Needs Improvement)? What is your report card grade supposed to mean: achievement for the report period summarized by standard (e.g., Numbers and operations, Finding main idea) or summarized by subject (e.g., Reading, Mathematics, Science, Social Studies)? For each branch, the recommended method is to group individual grades by standard or by subject, transform each individual grade to the reporting scale, and summarize with the median proficiency level or, for percentages, the median or mean (average) percentage, weighting as necessary. Changing rubrics to percentages is not recommended, but if policy requires it, see the text.]

Note: This chart summarizes the most common kinds of grading decisions. If the grading policies you must follow are not listed here, use the explanations in the text to figure out the best method to use.
Therefore, it’s very important to know what specific set of information you will need
for your report card grade before you record your individual grades. If you have recorded
individual grades by standard or subject, as needed, you can easily calculate meaning‑
ful report card grades. If you have not, or if you have made overall grades out of rubric
results that should have been kept separate, you will not have the right information at
hand when it’s time to calculate final grades. Even worse, if you recorded improperly orga‑
nized grades into gradebook software that calculates final grades automatically, you may
not even be aware that your final grades do not mean what you intended them to mean.
Begin the report period by selecting the right organization method for your grade
records. It’s not hard to organize ahead of time. It’s very difficult, and sometimes impos‑
sible, to reorganize mixed-up results.
Make sure the individual grades to be summarized are on the same scale.
Here, the term scale means the numbers or levels in which the individual grade is
expressed. There might be several different kinds of scales within the set of grades you
identified as the set to be summarized for one report card grade. For example, you might
have percentages, some four-point rubrics, some six-point rubrics, and so on. Obviously
a 4 conveys very different information about achievement on those different scales, and
yet if you were to average them, those different meanings would become muddled.
If you have used rubrics with the same proficiency scale for every graded assessment,
whether it was a test or performance assessment or project or assignment of any type,
your individual grades are already on the same scale for grading purposes. Chapter 6
described how to create this kind of rubric.
If all your recorded grades for individual assignments are from rubrics, but the rubrics
have not been designed such that the levels have the same meaning, or if the rubrics do not
all have the same number of levels, or both, you need to put them all on the same scale
before you combine them. This follows the “comparing apples and oranges” principle.
You need to make sure all your rubrics are apples (or oranges, or bananas for that mat‑
ter); that is, that they are all comparable and can be meaningfully combined.
Whether your set of individual grades is by subject or by standard, if you are report-
ing in letter grades (for example, A, B, C, D, F) or performance categories (for example,
Advanced, Proficient, Nearing Proficiency, Not Yet), or in any other short scale that is
really a list of ordered categories of achievement, the easiest thing to do is to transform
each individual grade into a category on that scale. Then when you combine the grades,
your result will already be on the scale you need, and you’ll save yourself having to do a
second transformation. I recommend you do this at the time you record each individual
grade, but if you haven’t, do the transformation before calculating the report card grade.
For example, consider Figure 11.3. The top section lists the grades for five assess‑
ments for four students. These grades illustrate a common situation that occurs when
teachers have used a mixture of tests or quizzes graded with percentages and perfor‑
mance assessments graded with rubrics. Using multiple, different measures is a good
practice. It allows for assessing different aspects of a content domain, at different cogni‑
tive levels, and with different performance modalities. It does, however, create a grading
situation with incompatible scales, as illustrated in Figure 11.3. Take a moment to verify
for yourself that if you simply “averaged” the numbers (added them up and divided by
5), you would get uninterpretable results.
The solution to this problem is to discern the meaning of each individual grade
in terms of the grading scale you must use for reporting so that you can meaningfully
combine the results. In this illustration, the report card requires letter grades. If the
report card required reporting proficiency levels (Advanced, Proficient, and so on), the
procedure would be the same but instead of converting to letters, you would convert to
proficiency levels.
In this example, Assessments #1 and #3 were tests, and their results were in per‑
centages. The percentages were transformed to letter grades using the scale 90–100=A,
80–89=B, and so on, for ease of illustration. You would, of course, use whatever scale was
in place in your district or school to do this transformation.
Assessment #2 was a performance assessment scored with four-point rubrics,
where 3 was Proficient. These rubric results were transformed to letter grades by a
judgment call, that 3 (Proficient) represented B-level work, 4 (Advanced) represented
A-level work, and 2 (Nearing Proficiency) represented C-level work. Assessments #4
and #5 were performance assessments scored with six-point rubrics, similar to the
6+1 Trait Writing rubrics, where 4 and above meant Proficient. In this example, to be
consistent with the decision to use a B to represent Proficient, the six-point rubric was
transformed as follows: 6=A, 5=A-, 4=B, 3=C, 2=D, 1=F. As for the percentage transfor‑
mations, you would use the conventions in your school or district to transform the rubric
results into letters, which would not have to match the transformations in this example.
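If your gradebook is electronic, these transformations can be written as a small lookup. The sketch below is only an illustration: the percentage cut-offs and the rubric-to-letter mappings are the ones used in this example (your school or district conventions would replace them), and the student's five grades are made up for the sake of the demonstration.

```python
def percent_to_letter(pct):
    """Transform a percentage to a letter grade using the illustrative 90-80-70-60 cut-offs."""
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    if pct >= 60:
        return "D"
    return "F"

# Judgment-based mappings from this example: a four-point rubric where 3 (Proficient) = B,
# and a six-point rubric where 4 and above meant Proficient.
FOUR_POINT_TO_LETTER = {4: "A", 3: "B", 2: "C"}   # other levels would follow your own conventions
SIX_POINT_TO_LETTER = {6: "A", 5: "A-", 4: "B", 3: "C", 2: "D", 1: "F"}

# A hypothetical student's five assessments: two percentage tests (#1 and #3),
# one four-point performance assessment (#2), and two six-point performance assessments (#4, #5).
letters = [
    percent_to_letter(88),
    FOUR_POINT_TO_LETTER[3],
    percent_to_letter(92),
    SIX_POINT_TO_LETTER[4],
    SIX_POINT_TO_LETTER[5],
]
print(letters)  # ['B', 'B', 'A', 'B', 'A-']: every grade is now on the same letter scale
```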
If you are using rubrics but your report card grades must be expressed in percentages,
that’s a stickier wicket. Technically you can’t add precision (more distinctions or gra‑
dations in the scale) that wasn’t there in the first place. So mapping a small number of
performance categories from rubrics onto a scale with 101 possible points (0 to 100) is
not mathematically an appropriate thing to do. If you have to end up with percentages,
however, it won’t help you much if I just say don’t use percentages with rubrics. You
would be technically correct but left with no way to report students’ grades.
Therefore, I suggest that if you have to report in percentages you work to change
your district’s reporting system and in the meantime know that you are compromising
for the sake of following required policy. If you have to report final grades in percent‑
ages, it’s better to use judgments about student learning than mathematics that distorts
the meaning about student learning. Whether all your recorded grades for individual
assignments are from rubrics or some are from rubrics and others are expressed in per‑
centages, you need to put them all on the percentage scale before you combine them.
The purpose is to make sure that they are all comparable and can be combined to yield a
meaningful result on the percentage scale.
If you know you need percentages, you can get such scores from rubrics in one of
two ways. You can calculate percentages mathematically from your rubrics, or you can
use a conversion chart based on judgment.
To calculate percentages from rubrics (I cringe even writing that! What a horrible
position to be in!), make sure all the rubrics you use for assignments have at least 20
total points. Thirty is even better. Using at least 20 total points helps address the prob‑
lem of percentage meanings not coinciding with rubric meanings, which was explained
in the introduction to this chapter. (Recall the example that three out of four is 75 per‑
cent, which is not “Proficient” on most percentage grading scales).
You won’t be able to avoid this problem entirely, but you can improve it a bit by
using larger numbers. Five 3s, on five 4-point rubrics, still yields 75 percent (15 out of
20 possible points). However, in the three-out-of-four case, the scale jumped from 75
percent to 100 percent; it was impossible to score anything in between. With five 4-point
rubrics for a total of 20 points, there are possible scores in between (80 percent for four
3s and a 4, 85 percent for three 3s and two 4s, and so on). The bottom line is, if you have
to convert rubrics to percentages—kicking and screaming at compromising information
about student learning—at least do it with total points of 20 or more.
I once had a question from a teacher during a workshop on grading. She asked
about rubrics in which the lowest category is a 1. Often the description of performance
in the low category includes things like “Answer was unreadable” or even “No answer
was given.” She was concerned that students would, in effect, be “getting points” for
doing no work. That was an interesting question, but it rests on the assumption that the
grades are for doing work. Grades are supposed to be measures of achievement, and
they are always on arbitrary scales invented by educators. If a student scores 25 percent
because of getting five 1s on an assignment scored with five 4-point rubrics, the student
still fails. In fact, that 25 percent is exactly the same as the chance (guessing) score for a
multiple-choice test with four-option questions scored right/wrong and then converted
to a percentage.
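To make the arithmetic concrete, here is a minimal sketch of the calculation just described. The scores are hypothetical; the point is simply that with five four-point rubrics (20 possible points) the percentages can land between 75 and 100, and that a string of 1s still produces a failing 25 percent.

```python
def rubric_points_to_percent(criterion_scores, points_per_criterion):
    """Convert a set of rubric scores to a percentage of the total possible points."""
    earned = sum(criterion_scores)
    possible = points_per_criterion * len(criterion_scores)
    return 100 * earned / possible

# Five criteria, each scored on a four-point rubric: 20 possible points in all.
print(rubric_points_to_percent([3, 3, 3, 3, 3], 4))  # 75.0  (five 3s)
print(rubric_points_to_percent([4, 3, 3, 3, 3], 4))  # 80.0  (four 3s and a 4)
print(rubric_points_to_percent([4, 4, 3, 3, 3], 4))  # 85.0  (three 3s and two 4s)
print(rubric_points_to_percent([1, 1, 1, 1, 1], 4))  # 25.0  (five 1s; still a failing grade)
```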
Using a conversion chart based on judgment to transform rubric scores into
percentages is a bit more defensible than calculating percentages from rubric scores
because the judgments are about what the scores say about student learning. It is still
mathematically impossible to make scores more precise than they were in the first
place, but at least the judgments can be thoughtful. Figure 11.4 is an example of a con‑
version chart constructed by using teacher judgment.
Figure 11.4 Conversion Chart Based on Teacher Judgment

Rubric Score    Percentage
4.0             99
3.5             92
3.0             85
2.5             79
2.0             75
1.5             67
1.0             59
The chart in Figure 11.4 reflects judgments about learning. The reasoning flowed
from the premise that the 3 was supposed to reflect proficiency and should therefore
end up being a “middle B” on the percentage scale used in the school. In this example,
for ease of illustration we are representing percentages on a scale where 90 to 100 is an
A, 80 to 89 is a B, and so on. Thus, in this conversion chart, a student with performance
at the bottom level of the rubric fails but is at the top of the F range on the percentage
scale. A conversion chart in a school with a different scale would have different num‑
bers. A conversion chart based on different judgments about what the four levels of
performance should represent on the scale would also have different numbers.
It is best if conversion charts like this one are constructed by several teachers or
a whole department or school. The more perspectives reflected in the judgments, the
better. And the more agreement there is about the judgments, the easier it will be to use
them and explain them to students and parents.
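In an electronic gradebook, a judgment-based chart like the one in Figure 11.4 becomes a simple lookup table. The sketch below uses the values from Figure 11.4 and a hypothetical set of criterion scores; a chart built on a different percentage scale, or on different judgments, would simply contain different numbers.

```python
from statistics import median

# Conversion chart from Figure 11.4, built by teacher judgment
# (3.0 = Proficient should land at a "middle B" on a 90-80-70-60 scale).
CONVERSION_CHART = {4.0: 99, 3.5: 92, 3.0: 85, 2.5: 79, 2.0: 75, 1.5: 67, 1.0: 59}

# A hypothetical assessment scored with four four-point rubric criteria.
criterion_scores = [4, 3, 3, 2]

summary = median(criterion_scores)      # 3.0, the median rubric score
percentage = CONVERSION_CHART[summary]  # 85, from the judgment-based chart
# (This sketch assumes the median lands on a half-point step listed in the chart.)
print(summary, percentage)
```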
Finally, to end this section I want to repeat that making percentages out of rubrics is
a compromise, and one I’m not happy about. Do it only if your grading policies require it.
all the individual grades to give the student the benefit of the doubt. However, you also
need to find out the reason why a student went into a slump. In fact, it’s better if you
notice the slump before report card time and can do something about it.
If there is no discernible pattern, take the median of all the individual grades. The
case of Cort in Figure 6.4 illustrates this scenario.
If you are using rubrics but your report card grades must be expressed in percentages, you have already transformed each of your individual grades into percentages. Give more weight to assessments of more important standards and assessments that measure complex and extended thinking, and less weight to assessments of less important standards and assessments of recall of information. I still recommend that you use the median percentage as your summary grade. That way you minimize the drastic effect of extreme scores and still report a defensible average grade. (Actually, I recommend the median even for summarizing grades that all began as percentages, not rubrics, for that same reason.) However, you could use the mean percentage as your summary grade as well.

Self-reflection: Which branches of the decision tree in Figure 11.2 are most relevant to your school and classroom grading policies and needs? How do your grading practices match with the recommendations made in this chapter?
Summing up
This chapter explained how to use rubrics for grading individual assessments and
then how to combine them with other individual assessment results for report card
grades. In my opinion, the most important aspect of rubrics is how they can be used to
describe, develop, and support learning. The grading recommendations in this chapter,
for both individual assessments and report cards, are aimed at handling the results of
using rubrics for learning in a way that preserves the intended meaning about learning.
Because scores from rubrics look like any other numbers, teachers often unknowingly
total or average them in ways that are not appropriate for short, ordered-category scales.
I hope this chapter has helped you think through the meaning of grades resulting from
rubrics.
The chapter recommended report card grading practices based on a series of deci‑
sions about grading that depend on the kinds of individual grades you have, the manner
in which you must express your report card grade, and the meaning your report card
grade is intended to have. That’s why there are different recommendations. The chap‑
ter, of course, could not cover every possible answer to those three questions. If your
situation was not covered, you can still follow the general plan: (1) identify the set of
individual grades you need to summarize, (2) put them all on the same scale, and (3) use
a summarizing method that is appropriate to the kind of scores you have and that results
in the most appropriate message about student achievement.
Afterword
Rubrics are very common but, in my experience, are often poorly handled. It is common
to find trivial or list-based criteria (for example, “paragraph has four adjectives”). It is
also common to find rubrics used like any other point-based grading scheme, without
taking advantage of the formative and student-centered assessment opportunities they
afford. And it’s very common for grading practices to combine rubrics with test scores
and other grades in such a way as to misrepresent student achievement in the final
grade. One immediate benefit of this book is that it provides a resource that addresses
these common problems.
Self-reflection: What is your current view of rubrics? Compare this reflection with the reflection you made at the beginning of this book.

I believe, however, that this book will be more than an antidote to problems associated with rubrics. With clear explanations and a range of examples, and with the inclusion of instructional strategies to use with rubrics, I hope this book inspires teachers to more effective use of rubric-based assessment and instruction and, in particular, to more involvement of students in their own assessment and learning. Therefore, I hope the book supports teachers and advances student learning. I also
hope that the examples and explanations support teachers in more active and thought‑
ful use of rubrics (designing and planning their own rubrics, not just grabbing rubrics
from a book or from the Internet). This, too, should lead to more strategic teaching and
learning.
Appendix A

IDEAS
Not proficient (three levels within this band, listed from lowest to highest)

Overall
• No main idea, purpose, or central theme exists; reader must infer this based on sketchy or missing details
• Main idea is still missing, though possible topic/theme is emerging
• Main idea is present; may be broad or simplistic

A
• No topic emerges
• Several topics emerge; any might become central theme or main idea
• Topic becomes clear, though still too broad, lacking focus; reader must infer message

B
• Support for topic is not evident
• Support for topic is limited, unclear; length is not adequate for development
• Support for topic is incidental or confusing, not focused

C
• There are no details
• Few details are present; piece simply restates topic and main idea or merely answers a question
• Additional details are present but lack specificity; main idea or topic emerges but remains weak

D
• Author is not writing from own knowledge/experience; ideas are not author's
• Author generalizes about topic without personal knowledge/experience
• Author "tells" based on others' experiences rather than "showing" by own experience

E
• No reader's questions have been answered
• Reader has many questions due to lack of specifics; it is hard to "fill in the blanks"
• Reader begins to recognize focus with specifics, though questions remain

F
• Author doesn't help reader make any connections
• Author does not yet connect topic with reader in any way, although attempts are made
• Author provides glimmers into topic; casual connections are made by reader

Key question: Does the writer stay focused and share original and fresh information or perspective on the topic?
IDEAS
Proficient (three levels within this band, listed from lowest to highest)

Overall
• Topic or theme is identified as main idea; development remains basic or general
• Main idea is well marked by detail but could benefit from additional information
• Main idea is clear, supported, and enriched by relevant anecdotes and details

A
• Topic is fairly broad, yet author's direction is clear
• Topic is focused yet still needs additional narrowing
• Topic is narrow, manageable, and focused

B
• Support for topic is starting to work; still does not quite flesh out key issues
• Support for topic is clear and relevant except for a moment or two
• Support is strong and credible, and uses resources that are relevant and accurate

C
• Some details begin to define main idea or topic, yet are limited in number or clarity
• Accurate, precise details support one main idea
• Details are relevant, telling; quality details go beyond obvious and are not predictable

D
• Author uses a few examples to "show" own experience yet still relies on generic experience of others
• Author presents new ways of thinking about topic based on personal knowledge/experience
• Author writes from own knowledge/experience; ideas are fresh, original, and uniquely the author's

E
• Reader generally understands content and has only a few questions
• Reader's questions are usually anticipated and answered by author
• Reader's questions are all answered

F
• Author begins to stay on topic and begins to connect reader through self, text, world, or other resources
• Author connects reader to topic with a few anecdotes, text, or other resources
• Author helps reader make many connections by sharing significant insights into life

Key question: Does the writer stay focused and share original and fresh information or perspective on the topic?
ORGANIZATION
Not proficient (three levels within this band, listed from lowest to highest)

A
• There is no lead to set up what follows, no conclusion to wrap things up
• The lead and/or conclusion are ineffective or do not work
• Either lead or conclusion or both may be present but are clichés or leave reader wanting more

B
• Transitions between paragraphs are confusing or nonexistent
• Weak transitions emerge yet offer little help to get from one paragraph to next and not often enough to eliminate confusion
• Some transitions are used, but they repeat or mislead, resulting in weak chunking of paragraphs

C
• Sequencing doesn't work
• Little useful sequencing is present; it's hard to see how piece fits together as a whole
• Sequencing has taken over so completely, it dominates ideas; it is painfully obvious and formulaic

D
• Pacing is not evident
• Pacing is awkward; it slows to a crawl when reader wants to get on with it, and vice versa
• Pacing is dominated by one part of piece and is not controlled in remainder

E
• Title (if required) is absent
• Title (if required) doesn't match content
• Title (if required) hints at weak connection to content; it is unclear

F
• Lack of structure makes it almost impossible for reader to understand purpose
• Structure fails to fit purpose of writing, leaving reader struggling to discover purpose
• Structure begins to clarify purpose

Key question: Does the organizational structure enhance the ideas and make the piece easier to understand?
ORGANIZATION
Proficient
(4) Organization moves reader through text without too much confusion. (5) Organization is smooth; only a few small bumps here and there exist. (6) Organization enhances and showcases central idea; order of information is compelling, moving reader through text.
A. (4) A recognizable lead and conclusion are present; lead may not create a strong sense of anticipation; conclusion may not tie up all loose ends. (5) While lead and/or conclusion go beyond obvious, either could go even further. (6) An inviting lead draws reader in; satisfying conclusion leaves reader with sense of closure and resolution.
B. (4) Transitions often work yet are predictable and formulaic; paragraphs are coming together with topic sentence and support. (5) Transitions are logical, though may lack originality; ideas are chunked in proper paragraphs and topic sentences are properly used. (6) Thoughtful transitions clearly show how ideas (paragraphs) connect throughout entire piece, helping to showcase content of each paragraph.
C. (4) Sequencing shows some logic, but is not controlled enough to consistently showcase ideas. (5) Sequencing makes sense and moves a bit beyond obvious, helping move reader through piece. (6) Sequencing is logical and effective; moves reader through piece with ease from start to finish.
D. (4) Pacing is fairly well controlled; sometimes lunges ahead too quickly or hangs up on details that do not matter. (5) Pacing is controlled; there are still places author needs to highlight or move through more effectively. (6) Pacing is well controlled; author knows when to slow down to elaborate, and when to move on.
E. (4) Uninspired title (if required) only restates prompt or topic. (5) Title (if required) settles for minor idea about content rather than capturing deeper theme. (6) Title (if required) is original, reflecting content and capturing central theme.
F. (4) Structure sometimes supports purpose, at other times reader wants to rearrange pieces. (5) Structure generally works well for purpose and for reader. (6) Structure flows so smoothly reader hardly thinks about it; choice of structure matches and highlights purpose.
Key question: Does the organizational structure enhance the ideas and make the piece easier to understand?
VOICE
Not proficient
(1) Author seems indifferent, uninvolved, or distanced from topic, purpose, and/or audience. (2) Author relies on reader's good faith to hear or feel any voice in phrases such as "I like it" or "It was fun." (3) Author's voice is hard to recognize, even if reader is trying desperately to "hear" it.
A. (1) Author does not interact with reader in any fashion; writing is flat, resulting in a disengaged reader. (2) Author uses only clichés, resulting in continued lack of interaction with reader. (3) Author seems aware of reader yet discards personal insights in favor of safe generalities.
B. (1) Author takes no risks, reveals nothing, lulls reader to sleep. (2) Author reveals little yet doesn't risk enough to engage reader. (3) Author surprises reader with random "aha" and minimal risk-taking.
C. (1) Tone is not evident. (2) Tone does not support writing. (3) Tone is flat; author does not commit to own writing.
E. (1) Voice inappropriate for purpose/mode. (2) Voice does not support purpose/mode; narrative is only an outline; expository or persuasive writing lacks conviction or authority to set it apart from a mere list of facts. (3) Voice is starting to support purpose/mode though remains weak in many places.
Key question: Would you keep reading this piece if it were longer?
VOICE
Proficient
(4) Author seems sincere, yet not fully engaged or involved; result is pleasant or even personable, though topic and purpose are still not compelling. (5) Author attempts to address topic, purpose, and audience in sincere and engaging way; piece still skips a beat here and there. (6) Author speaks directly to reader in individual, compelling, and engaging way that delivers purpose and topic; although passionate, author is respectful of audience and purpose.
A. (4) Author attempts to reach audience and has some moments of successful interaction. (5) Author communicates with reader in earnest, pleasing, authentic manner. (6) Author interacts with and engages reader in ways that are personally revealing.
B. (4) Author surprises, delights, or moves reader in more than one or two places. (5) Author's moments of insight and risk-taking enliven piece. (6) Author interacts with and engages reader in ways that are personally revealing.
C. (4) Tone begins to support and enrich writing. (5) Tone leans in right direction most of the time. (6) Tone gives flavor and texture to message and is appropriate.
D. (4) Commitment to topic is present; author's own point of view may emerge in a place or two but is obscured behind vague generalities. (5) Commitment to topic is clear and focused; author's enthusiasm starts to catch on. (6) Commitment to topic is strong; author's passion about topic is clear, compelling, and energizing; reader wants to know more.
E. (4) Voice lacks spark for purpose/mode; narrative is sincere, if not passionate; expository or persuasive writing lacks consistent engagement with topic to build credibility. (5) Voice supports author's purpose/mode; narrative entertains, engages reader; expository or persuasive writing reveals why author chose ideas. (6) Voice is appropriate for purpose/mode; voice is engaging, passionate, and enthusiastic.
Key question: Would you keep reading this piece if it were longer?
WORD CHOICE
Not proficient
A. (1) Words are overly broad and/or so generic no message is evident. (2) Words are so vague and mundane that message is limited and unclear. (3) Words are adequate and correct in a general sense; message starts to emerge.
B. (1) Vocabulary confuses reader and is contradictory; words create no mental imagery, no lingering memory. (2) Vocabulary has no variety or spice; even simple words are used incorrectly; no mental images exist. (3) Vocabulary is very basic; simple words rule; variety starts to "show" rather than "tell"; mental images are still missing.
C. (1) Words are incorrectly used, making message secondary to word misfires. (2) Words are either so plain as to put reader to sleep or so over the top they make no sense. (3) Original, natural word choices start to emerge so piece sounds authentic.
D. (1) Misuse of parts of speech litters piece, confusing reader; no message emerges. (2) Redundant parts of speech and/or jargon or clichés distract from message. (3) Rote parts of speech reflect a lack of craftsmanship; passive verbs, overused nouns, and lack of modifiers and variety create fuzzy message.
Key question: Do the words and phrases create vivid pictures and linger in your mind?
WORD CHOICE
Proficient
(4) Vocabulary is functional yet still lacks energy; author's meaning is easy to understand in general. (5) Vocabulary is more precise and appropriate; mental imagery emerges. (6) Vocabulary is powerful and engaging, creating mental imagery; words convey intended message in precise, interesting, and natural way.
A. (4) Words work and begin to shape unique, individual piece; message is easy to identify. (5) In most cases words are "just right" and clearly communicate message. (6) Words are precise and accurate; author's message is easy to understand.
B. (4) Vocabulary includes familiar words and phrases that communicate, yet rarely capture reader's imagination; perhaps a moment or two of sparkle or imagery emerges. (5) Vocabulary is strong; it's easy to "see" what author says because of figurative language—similes, metaphors, and poetic devices; mental imagery lingers. (6) Vocabulary is striking, powerful, and engaging; it catches reader's eye and lingers in mind; recall of handful of phrases or mental images is easy and automatic.
C. (4) Attempts at colorful word choice show willingness to stretch and grow, yet sometimes go too far. (5) New words and phrases are usually correct. (6) Word choice is natural yet original and never overdone; both words and phrases are unique and effective.
D. (4) Accurate and occasionally refined parts of speech are functional and start to shape message. (5) Correct and varied parts of speech are chosen carefully to communicate message, and clarify and enrich writing. (6) Parts of speech are crafted to best convey message; lively verbs energize, precise nouns/modifiers add depth, color, and specificity.
Key question: Do the words and phrases create vivid pictures and linger in your mind?
SENTENCE FLUENCY
Not proficient
(1) Sentences are incorrectly structured; reader has to practice to give paper a fair interpretive reading; it's nearly impossible to read aloud. (2) Sentences vary little; even easy sentence structures cause reader to stop and decide what is being said and how; it's challenging to read aloud. (3) Sentences are technically correct but not varied, creating sing-song pattern or lulling reader to sleep; it sounds mechanical when read aloud.
A. (1) Sentence structure is choppy, incomplete, run-on, rambling, or awkward. (2) Sentence structure works but has phrasing that sounds unnatural. (3) Sentence structure is usually correct, yet sentences do not flow.
B. (1) No sentence sense—type, beginning, connective, rhythm—is evident; determining where sentences begin and end is nearly impossible. (2) There is little evidence of sentence sense; to make sentences flow correctly, most have to be totally reconstructed. (3) Sentence sense starts to emerge; reader can read through problems and see where sentences begin and end; sentences vary little.
C. (1) Incomplete sentences make it hard to judge quality of beginnings or identify type of sentence. (2) Many sentences begin in same way and are simple (subject-verb-object) and monotonous. (3) Simple and compound sentences and varied beginnings help strengthen piece.
D. (1) Weak or no connectives create massive jumble of language; disconnected sentences leave piece chaotic. (2) "Blah" connectives (and, so, but, then, and because) lead reader nowhere. (3) Few simple connectives lead reader from sentence to sentence, though piece remains weak.
E. (1) Rhythm is chaotic, not fluid; piece cannot be read aloud without author's help, even with practice. (2) Rhythm is random and may still be chaotic; writing does not invite expressive oral reading. (3) Rhythm emerges; reader can read aloud after a few tries.
Key question: Can you feel the words and phrases flow together as you read it aloud?
SENTENCE FLUENCY
Proficient
(4) Sentences are varied and hum along, tending to be pleasant or businesslike though may still be more mechanical than musical or fluid; it's easy to read aloud. (5) Some sentences are rhythmic and flowing; a variety of sentence types are structured correctly; it flows well when read aloud. (6) Sentences have flow, rhythm, and cadence; are well built within strong, varied structure that invites expressive oral reading.
A. (4) Sentence structure is correct and begins to flow but is not artfully crafted or musical. (5) Sentence structure flows well and moves reader fluidly through piece. (6) Sentence structure is strong, underscoring and enhancing meaning while engaging and moving reader from beginning to end in fluid fashion.
B. (4) Sentence sense is moderate; sentences are constructed correctly with some variety, hang together, and are sound. (5) Sentence sense is strong; correct construction and variety are used; few examples of dialogue or fragments are used. (6) Sentence sense is strong and contributes to meaning; dialogue, if present, sounds natural; fragments, if used, add style; sentences are nicely balanced in type, beginnings, connectives, and rhythm.
C. (4) Sentence beginnings vary yet are routine, generic; types include simple, compound, and perhaps even complex. (5) Sentence beginnings are varied and unique; four sentence types (simple, compound, complex, and compound-complex) create balance and variety. (6) Varied sentence beginnings add interest and energy; four sentence types are balanced.
D. (4) Connectives are original and hold piece together but are not always refined. (5) Thoughtful and varied connectives move reader easily through piece. (6) Creative and appropriate connectives show how each sentence relates to previous one and pulls piece together.
E. (4) Rhythm is inconsistent; some sentences invite oral reading, others remain stiff, awkward, or choppy. (5) Rhythm works; reader can read aloud quite easily. (6) Rhythm flows; writing has cadence; first reading aloud is expressive, pleasurable, and fun.
Key question: Can you feel the words and phrases flow together as you read it aloud?
CONVENTIONS
Not proficient
(1) Errors in conventions are the norm and repeatedly distract reader, making text unreadable. (2) Many errors of various types of conventions are scattered throughout text. (3) Author continues to stumble in conventions even on simple tasks and almost always on anything trickier.
A. (1) Spelling errors are frequent, even on common words. (2) Spelling is phonetic with many errors. (3) Spelling on simple words is incorrect, although reader can understand.
C. (1) Capitalization is random, inconsistent, and sometimes nonexistent. (2) Only the easiest capitalization rules are correctly applied. (3) Capitalization is applied inconsistently except for proper nouns and sentence beginnings.
E. (1) Extensive editing (on virtually every line) is required to polish text for publication; reader must read once to decode, then again for meaning. (2) There is still a lot of editing required for publication; meaning is uncertain. (3) Too much editing is still needed to publish, although piece begins to communicate meaning.
Key question: How much editing would have to be done to be ready to share with an outside source?
(Note: For the trait of conventions, grade level matters. Expectations should be based on grade level and include only skills that have been taught. Expectations for secondary students are obviously much higher than those of the elementary grade levels.)
CONVENTIONS
Proficient
(4) Author has reasonable control over standard conventions for grade level; conventions are sometimes handled well; at other times, errors distract and impair readability. (5) Author stretches, trying more complex tasks in conventions; several mistakes still exist; for secondary students, all basic conventions have been mastered. (6) Author uses standard writing conventions effectively to enhance readability; errors are few and only minor editing is needed to publish.
A. (4) Spelling is usually correct or reasonably phonetic on common grade-level words, but not on more difficult words. (5) Spelling on common grade-level words is correct but sometimes incorrect on more difficult words. (6) Spelling is usually correct, even on more difficult words.
B. (4) End punctuation is usually correct; internal punctuation is sometimes correct; for secondary students all punctuation is usually correct. (5) Punctuation is correct and enhances readability in all but a few places. (6) Punctuation is correct, creative, and guides reader through entire piece.
E. (4) Moderate editing (a little of this, a little of that) is required to publish; meaning is clear. (5) Several things still need editing before publishing; conventions are more correct than not; meaning is easily communicated. (6) Hardly any editing is needed to publish; author may successfully manipulate conventions for stylistic effect; meaning is crystal clear.
Key question: How much editing would have to be done to be ready to share with an outside source?
(Note: For the trait of conventions, grade level matters. Expectations should be based on grade level and include only skills that have been taught. Expectations for secondary students are obviously much higher than those of the elementary grade levels.)
PRESENTATION
Not proficient
A. (1) Handwritten letters are irregular, formed inconsistently or incorrectly; spacing is unbalanced or absent; reader can't identify letters. (2) Handwritten letters and words are readable with limited problems in letter shape and form; spacing is inconsistent. (3) Handwriting creates little or no stumbling in readability; spacing is consistent.
B. (1) Many fonts/sizes make piece nearly unreadable. (2) Few fonts/sizes make piece hard to read or understand. (3) Fonts/sizes are limited in number; piece starts to come together visually.
C. (1) No thought is given to white space—it is random and confusing; identifying beginning and ending of text is difficult. (2) Understanding of white space begins to emerge, though piece seems "plopped" on paper without margins or boundaries. (3) White space begins to frame and balance piece; margins may be present, though some text may crowd edges; usage is inconsistent; paragraphs begin to emerge.
E. (1) No markers (title, bullets, page numbers, subheads, etc.) are present. (2) Perhaps one marker (a title, a single bullet, or a page number) is used. (3) Markers are used but do not organize or clarify piece.
Key question: Is the finished piece easy to read, polished in presentation, and pleasing to the eye?
PRESENTATION
Proficient
A. (4) Handwriting is correct and readable; spacing is consistent and neat. (5) Handwriting is neat, readable, and consistent; spacing is uniform between letters and words; text is easy to read. (6) Handwriting borders on calligraphy; is easy to read and uniformly spaced; pride of author is clear.
B. (4) Fonts/sizes are consistent and appropriate; piece is easy to understand. (5) Fonts/sizes invite reader into text; understanding is a breeze. (6) Fonts/sizes enhance readability and enrich overall appearance; understanding is crystal clear.
C. (4) White space frames text by creating margins; usage is still inconsistent on the whole; some paragraphs are indented, some are blocked. (5) White space helps reader focus on text; margins frame piece, other white space frames markers and graphics; usage is consistent and purposeful; most paragraphs are either indented or blocked. (6) White space is used to optimally frame and balance text with markers and graphics; all paragraphs are either indented or blocked.
E. (4) Markers are used to organize, clarify, and present whole piece. (5) Markers serve to integrate graphics and articulate meaning of piece. (6) Markers help reader comprehend message and extend or enrich piece.
Key question: Is the finished piece easy to read, polished in presentation, and pleasing to the eye?
Source: Copyright 2010 by Education Northwest. Available at educationnorthwest.org. Reprinted with permission.
IDEAS
Exceptional (6)
• The Big Idea is clear and original; the topic is narrowed
• Supporting details are relevant, accurate, and specific
• Pictures, graphs, charts (if present) clarify the text
• Focus: The writing stays on topic
• Development is generous and complete
Capable (4)
• The Big Idea is clear, but general—a simple story or explanation
• Support is presented in the text
• Pictures (if present) support the text
• Focus: Generally on topic, with a few missteps
• Development is adequate
Emerging (2)
• Ideas are conveyed in a general way through text, labels, symbols
• Support: Not present in the text
• Pictures: Connect with a word, label, symbol
• Focus: Unclear or extremely limited
• Development: Not present
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
ORGANIZATION
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
VOICE
Experienced (5)
• The writer's feelings about the subject are loud and clear
• Pictures (if present) enrich the mood, atmosphere
• Engages the audience ("Did you know?")
• Individual and sincere expression
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
WORD CHOICE
Developing (3)
• Word groups, phrases convey the topic with some help from pictures
• Word choice makes sense
• Vocabulary is limited to "known" or "safe" words
• Repetition of "safe" words and phrases
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
SENTENCE FLUENCY
Exceptional (6)
• Several sentences are present that vary in structure and length
• Sentence beginnings are varied
• Rhythm is fluid and pleasant to work with
• Connective words work smoothly
Experienced (5)
• Several sentences are present and employ more than one sentence pattern
• Sentence beginnings are varied
• Rhythm is more fluid than mechanical—easy to read aloud
• Connective words do not interfere with the fluency
Developing (3)
• Most of a sentence is present, decodable in the text ("Like bunne becuz their riree Fas")
• Sentences begin the same way ("I like. . .")
• Rhythm is choppy and repetitive
• Connective transitions serve as links between phrases ("and," "then," etc.)
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
CONVENTIONS
Experienced (5)
• Capitalization: Capitals for sentence beginnings, proper names, titles usually correct
• Punctuation: End punctuation usually correct—some varied uses present
• Spelling: Usually accurate for grade level words
• Grammar and usage: Usually accurate
• Paragraphing: First line indented
Levels: Exceptional (6), Experienced (5), Capable (4), Developing (3), Emerging (2), Beginning (1)
Source: Copyright 2010 by Education Northwest. Available at educationnorthwest.org. Reprinted with permission.
Related ASCD Resources: Formative Assessment
At the time of publication, the following ASCD resources were available; for the most up-to-date
information about ASCD resources, go to www.ascd.org. ASCD stock numbers are noted in
parentheses.
Mixed Media
Formative Assessment Strategies for Every Classroom: An ASCD Action Tool by Susan Brookhart
(one three-ring binder) (#707010)
Online Courses
Formative Assessment: The Basics (#PD09OC69)
Formative Assessment: Deepening Understanding (#PD11OC101)
Print Products
Checking for Understanding: Formative Assessment Techniques for Your Classroom by Douglas
Fisher and Nancy Frey (#107023)
Classroom Assessment & Grading That Work by Robert J. Marzano
Great Performances: Creating Classroom-Based Assessment Tasks, 2nd ed., by Larry Lewin and
Betty Jean Shoemaker
Formative Assessment Strategies for Every Classroom: An ASCD Action Tool, 2nd ed. by Susan M.
Brookhart
How to Give Effective Feedback to Your Students by Susan M. Brookhart (#108019)
Transformative Assessment by W. James Popham (#108018)
Learning Targets: Helping Students Aim for Understanding in Today's Lesson by Connie M. Moss
and Susan M. Brookhart (#112002)
The Whole Child Initiative helps schools and communities create learning environments that allow
students to be healthy, safe, engaged, supported, and challenged. To learn more about other books
and resources that relate to the whole child, visit www.wholechildeducation.org.
What is a rubric?
A rubric is a coherent set of criteria for student work that describes levels of performance quality.
Sounds simple enough, right? Unfortunately, rubrics are commonly misunderstood and misused.
The good news is that when rubrics are created and used correctly, they are strong tools that
support and enhance classroom instruction and student learning. In this comprehensive guide,
author Susan M. Brookhart identifies two essential components of effective rubrics: (1) criteria
that relate to the learning (not the "tasks") that students are being asked to demonstrate and
(2) clear descriptions of performance across a continuum of quality. She outlines the difference
between various kinds of rubrics (for example, general versus task-specific, and analytic versus
holistic), explains when using each type of rubric is appropriate, and highlights examples from
all grade levels and assorted content areas. In addition, Brookhart addresses common
misunderstandings about rubrics and how to use them for both formative assessment and grading.
Intended for educators who are already familiar with rubrics as well as those who are
not, this book is a complete resource for writing effective rubrics and for choosing
wisely from among the many rubrics that are available on the Internet and from other
sources. And it makes the case that rubrics, when used appropriately, can improve
outcomes by helping teachers teach and helping students learn.
Study Guide Online