Chapter 02. Assessment 02
LESSON 4: PROCESS-ORIENTED
PERFORMANCE-BASED ASSESSMENT
OBJECTIVES
a) identify errors and indicate the right statement/s in sample learning competencies of process-
oriented performance-based assessment; and
b) develop rubrics (analytic and holistic) on a given task.
INTRODUCTION
Too often, we tend to assess students' learning through their outputs or products or through
some kind of traditional testing. However, it is important to assess not only these competencies but also
the processes which the students underwent in order to arrive at these products or outputs. It is possible
to explain why the students' outputs are as they are through an assessment of the processes they
carried out to arrive at the final product. This chapter is concerned with process-oriented, performance-
based assessment. Assessment is not an end in itself but a vehicle for educational improvement. Its
effective practice, then, begins with and enacts a vision of the kinds of learning we most value for
students and strive to help them achieve.
Assessment is most effective when it reflects an understanding of learning as multidimensional,
integrated, and revealed in performance over time. Learning is a complex process. It entails not only
what students know but what they can do with what they know; it involves not only knowledge and
abilities but values, attitudes, and habits of mind that affect both academic success and performance
beyond the classroom. Assessment should reflect these understandings by employing a diverse array of
methods, including those that call for actual performance, using them over time so as to reveal change,
growth, and increasing degrees of integration. Such an approach aims for a more complete and
accurate picture of learning.
DISCUSSION
In designing tasks for process-oriented assessment, the teacher should consider:
Identifying an activity that would highlight the competencies to be evaluated, e.g., reciting a poem, writing an
essay, manipulating the microscope, etc.
Identifying an activity that would entail more or less the same sets of competencies. If an activity would
result in too many possible competencies, the teacher would have difficulty assessing the students'
competency on the task.
Finding a task that would be interesting and enjoyable for the students. Tasks such as writing an essay are
often boring and cumbersome for the students.
Example: The topic is on understanding biological diversity.
Possible Task Design: Bring the students to a pond or creek. Ask them to identify all the living organisms
they can find near the pond or creek. Also, bring them to the school playground to find as many living
organisms as they can. Observe how the students develop a system for finding such organisms,
classifying the organisms, and concluding on the differences in biological diversity of the two sites.
Science laboratory classes are particularly suitable for a process-oriented performance-based
assessment technique.
3) Scoring Rubrics
A rubric is a scoring scale used to assess student performance along a task-specific set of criteria.
Authentic assessments typically are criterion-referenced measures, that is, a student's aptitude on a task is
determined by matching the student's performance against a set of criteria to determine the degree to which the
student's performance meets the criteria for the task. To measure student performance against a pre-
determined set of criteria, a rubric, or scoring scale, is typically created which contains the essential criteria for
the task and appropriate levels of performance for each criterion. For example, the following rubric (scoring
scale) covers the recitation portion of a task in English.
[Recitation rubric: criteria (number of hand gestures, appropriate facial expressions, voice inflection, appropriate ambiance), each with three levels of performance and a weight]
As in the given example, a rubric comprises two components: criteria and levels of performance. Each
rubric has at least two criteria and at least two levels of performance. The criteria, characteristics of good performance
on a task, are listed in the left-hand column of the illustrated rubric (number of hand gestures, appropriate facial
expressions, voice inflection and ambiance). Actually, as is common in rubrics, shorthand is used for each criterion to
make it fit easily into the table. The full criteria are statements of performance such as "includes a sufficient number of
hand gestures" and "recitation captures the ambiance through appropriate feelings and tone in the voice."
For each criterion, the evaluator applying the rubric can determine to what degree the student has met the
criterion, i.e., the level of performance. In the given rubric, there are three levels of performance for each criterion. For
example, the recitation can contain lots of inappropriate, few inappropriate or no inappropriate hand gestures.
Finally, the illustrated rubric contains a mechanism for assigning a score to each performance. (Assessments and
their accompanying rubrics can be used for purposes other than evaluation and, thus, do not have to have points or
grades attached to them.) In the second-to-left column a weight is assigned to each criterion. Students can receive 1, 2 or
3 points for "number of hand gestures." But appropriate ambiance, more important in this teacher's mind, is weighted three
times (x3) as heavily. So, students can receive 3, 6 or 9 points (i.e., 1, 2 or 3 times 3) for the level of appropriateness in
this task.
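To make the weighting arithmetic concrete, here is a minimal Python sketch. It is illustrative only: the criterion names follow the recitation example above, but the weights and the scoring function are assumptions, not part of the original rubric.

```python
# Minimal sketch of weighted analytic-rubric scoring (illustrative weights).
# Each criterion is rated at a level from 1 to 3, then multiplied by its weight.

RUBRIC_WEIGHTS = {
    "number of hand gestures": 1,
    "appropriate facial expressions": 1,
    "voice inflection": 2,
    "appropriate ambiance": 3,  # weighted x3, as in the example above
}

def score_recitation(ratings: dict) -> int:
    """Return the weighted total score for one student's recitation."""
    for criterion, level in ratings.items():
        if level not in (1, 2, 3):
            raise ValueError(f"{criterion}: level must be 1, 2 or 3")
    return sum(RUBRIC_WEIGHTS[c] * level for c, level in ratings.items())

# A top rating on ambiance earns 3 x 3 = 9 points, as described in the text.
ratings = {
    "number of hand gestures": 2,
    "appropriate facial expressions": 3,
    "voice inflection": 2,
    "appropriate ambiance": 3,
}
print(score_recitation(ratings))  # 2 + 3 + 4 + 9 = 18
```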
DESCRIPTORS
The rubric includes another common, but not a necessary, component of rubrics: descriptors. Descriptors spell
out what is expected of students at each level of performance for each criterion. In the given example, "lots of
inappropriate facial expressions" and "monotone voice used" are descriptors. A descriptor tells students more precisely
what performance looks like at each level and how their work may be distinguished from the work of others for each
criterion. Similarly, the descriptors help the teacher distinguish between students' work more precisely and consistently.
WHY INCLUDE LEVELS OF PERFORMANCE?
i. Clearer expectations
It is very useful for the students and the teacher if the criteria are identified and communicated prior to
completion of the task. Students know what is expected of them and teachers know what to look for in student
performance. Similarly, students better understand what good (or bad) performance on a task looks like if levels
of performance are identified, particularly if descriptors for each level are included.
ii. More consistent and objective assessment
In addition to better communicating teacher expectations, levels of performance permit the teacher to
more consistently and objectively distinguish between good and bad performance, or between superior,
mediocre and poor performance, when evaluating student work.
iii. Better feedback
Furthermore, identifying specific levels of student performance allows the teacher to provide
more detailed feedback to students. The teacher and the students can more clearly recognize areas that need
improvement.
ANALYTIC VERSUS HOLISTIC RUBRICS
For a particular task you assign students, do you want to be able to assess how well the students
perform on each criterion, or do you want to get a more global picture of the students' performance on the entire task?
The answer to that question is likely to determine the type of rubric you choose to create or use: analytic or holistic.
ANALYTIC RUBRIC
Most rubrics, like the Recitation rubric mentioned, are analytic rubrics. An analytic rubric articulates
levels of performance for each criterion so the teacher can assess student performance on each criterion. Using
the Recitation rubric, a teacher could assess whether a student has done a poor, good or excellent job of
"creating ambiance" and distinguish that from how well the student did on "voice inflection."
HOLISTIC RUBRIC
In contrast, a holistic rubric does not list separate levels of performance for each criterion. Instead, a
holistic rubric assigns a level of performance by assessing performance across multiple criteria as a whole. For
example, the analytic recitation rubric above can be turned into a holistic rubric:
[Holistic version of the rubric: a single descriptive scale that combines all of the criteria]
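The structural difference between the two types is easy to see when each is written out as data. The Python sketch below is purely illustrative; the criterion names echo the recitation example, and the descriptors are invented.

```python
# Illustrative only: the same rubric represented analytically and holistically.

# Analytic: each criterion carries its own levels and is scored separately.
analytic_rubric = {
    "voice inflection": {1: "monotone voice used",
                         2: "varies voice inflection with difficulty",
                         3: "easily varies voice inflection"},
    "appropriate ambiance": {1: "recitation conveys little feeling",
                             2: "some tone used to convey meaning",
                             3: "captures the ambiance through feeling and tone"},
}

# Holistic: one descriptor per level, covering all criteria at once.
holistic_rubric = {
    1: "meets few of the criteria: little feeling, monotone delivery",
    2: "meets some criteria: some variation in tone and feeling",
    3: "meets all criteria: expressive delivery that captures the ambiance",
}

# An analytic rating is a score per criterion; a holistic rating is one score.
analytic_rating = {"voice inflection": 2, "appropriate ambiance": 3}
holistic_rating = 3
```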
HOW MANY LEVELS OF PERFORMANCE SHOULD A RUBRIC INCLUDE?
There is no specific number of levels a rubric should or should not possess; it will vary depending on the task
and your needs. A rubric can have as few as two levels of performance or as many as the teacher decides is
appropriate. Also, it is not true that there must be an even number or an odd number of levels. Again, that will depend
on the situation.
Generally, it is better to start with a smaller number of levels of performance for a criterion and then expand if
necessary. Making distinctions in student performance across two or three broad categories is difficult enough. As the
number of levels increases and those judgments become finer and finer, the likelihood of error increases.
Thus, start small. For example, in an oral presentation rubric, amount of eye contact might be an important
criterion. Performance on that criterion could be judged along three levels of performance:
[Makes eye contact with audience: never / sometimes / always]
Although these three levels may not capture all the variations in student performance on the criterion, it may be
sufficient discrimination for your purposes. Or, at the least, it is a place to start. Upon applying the three levels of
performance, you might discover that you can effectively group your students' performance in these three categories.
Furthermore, you might discover that the labels of "never," "sometimes" and "always" sufficiently communicate to your
students the degree to which they can improve on making eye contact.
On the other hand, after applying the rubric you might discover that you cannot effectively discriminate among
student performances with just three levels of performance. Perhaps, in your view, many students fall in between never
and sometimes, or between sometimes and always, and neither label accurately captures their performance. So, at this
point, you may decide to expand the number of levels of performance to include never, rarely, sometimes, usually and
always.
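As a sketch of how a label set can be widened without reworking the whole rubric, consider the hypothetical mapping below. The thresholds and the proportion-based rating are invented for illustration; the point is that the same observation can be reported more finely once more levels exist.

```python
# Hypothetical sketch: mapping an observed eye-contact proportion to a label.
# Expanding the rubric is just a matter of swapping in a longer label list.

THREE_LEVELS = ["never", "sometimes", "always"]
FIVE_LEVELS = ["never", "rarely", "sometimes", "usually", "always"]

def label_eye_contact(proportion: float, levels: list) -> str:
    """Map the proportion of the talk spent making eye contact (0.0 to 1.0)
    onto equally spaced labels."""
    if not 0.0 <= proportion <= 1.0:
        raise ValueError("proportion must be between 0 and 1")
    index = min(int(proportion * len(levels)), len(levels) - 1)
    return levels[index]

print(label_eye_contact(0.7, THREE_LEVELS))  # -> "always"
print(label_eye_contact(0.7, FIVE_LEVELS))   # -> "usually": a finer distinction
```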
There is no "right" answer as to how many levels of performance there should be for a criterion in an analytic
rubric; that will depend on the nature of the task assigned, the criteria being evaluated, the students involved and your
purposes and preferences. For example, another teacher might decide to leave off the "always" level in the above
rubric because "usually" is as much as normally can be expected or even wanted in some instances. Thus, the "makes
eye contact" portion of the rubric for that teacher might be:
[Makes eye contact with audience: never / rarely / sometimes / usually]
We recommend that fewer levels of performance be included initially because such rubrics are:
easier and quicker to administer;
easier to explain to students (and others); and
easier to expand than a larger rubric is to shrink.
LESSON 5: PRODUCT-ORIENTED
PERFORMANCE-BASED ASSESSMENT
OBJECTIVES
a) explain the nature of product-oriented performance-based assessment and develop a task design; and
b) design general scoring and task-specific rubrics.
The role of assessment in teaching happens to be a hot issue in education today. This has led to an increasing
interest in “performance-based education”. Performance-based education poses a challenge for teachers to design
instruction that is task-oriented. The trend is based on the premise that learning needs to be connected to the lives of
the students through relevant tasks that focus on students' ability to use their knowledge and skills in meaningful ways.
In this case, performance-based tasks require performance-based assessments in which the actual student performance
is assessed through a product, such as a completed project or work that demonstrates levels of task achievement. At
times, performance-based assessment has been used interchangeably with "authentic assessment" and "alternative
assessment." In all cases, performance-based assessment has led to the use of a variety of alternative ways of
evaluating student progress (journals, checklists, portfolios, projects, rubrics, etc.) as compared to more traditional
methods of measurement (paper-and-pencil testing).
1) Product-Oriented Learning Competencies
Student performances can be defined as targeted tasks that lead to a product or overall learning
outcome. Products can include a wide range of student works that target specific skills. Some examples include
communication skills such as those demonstrated in reading, writing, speaking, and listening, or psychomotor
skills requiring physical abilities to perform a given task. Target tasks can also include behavior expectations
targeting complex tasks that students are expected to achieve. Using rubrics is one way that teachers can
evaluate or assess student performance or proficiency in any given task as it relates to a final product or
learning outcome. Thus, rubrics can provide valuable information about the degree to which a student has
achieved a defined learning outcome based on the specific criteria that define the framework for evaluation.
The learning competencies associated with products or outputs are linked with an assessment of the
level of "expertise" manifested by the product. Thus, product-oriented learning competencies target at least
three (3) levels: novice or beginner's level, skilled level, and expert level. Such levels correspond to Bloom's
taxonomy in the cognitive domain in that they represent progressively higher levels of complexity in the thinking
processes.
There are other ways to state product-oriented learning competencies. For instance, we can define learning
competencies for products or outputs in the following way:
Level 1: Does the finished product or project illustrate the minimum expected parts or functions?
(Beginner)
Level 2: Does the finished product or project contain additional parts and functions on top of the
minimum requirements which tend to enhance the final output? (Skilled level)
Level 3: Does the finished product contain the basic minimum parts and functions and additional
features on top of the minimum, and is it aesthetically pleasing? (Expert level)
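A minimal sketch of how these cumulative levels can be checked, assuming the teacher has already judged each property as a simple yes/no. The function and its parameters are invented for illustration; a real evaluation would rest on the teacher's inspection of the product.

```python
# Hypothetical classifier for the three cumulative product levels above.
# Each level presupposes the levels below it.

def product_level(meets_minimum: bool, has_enhancements: bool,
                  is_aesthetic: bool) -> str:
    """Classify a finished product as beginner, skilled or expert."""
    if not meets_minimum:
        return "below minimum specifications"
    if has_enhancements and is_aesthetic:
        return "expert"
    if has_enhancements:
        return "skilled"
    return "beginner"

# A cardboard cube with correct dimensions and sturdy fastening, but uncolored:
print(product_level(meets_minimum=True, has_enhancements=True,
                    is_aesthetic=False))  # -> "skilled"
```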
Example: The desired product is a representation of a cubic prism made out of cardboard in an elementary geometry
class.
Learning Competencies: The final product submitted by the students must:
possess the correct dimensions (5" x 5" x 5") - (minimum specifications)
be sturdy, made of durable cardboard and properly fastened together - (skilled specifications)
be pleasing to the observer, preferably properly colored for aesthetic purposes - (expert level)
Example: The product designed is a scrapbook illustrating the historical event called EDSA I People Power.
Learning Competencies: The scrapbook presented by the students must:
contain pictures, newspaper clippings and other illustrations of the main characters of EDSA I People Power,
namely: Corazon Aquino, Fidel V. Ramos, Juan Ponce Enrile, Ferdinand E. Marcos, and Cardinal Sin - (minimum
specifications)
contain remarks and captions for the illustrations, made by the students themselves, on the roles played by the
characters of EDSA I People Power - (skilled level)
be presentable, complete, informative and pleasing to the reader of the scrapbook - (expert level).
Performance-based assessment for products and projects can also be used for assessing outputs of short-term
tasks such as the one illustrated below for outputs in a typing class.
Example: The desired output consists of the typewritten documents produced in a typing class.
Learning Competencies: The final typing outputs of the students must:
possess no more than five (5) errors in spelling - (minimum specifications)
possess no more than five (5) errors in spelling while observing proper format based on the document to be
typewritten - (skilled level)
possess no more than five (5) errors in spelling, observe the proper format, and be readable and presentable - (expert
level).
Notice that in all of the above examples, product-oriented performance-based learning competencies are evidence-
based. The teacher needs concrete evidence that the student has achieved a certain level of competence based on
submitted products and projects.
2) Task Designing
How should a teacher design a task for product-oriented performance-based assessment? The design
of the task in this context depends on what the teacher desires to observe as outputs of the students. The
concepts that may be associated with task designing include:
a) Complexity. The level of complexity of the project needs to be within the range of ability of the students.
Projects that are too simple tend to be uninteresting for the students while projects that are too complicated
will most likely frustrate them.
b) Appeal. The project or activity must be appealing to the students. It should be interesting enough so that
students are encouraged to pursue the task to completion. It should lead to self-discovery of information by
the students.
c) Creativity. The project needs to encourage students to exercise creativity and divergent thinking. Given the
same set of materials and project inputs, how does one best present the project? It should lead the
students into exploring the various possible ways of presenting the final output.
d) Goal-Based. Finally, the teacher must bear in mind that the project is produced in order to attain a learning
objective. Thus, projects are assigned to students not just for the sake of producing something but for the
purpose of reinforcing learning.
Example: Paper folding is a traditional Japanese art. However, it can be used as an activity to teach the concept
of plane and solid figures in geometry. Provide the students with a given number of colored papers and ask them to
construct as many plane and solid figures as they can from these papers without cutting them (by paper folding only).
3) Scoring Rubrics
Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to
guide the analysis of the products or processes of students’ efforts (Brookhart, 1999). Scoring rubrics are
typically employed when a judgment of quality is required and may be used to evaluate a broad range of
subjects and activities. For instance, scoring rubrics can be most useful in grading essays or in evaluating
projects such as scrapbooks. Judgments concerning the quality of a given writing sample may vary depending
upon the criteria established by the individual evaluator. One evaluator may weigh the evaluation heavily
toward linguistic structure while another evaluator may be more interested in the persuasiveness of
the argument. A high-quality essay is likely to have a combination of these and other factors. By developing a
pre-defined scheme for the evaluation process, the evaluation of an essay becomes more objective and less
subjective.
i. Criteria Setting. The criteria for scoring rubrics are statements which identify "what really counts" in
the final output. The following are the most often used major criteria for product assessment:
Quality
Creativity
Comprehensiveness
Accuracy
Aesthetics
From the major criteria, the next task is to identify sub-statements that would make the major criteria
more focused and objective. For instance, if we were scoring an essay on "Three Hundred Years of Spanish
Rule in the Philippines," the major criterion "Quality" may possess the following sub-statements:
interrelates the chronological events in an interesting manner
identifies the key players in each period of the Spanish rule and the roles that they played
succeeds in relating the history of Philippine Spanish rule (rated as Professional, Not quite
professional, and Novice).
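One way to keep major criteria and their sub-statements organized is as structured data. The sketch below is a bookkeeping assumption, not a prescribed format; the criterion, sub-statements and rating labels are taken from the essay example above.

```python
# Illustrative structure for one major criterion and its sub-statements.
RATING_LABELS = ["Novice", "Not quite professional", "Professional"]

criteria = {
    "Quality": [
        "interrelates the chronological events in an interesting manner",
        "identifies the key players in each period of the Spanish rule "
        "and the roles that they played",
        "succeeds in relating the history of Philippine Spanish rule",
    ],
    # The other major criteria (Creativity, Comprehensiveness, Accuracy,
    # Aesthetics) would be broken down into sub-statements the same way.
}

# A rater then assigns one of the rating labels to each sub-statement.
scores = {sub: "Professional" for sub in criteria["Quality"]}
```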
The example below displays a scoring rubric that was developed to aid in the evaluation of essays
written by college students in the classroom (based loosely on Leydens & Thompson, 1997).
[Holistic scoring rubric for evaluating college students' essays, based loosely on Leydens & Thompson (1997)]
The scoring rubric in this particular example exemplifies what is called a "holistic scoring rubric." It will be noted
that each score category describes the characteristics of a response that would receive the respective score. Describing
the characteristics of responses within each score category increases the likelihood that two independent evaluators
would assign the same score to a given response; in effect, this increases the objectivity of the assessment procedure
using rubrics. In the language of tests and measurement, we are actually increasing the "inter-rater reliability."
WHEN ARE SCORING RUBRICS AN APPROPRIATE EVALUATION TECHNIQUE?
Grading essays is just one example of performances that may be evaluated using scoring rubrics. There are
many other instances in which scoring rubrics may be used successfully: evaluating group activities, extended projects
and oral presentations (e.g., Chicago Public Schools, 1999; Danielson, 1997a, 1997b; Schrock, 2000; Moskal, 2000).
Also, rubric scoring cuts across disciplines and subject matter, for rubrics are equally appropriate to the English,
Mathematics and Science classrooms (e.g., Chicago Public Schools, 1999; State of Colorado, 1999; Danielson, 1997a,
1997b; Danielson & Marquez, 1998; Schrock, 2000). Where and when a scoring rubric is used does not depend on the
grade level or subject, but rather on the purpose of the assessment.
Other Methods
Authentic assessment schemes apart from scoring rubrics exist in the arsenal of a teacher. For example,
checklists may be used rather than scoring rubrics in the evaluation of essays. Checklists enumerate a set of desirable
characteristics for a certain product and the teacher marks those characteristics which are actually observed. As such,
checklists are an appropriate choice for evaluation when the information that is sought is limited to the determination of
whether or not specific criteria have been met. On the other hand, scoring rubrics are based on descriptive scales and
support the evaluation of the extent to which criteria have been met.
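The contrast can be expressed in a short sketch; the characteristics and criteria below are invented for illustration. A checklist records whether each characteristic was observed, while a rubric records the extent to which each criterion was met.

```python
# Checklist: binary observations -- was the characteristic observed or not?
checklist = {
    "has a title": True,
    "cites at least three sources": False,
    "includes a conclusion": True,
}

# Rubric: the degree to which each criterion is met, on a descriptive scale.
RUBRIC_SCALE = {1: "criterion not met", 2: "partially met", 3: "fully met"}
rubric_scores = {
    "organization": 2,
    "use of sources": 1,
    "strength of conclusion": 3,
}
```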
The ultimate consideration in using a scoring rubric for assessment is really the “purpose of the assessment.”
Scoring rubrics provide at least two benefits in the evaluation process. First, they support the examination of the extent
to which the specified criteria have been reached. Second, they provide feedback to students concerning how to
improve their performances. If these benefits are consistent with the purpose of the assessment, then a scoring rubric is
likely to be an appropriate evaluation technique.
GENERAL VERSUS TASK-SPECIFIC
In the development of scoring rubrics, it is well to bear in mind that they can be used to assess or evaluate
specific tasks or a general or broad category of tasks. For instance, suppose that we are interested in assessing the
student's oral communication skills. Then, a general scoring rubric may be developed and used to evaluate each of the
oral presentations given by that student. After each oral presentation, the general scoring rubric is shown to
the students, which then allows them to improve on their previous performances. Scoring rubrics have the
advantage of instantaneously providing a mechanism for immediate feedback.
In contrast, suppose the main purpose of the oral presentation is to determine the students' knowledge of the
facts surrounding the EDSA I revolution; then perhaps a specific scoring rubric would be necessary. A general scoring
rubric for evaluating a sequence of presentations may not be adequate since, in general, events such as EDSA I (and
EDSA II) differ in their surrounding factors (what caused the revolutions) and the ultimate outcomes of these
events. Thus, to evaluate the students' knowledge of these events, it will be necessary to develop a specific
scoring rubric for each presentation.
PROCESS OF DEVELOPING SCORING RUBRICS
The development of scoring rubrics goes through a process. The first step in the process entails the
identification of the qualities and attributes that the teacher wishes to observe in the students' outputs that would
demonstrate their level of proficiency (Brookhart, 1999). These qualities and attributes form the top level of the scoring
criteria for the rubrics. Once done, a decision has to be made whether a holistic or an analytic rubric would be more
appropriate. In an analytic scoring rubric, each criterion is considered one by one and the descriptions of the scoring
levels are made separately. This will then result in separate descriptive scoring schemes for each of the criteria or
scoring factors. On the other hand, for holistic scoring rubrics, the collection of criteria is considered throughout the
construction of each level of the scoring rubric and the result is a single descriptive scoring scheme.
The next step after defining the criteria for the top level of performance is the identification and definition of the
criteria for the lowest level of performance. In other words, the teacher is asked to determine the type of performance
that would constitute the worst performance or a performance which would indicate lack of understanding of the
concepts being measured. The underlying reason for this step is that, once the best and worst levels are defined, the
teacher can capture the criteria that would suit a middle-level performance on the concept being measured. In
particular, therefore, the approach suggested would result in at least three levels of performance.
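A sketch of the procedure in miniature, with invented descriptors: write the top-level descriptor first, then the bottom, then interpolate the middle by comparing the two extremes.

```python
# Hypothetical scaffold for developing level descriptors top-down.
# The descriptors are placeholders; real ones should use observable language.

levels = {
    3: "explains the concept accurately and connects it to examples",  # top first
    1: "shows no understanding of the concept",                        # bottom next
}

# Interpolate a middle level by comparing the two extremes:
levels[2] = "explains the concept with minor inaccuracies and few examples"

# Further splitting (e.g., a level between 2 and 3) should continue only while
# raters can still meaningfully distinguish adjacent categories.
```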
It is of course possible to make greater and greater distinctions between performances. For instance, we can
compare the middle level performance expectations with the best performance criterion and come up with an above
average performance criterion; between the middle level performance expectations and the worst level of performance
to come up with a slightly below average performance criterion and so on. This comparison process can be used until
the desired number of score levels is reached or until no further distinctions can be made. If meaningful distinctions
between the score categories cannot be made, then additional score categories should not be created (Brookhart,
1999). It is better to have a few meaningful score categories than to have many score categories that are difficult or
impossible to distinguish.
A note of caution: it is suggested that each score category should be defined using descriptors of the work
rather than value judgments about the work (Brookhart, 1999). For example, "The student's sentences contain no errors
in subject-verb agreement" is preferable to "The student's sentences are good." The phrase "are good" requires the
evaluator to make a judgment, whereas the phrase "no errors" is quantifiable. Finally, we can test whether our scoring
rubric is "reliable" by asking two or more teachers to score the same set of projects or outputs and correlating their
individual assessments. High correlations between the raters imply high inter-rater reliability. If the scores assigned by
teachers differ greatly, then that would suggest a need to refine the scoring rubric we have developed. It may be
necessary to clarify the scoring rubric so that it means the same thing to different scorers.
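A minimal sketch of that reliability check in Python; the scores are invented, and statistics.correlation requires Python 3.10 or later.

```python
# Illustrative inter-rater reliability check using Pearson correlation.
# Each list holds one rater's rubric scores for the same set of outputs.
from statistics import correlation  # available in Python 3.10+

rater_a = [18, 15, 12, 20, 9, 14]
rater_b = [17, 16, 11, 19, 10, 13]

r = correlation(rater_a, rater_b)
print(f"inter-rater correlation: {r:.2f}")
# A value near 1.0 suggests the rubric means the same thing to both raters;
# a low value suggests the descriptors need clarification.
```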
RESOURCES
Currently, there is a broad range of resources available to teachers who wish to use scoring rubrics in their
classrooms. These resources differ both in the subject that they cover and the level that they are designed to assess.
The examples provided below are only a small sample of the information that is available.
For K-12 teachers, the State of Colorado (1998) has developed an online set of general, holistic scoring rubrics
that are designed for the evaluation of various writing assessments. The Chicago Public Schools (1999) maintain an
extensive electronic list of analytic and holistic scoring rubrics that span the broad array of subjects represented
throughout K-12 education. For the mathematics teachers, Danielson has developed a collection of reference books that
contain scoring rubrics appropriate to the elementary, middle school and high school mathematics classrooms
(1997a, 1997b; Danielson & Marquez, 1998).
Resources are also available to assist college instructors who are interested in developing and using scoring
rubrics in their classrooms. Kathy Schrock's Guide for Educators (2000) contains electronic materials for both the pre-
college and the college classroom. In The Art and Science of Classroom Assessment: The Missing Part of Pedagogy,
Brookhart (1999) provides a brief but comprehensive review of the literature on assessment in the college classroom.
This includes a description of scoring rubrics and why their use is increasing in the college classroom. Moskal (1999) has
developed a web site that contains links to a variety of college assessment resources, including scoring rubrics.
The resources described represent only a fraction of those that are available. The ERIC Clearinghouse on
Assessment and Evaluation [ERIC/AE] provides several additional useful web sites. One of these, Scoring Rubrics:
Definitions & Constructions (2000b), specifically addresses questions that are frequently asked with regard to scoring
rubrics. This site also provides electronic links to web resources and bibliographic references to books and articles that
discuss scoring rubrics. For more recent developments within assessment and evaluation, a search can be completed
on the abstracts of papers that will soon be available through ERIC/AE (2000a). This site also contains a direct link to
ERIC/AE abstracts that are specific to scoring rubrics.
Search engines that are available on the web may be used to locate additional electronic resources. When
using this approach, the search criteria should be as specific as possible. Generic searches that use the terms "rubrics"
or "scoring rubrics" will yield a large volume of references. When seeking information on scoring rubrics from the web, it
is advisable to use an advanced search and specify the grade level, subject area and topic of interest. If more resources
are desired than result from this conservative approach, the search criteria can be expanded.