Basic Concept in Assessment

Assessment in Learning

Uploaded by Werson De Asis

Basic Concept in Assessment

1. JARRY FUENTES, MARYDEN ANDALECIO, BEVERLY DADIVAS (BSEd 3D-TLE); MARIA SHEILA D. SIMON, Ed.D., Course Facilitator
2. Linn and Miller (2005) define assessment as any of a variety of procedures used to obtain information about student performance. Assessment refers to the full range of information gathered and synthesized by teachers about their students and their classrooms (Arends, 1994). Assessment is a method for analyzing and evaluating student achievement or program success.
3. Assessment for Learning. When assessment for learning is practiced, students are encouraged to be more active in their learning and its associated assessment. The ultimate purpose of assessment for learning is to create self-regulated learners who can leave school able and confident to continue learning throughout their lives. Teachers need to know at the outset of a unit of study where their students are in terms of their learning and then continually check on how they are progressing by strengthening the feedback they get from their learners.
4. Measurement, Evaluation and Assessment. Measurement as used in education refers to the process of quantifying an individual's achievement, personality, and attitudes, among others, by means of appropriate measuring instruments. Educational Measurement: The first step towards elevating a field of study into a science is to take measurements of the quantities and qualities of interest in the field.
5. Basic Concepts in Assessment. As teachers, we are continually faced with the challenge of assessing the progress of our students as well as our own effectiveness as teachers. Educational Measurement: The first step towards elevating a field of study into a science is to take measurements of the quantities and qualities of interest in the field. Types of Measurement: Objective measurements are measurements that do not depend on the person or individual taking the measurements. Subjective measurements often differ from one assessor to the next even if the same quantity or quality is being measured.
6. The underlying principle in educational measurement is summarized by the following formula: measurement of quantity or quality of interest = true value + random error.
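To make the formula concrete, here is a minimal simulation sketch; the true score of 80 and the size of the random error are invented for illustration:

```python
import random

# Each observed measurement is the (unknown) true value plus a random error,
# per the formula: observed = true value + random error.
def observed_score(true_value, error_sd=2.0):
    return true_value + random.gauss(0, error_sd)

random.seed(1)
true_value = 80
# Averaging many repeated measurements lets the random errors cancel,
# so the mean of the observed scores approaches the true value.
scores = [observed_score(true_value) for _ in range(10_000)]
mean_score = sum(scores) / len(scores)
```

A single observed score can be misleading, which is one reason assessment draws on multiple measures rather than a single test.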
7. Evaluation is the process of systematic collection and analysis of both qualitative and quantitative data for the purpose of making decisions and judgments. Assessment, Test, and Measurement. Test: An instrument or systematic procedure for measuring a sample of behavior by posing a set of questions in a uniform manner. Measurement: The process of obtaining a numerical description of the degree to which an individual possesses a particular characteristic. Measurement answers the question "How much?"
8. Test, Non-test, Examination, Test Item and Quiz. A test in the educational setting is a question or a series of questions which aims to determine how well a student learned from a subject or topic taught. A non-test is a question or activity which determines the interests, attitudes and other characteristics of a student, and whose answer or answers are not judged right or wrong. Examples: a personality inventory, "What is your favorite sport?", "Why do you prefer green vegetables?" An examination is a long test which may be composed of one or more test formats. Examples: a mid-term examination, the Licensure Examination for Teachers, a comprehensive examination. A test item is any question included in a test or examination. Examples: Who was the President of the Philippines when World War II broke out? Is "Little Red Riding Hood" a short story? A quiz is a short test usually given at the beginning or at the end of a discussion period.
9. Indicators, Variables and Factors. An educational variable (denoted by a letter of the English alphabet, such as X) is a measurable characteristic of a student. Variables may be directly measurable, as in X = age or X = height of a student. An indicator, I, denotes the presence or absence of a measured characteristic. Thus: I = 1 if the characteristic is present, and I = 0 if the characteristic is absent.
10. Various Roles of Assessment. Assessment plays a number of roles in making instructional decisions. Summative role: An assessment may be done for summative purposes, as in the illustration given above for Grade VI mathematics achievement. Diagnostic role: Assessment may be done for diagnostic purposes. In this case, we are interested in determining the gaps in learning or learning processes, hopefully, to be able to bridge these gaps. Formative role: Another purpose of assessment is formative. In this role, assessment guides the teacher in his or her day-to-day teaching activity. Placement: The final role of assessment in curricular decisions concerns placement. Assessment plays a vital role in determining the appropriate placement of a student both in terms of achievement and aptitude. Aptitude refers to the area or discipline where a student would most likely excel or do well.
11. A Systems Model for Evaluation. Evaluation provides a tool for determining the extent to which an educational process or program is effective and at the same time indicates directions for remediating processes of the curriculum that do not contribute to successful student performance (Jason, 2003): CONTEXT, INPUTS, PROCESS, OUTPUT, OUTCOME.
12. Evaluation is the process of gathering and interpreting evidence regarding the problems and progress of individuals in achieving desirable educational goals. Chief purpose of evaluation: the improvement of the individual learner. Other purposes of evaluation: • To maintain standards • To select students • To motivate learning • To guide learning • To furnish instruction • To appraise educational instrumentalities
13. Functions of evaluation: • Prediction • Diagnosis • Research. Areas of educational evaluation: • Achievement • Aptitude • Interest • Personality. A well-defined system of evaluation: • Enables one to clarify goals • Checks upon each phase of development • Diagnoses learning difficulties • Plans carefully for remediation
14. Principles of Educational Evaluation • Evaluation must be based on previously accepted educational objectives. • Evaluation should be a continuous, comprehensive and cumulative process. • Evaluation should recognize that the total individual personality is involved in learning. • Evaluation should be democratic and cooperative. • Evaluation should be positive and action-directed. • Evaluation should give the pupil the opportunity to become increasingly independent in self-appraisal and self-direction. • Evaluation should include all significant evidence from every possible source. • Evaluation should take into consideration the limitations of the particular educational situation.
15. Measurement is the part of the educational evaluation process whereby some tools or instruments are used to provide a quantitative description of the progress of students towards desirable educational goals. Test or testing is a systematic procedure to determine the presence or absence of certain characteristics or qualities in a learner. Types of evaluation: • Placement • Formative • Diagnostic • Summative
16. Educational assessment serves three important functions (Bernardo, 2003): 1. Student selection and certification: to make decisions about which students get admitted, retained, promoted, and certified for graduation. 2. Instructional monitoring: to provide information about student learning and teaching performance to help teachers monitor, manage, and make decisions about the instructional system. 3. Public accountability and program evaluation: to make decisions about the different aspects of the educational process. Assessments help make good decisions if they provide accurate, authentic, reliable and valid information about educational learning goals.
17. Principles of Educational Assessment • Educational assessment always begins with
educational values and standards. • Assessment is not an end in itself but a vehicle for
attaining educational goals and for improving on these educational goals. • These
educational goals (values and standards) should be made explicit to all concerned from
the very beginning. • Desired learning competencies (skills, knowledge, values, ways of
thinking and learning) determine what we choose to assess. • Educational values and
standards should also characterize how we assess. • Assessment systems should lead
educators to help students attain the educational goals, values, and standards.
18. Characteristics of Assessment • Assessment is not a single event but a continuous cycle. • Assessment must be an open process. • Assessment must promote valid inferences. • Assessment that matters should always employ multiple measures of performance. • Assessment should measure what is worth learning, not just what is easy to measure. • Assessment should support every student's opportunity to learn important mathematics.
19. Elements of the Assessment Process. Assessment should center on the learner and the learning process. Huba and Freed (2000) explained the four elements of learner-centered assessment: 1. Formulating statements of intended learning outcomes 2. Developing or selecting assessment measures 3. Creating experiences leading to outcomes 4. Discussing and using assessment results to improve learning
20. The Three Types of Learning. Believing that there was more than one type of learning, Benjamin Bloom and a committee of colleagues in 1956 identified three domains of educational activities: the cognitive, referring to mental skills; the affective, referring to growth in feeling or emotion; and the psychomotor, referring to manual or physical skills.
22. DOMAIN II: Psychomotor (Skills). In the early seventies, E. Simpson, Dave, and A. Harrow recommended categories for the psychomotor domain which included physical coordination, movement, and use of the skilled body parts. DOMAIN III: Affective (Attitude). The affective domain refers to the way in which we deal with situations emotionally, such as feelings, appreciation, enthusiasm, motivation, values, and attitudes. The taxonomy is ordered into five levels as the person progresses towards internalization, in which the attitude or feeling consistently guides or controls a person's behavior.
23. Principles of Good Practice in Assessing Learning Outcomes 1. The assessment of student learning starts with the institution's mission and core values. 2. Assessment works best when the program has a clear statement of objectives aligned with the institutional mission and core values. 3. Outcomes-based assessment focuses on the student activities that will be relevant after schooling concludes. 4. Assessment requires attention not only to outcomes but also, and equally, to the activities and experiences that lead to the attainment of learning outcomes. 6. Assessment works best when it is continuous, ongoing and not episodic. 7. Assessment should be cumulative, because improvement is best achieved through a linked series of activities done over time.
24. Kinds of Assessment. Formative assessment: Formative assessment is an integral part of teaching and learning. It does not contribute to the final mark given for the module; instead it contributes to learning through providing feedback. It should indicate what is good about a piece of work and why; it should also indicate what is not so good and how the work could be improved. Effective formative feedback will affect what the student and the teacher do next. Summative assessment: Summative assessment demonstrates the extent of a learner's success in meeting the assessment criteria used to gauge the intended learning outcomes of a module or program, and it contributes to the final mark given for the module. It is normally, though not always, used at the end of a unit of teaching. Summative assessment is used to quantify achievement, to reward achievement, and to provide data for selection (to the next stage in education or to employment).
25. Diagnostic assessment: Like formative assessment, diagnostic assessment is
intended to improve the learner’s experience and their level of achievement. However,
diagnostic assessment looks backwards rather than forwards. It assesses what the
learner already knows and/or the nature of difficulties that the learner might have, which,
if undiagnosed, might limit their engagement in new learning. It is often used before
teaching or when a problem arises. Dynamic assessment: Dynamic assessment
measures what the student achieves when given some teaching in an unfamiliar topic or
field. An example might be assessment of how much Swedish is learnt in a short block
of teaching to students who have no prior knowledge of the language. It can be useful to
assess potential for specific learning in the absence of relevant prior attainment, or to
assess general learning potential for students who have a particularly disadvantaged
background. It is often used in advance of the main body of teaching.
26. Synoptic assessment: Synoptic assessment encourages students to combine
elements of their learning from different parts of a program and to show their
accumulated knowledge and understanding of a topic or subject area. A synoptic
assessment normally enables students to show their ability to integrate and apply their
skills, knowledge and understanding with breadth and depth in the subject. It can help to
test a student's capability of applying the knowledge and understanding gained in one
part of a program to increase their understanding in other parts of the program, or across
the program as a whole. Synoptic assessment can be part of other forms of assessment.
Criterion-referenced assessment: Each student's achievement is judged against specific
criteria. In principle no account is taken of how other students have performed. In
practice, normative thinking can affect judgments of whether or not a specific criterion
has been met. Reliability and validity should be assured through processes such as
moderation, trial marking, and the collation of exemplars. Ipsative assessment: This is
assessment against the student’s own previous standards. It can measure how well a
particular task has been undertaken against the student’s average attainment, against
their best work, or against their most recent piece of work. Ipsative assessment tends to
correlate with effort, to promote effort-based attributions of success, and to enhance
motivation to learn.
27. Evaluative assessment: Evaluative assessment provides instructors with curricular feedback (e.g., on the value of a field trip or an oral presentation technique). Educative assessment: Integrated within learning activities themselves, educative assessment builds student (and faculty) insight and understanding about their own learning and teaching. In short, assessment is a form of learning.
28. Effective Assessment: Enhancing Learning by Enhancing Assessment. Assessment is a central element in the overall quality of teaching and learning in higher education. Well-designed assessment sets clear expectations, establishes a reasonable workload (one that does not push students into rote reproductive approaches to study), and provides opportunities for students to self-monitor, rehearse, practice and receive feedback. Assessment is an integral component of a coherent educational experience. Three objectives for higher education assessment: • Assessment that guides and encourages effective approaches to learning; • Assessment that validly and reliably measures expected learning outcomes, in particular the higher-order learning that characterizes higher education; • Assessment and grading that define and protect academic standards.
29. Sixteen Indicators of Effective Assessment in Higher Education: A Checklist for Quality in Student Assessment. 1. Assessment is treated by staff and students as an integral and prominent component of the entire teaching and learning process rather than a final adjunct to it. 2. The multiple roles of assessment are recognized. The powerful motivating effect of assessment requirements on students is understood, and assessment tasks are designed to foster valued study habits. 3. There is a faculty/departmental policy that guides individuals' assessment practices. Subject assessment is integrated into an overall plan for course assessment. 4. There is a clear alignment between expected learning outcomes, what is taught and learnt, and the knowledge and skills assessed: there is a closed and coherent 'curriculum loop'. 5. Assessment tasks assess the capacity to analyze and synthesize new information and concepts rather than simply recall information previously presented.
30. 6. A variety of assessment methods is employed so that the limitations of particular
methods are minimized. 7. Assessment tasks are designed to assess relevant generic
skills as well as subject- specific knowledge and skills. 8. There is a steady progression
in the complexity and demands of assessment requirements in the later years of
courses. 9. There is provision for student choice in assessment tasks and weighting at
certain times. 10. Student and staff workloads are considered in the scheduling and
design of assessment tasks. 11. Excessive assessment is avoided. Assessment tasks
are designed to sample student learning.
31. 12. Assessment tasks are weighted to balance the developmental ('formative') and
judgmental (‘summative’) roles of assessment. Early low-stakes, low-weight assessment
is used to provide students with feedback. 13. Grades are calculated and reported on
the basis of clearly articulated learning outcomes and criteria for levels of achievement.
14. Students receive explanatory and diagnostic feedback as well as grades. 15.
Assessment tasks are checked to ensure there are no inherent biases that may
disadvantage particular student groups. 16. Plagiarism is minimized through careful task
design, explicit education and appropriate monitoring of academic honesty.
32. The Assessment Cycle. Good assessment follows an intentional and reflective
process of design, implementation, evaluation, and revision. The Assessment Cycle
relies on four simple but dynamic words to represent this process.
33. • What do I want students to learn? • How do I teach effectively? • Are my outcomes being met? • How do I use what I've learned?
34. JARRY FUENTES, BSEd 3D-TLE; MARYDEN ANDALECIO, BSEd 3D-TLE; BEVERLY DADIVAS, BSE; MARIA SHEILA D. SIMON, Ed.D., Course Facilitator
Assessing student learning is something that every teacher has to do,
usually quite frequently. Written tests, book reports, research papers,
homework exercises, oral presentations, question-and-answer sessions,
science projects, and artwork of various sorts are just some of the ways in
which teachers measure student learning, with written tests accounting for
about 45 percent of a typical student's course grade (Green & Stager,
1986/1987). It is no surprise, then, that the typical teacher can spend
between one-third and one-half of her class time engaged in one or another
type of measurement activity (Stiggins, 1994). Yet despite the amount of
time teachers spend assessing student learning, it is a task that most of
them dislike and that few do well. One reason is that many teachers have
little or no in-depth knowledge of assessment principles (Crooks, 1988;
Hills, 1991; Stiggins, Griswold, & Wikelund, 1989). Another reason is that
the role of assessor is seen as being inconsistent with the role of teacher
(or helper). Since teachers with more training in assessment use more
appropriate assessment practices than do teachers with less training
(Green & Stager, 1986/1987), a basic goal of this chapter is to help you
understand how such knowledge can be used to reinforce, rather than
work against, your role as teacher. Toward that end, we will begin by
defining what we mean by the term assessment and by two key elements of
this process, measurement and evaluation.
What is Assessment?
Broadly conceived, classroom assessment involves two major types of
activities: collecting information about how much knowledge and skill
students have learned (measurement) and making judgments about the
adequacy or acceptability of each student's level of learning (evaluation).
Both the measurement and evaluation aspects of classroom assessment
can be accomplished in a number of ways. To determine how much
learning has occurred, teachers can, for example, have students take
exams, respond to oral questions, do homework exercises, write papers,
solve problems, and make oral presentations. Teachers can then evaluate
the scores from those activities by comparing them either to one another
or to an absolute standard (such as an A equals 90 percent correct).
Throughout much of this chapter we will explain and illustrate the various
ways in which you can measure and evaluate student learning.
Measurement
Measurement is the assignment of numbers to certain attributes of objects,
events, or people according to a rule-governed system. For our purposes,
we will limit the discussion to attributes of people. For example, we can
measure someone's level of typing proficiency by counting the number of
words the person accurately types per minute or someone's level of
mathematical reasoning by counting the number of problems correctly
solved. In a classroom or other group situation, the rules that are used to
assign the numbers will ordinarily create a ranking that reflects how much
of the attribute different people possess (Linn & Gronlund, 1995).
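The idea of measurement as a rule-governed assignment of numbers that creates a ranking can be sketched as follows; the rule is the typing example from the text, while the names and raw counts are hypothetical:

```python
# Rule (from the text): typing proficiency is measured as the number of
# words accurately typed per minute.
def words_per_minute(accurate_words, minutes):
    return accurate_words / minutes

# Hypothetical students and their (invented) typing results.
typists = {
    "Ana": words_per_minute(300, 5),    # 60 wpm
    "Ben": words_per_minute(200, 5),    # 40 wpm
    "Carla": words_per_minute(250, 5),  # 50 wpm
}

# The assigned numbers create a ranking that reflects how much of the
# attribute each person possesses.
ranking = sorted(typists, key=typists.get, reverse=True)
```

The same rule applied to everyone is what makes the resulting numbers comparable across students.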
Evaluation
Evaluation involves using a rule-governed system to make judgments
about the value or worth of a set of measures (Linn & Gronlund, 1995).
What does it mean, for example, to say that a student answered eighty out
of one hundred earth science questions correctly? Depending on the rules
that are used, it could mean that the student has learned that body of
knowledge exceedingly well and is ready to progress to the next unit of
instruction or, conversely, that the student has significant knowledge gaps
and requires additional instruction.
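The point about rules can be made concrete with a small sketch: the same measure, eighty out of one hundred items correct, leads to opposite judgments under different mastery cutoffs. The cutoff values here are invented for illustration:

```python
# Evaluation applies a rule to a measure: given a score, decide whether
# the student is ready to progress or needs additional instruction.
def evaluate(correct, total, mastery_cutoff):
    proportion = correct / total
    if proportion >= mastery_cutoff:
        return "ready for the next unit"
    return "needs additional instruction"

# The same score of 80/100 is judged differently under different rules.
lenient = evaluate(80, 100, mastery_cutoff=0.75)
strict = evaluate(80, 100, mastery_cutoff=0.90)
```

This is why a raw score carries little meaning on its own: the judgment depends entirely on the rule-governed system applied to it.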
Why Should We Assess Students' Learning?
This question has several answers. We will use this section to address four
of the most common reasons for assessment: to provide summaries of
learning, to provide information on learning progress, to diagnose specific
strengths and weaknesses in an individual's learning, and to motivate
further learning.
Summative Evaluation
The first, and probably most obvious, reason for assessment is to provide
to all interested parties a clear, meaningful, and useful summary or
accounting of how well a student has met the teacher's objectives. When
testing is done for the purpose of assigning a letter or numerical grade, it is
often called summative evaluation, since its primary purpose is to sum up
how well a student has performed over time and at a variety of tasks.
Formative Evaluation
A second reason for assessing students is to monitor their progress. The
main thing that teachers want to know from time to time is whether
students are keeping up with the pace of instruction and are understanding
all of the material that has been covered so far. For students whose pace of
learning is either slower or faster than average or whose understanding of
certain ideas is faulty, you can introduce supplementary instruction (a
workbook or a computer-based tutorial program), remedial instruction
(which may also be computer based), or in-class ability grouping (recall
that we discussed the benefits of this arrangement in Chapter 6). Because
the purpose of such assessment is to facilitate or form learning and not to
assign a grade, it is usually called formative evaluation.
Diagnosis
A third reason follows from the second. If you discover a student who is
having difficulty keeping up with the rest of the class, you will probably
want to know why in order to determine the most appropriate course of
action. This purpose may lead you to construct an assessment (or to look
for one that has already been made up) that will provide you with specific
diagnostic information.
Effects on Learning
A fourth reason for assessment of student performance is that it has
potentially positive effects on various aspects of learning and instruction.
As Terence Crooks points out, classroom assessment guides students'
"judgment of what is important to learn, affects their motivation and self-
perceptions of competence, structures their approaches to and timing of
personal study (e.g., spaced practice), consolidates learning, and affects
the development of enduring learning strategies and skills. It appears to be
one of the most potent forces influencing education" (1988, p. 467).
Ways to Measure Student Learning
Just as measurement can play several roles in the classroom, teachers
have several ways to measure what students have learned. Which type of
measure you choose will depend, of course, on the objectives you have
stated. For the purposes of this discussion, objectives can be classified in
terms of two broad categories: knowing about something (for example, that
knots are used to secure objects, that dance is a form of social expression,
that microscopes are used to study things too small to be seen by the
naked eye) and knowing how to do something (for example, tie a square
knot, dance the waltz, operate a microscope). Measures that attempt to
assess the range and accuracy of someone's knowledge are usually called
written tests. And measures that attempt to assess how well somebody can
do something are often referred to as performance tests. Again, keep in
mind that both types have a legitimate place in a teacher's assessment
arsenal. Which type is used, and to what extent, will depend on the purpose
or purposes you have for assessing students. In the next two sections, we
will briefly examine the nature of both types.
Written Tests
Teachers spend a substantial part of each day assessing student learning,
and much of this assessment activity involves giving and scoring some
type of written test. Most written tests are composed of one or more of the
following item types: selected response (multiple choice, true-false, and
matching, for example), short answer, and essay. They are designed to
measure how much people know about a particular subject. In all
likelihood, you have taken hundreds of these types of tests in your school
career thus far. In the next couple of pages, we will briefly describe the
main features, advantages, and disadvantages of each test.
Selected-Response Tests
Characteristics
Selected-response tests are so named because the student reads a
relatively brief opening statement (called a stem) and selects one of the
provided alternatives as the correct answer. Selected-response tests are
typically made up of multiple-choice, true-false, or matching items. Quite
often all three item types are used in a single test. Selected-response tests
are sometimes called "objective" tests because they have a simple and set
scoring system. If alternative (b) of a multiple-choice item is keyed as the
correct response and the student chose alternative (d), the student is
marked wrong, regardless of how much the teacher wanted the student to
be right. But that doesn't mean selected-response items are totally free of
subjective influences. After all, whoever created the test had to make
subjective judgments about which areas to emphasize, how to word items,
and which items to include in the final version. Finally, selected-response
tests are typically used when the primary goal is to assess what might be
called foundational knowledge. This is the basic factual information and
cognitive skills that students need in order to do such high-level tasks as
solve problems and create products (Stiggins, 1994).
Advantages
A major advantage of selected-response tests is efficiency -- a teacher can
ask many questions in a short period of time. Another advantage is ease
and reliability of scoring. With the aid of a scoring template (such as a
multiple-choice answer sheet that has holes punched out where the correct
answer is located), many tests can be quickly and uniformly scored.
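The template-based scoring described above amounts to comparing each response against a fixed answer key; a brief sketch, with a hypothetical key and response sheet:

```python
# Objective scoring: a response counts as correct only if it matches the
# answer key, so any scorer applying the key gets the same result.
def score_test(answer_key, responses):
    return sum(1 for key, resp in zip(answer_key, responses) if key == resp)

# Hypothetical five-item test.
answer_key = ["b", "d", "a", "c", "b"]
student = ["b", "d", "c", "c", "b"]  # item 3 does not match the key

score = score_test(answer_key, student)
```

Because the key fully determines the score, this is the scoring model behind punched templates and machine-readable answer sheets alike.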
Disadvantages
Because items that reflect the lowest level of Bloom's Taxonomy (verbatim
knowledge) are the easiest to write, most teacher-made tests are composed
almost entirely of knowledge-level items (a point we made initially in
Chapter 7). As a result, students focus on verbatim memorization rather
than on meaningful learning. Another disadvantage is that, while we get
some indication of what students know, such tests tell us nothing about
what students can do with that knowledge.
Short-Answer Tests
Characteristics
Instead of selecting from one or more alternatives, the student is asked to
supply a brief answer consisting of a name, word, phrase, or symbol. Like
selected-response tests, short-answer tests can be scored quickly,
accurately, and consistently, thereby giving them an aura of objectivity.
They are primarily used for measuring foundational knowledge.
Advantages
Short-answer items are relatively easy to write, so a test, or part of one, can
be constructed fairly quickly. They allow for either broad or in-depth
assessment of foundational knowledge since students can respond to
many items within a short space of time. Since students have to supply an
answer, they have to recall, rather than recognize, information.
Disadvantages
This item type has the same basic disadvantages as the selected-response
items. Because these items ask only for short verbatim answers, students
are likely to limit their processing to that level, and these items provide no
information about how well students can use what they have learned. In
addition, unexpected but plausible answers may be difficult to score.
Essay Tests
Characteristics
The student is given a somewhat general directive to discuss one or more
related ideas according to certain criteria. One example of an essay
question is "Compare operant conditioning theory and information-
processing theory in terms of basic assumptions, typical research findings,
and classroom applications."
Advantages
Essay tests reveal how well students can recall, organize, and clearly
communicate previously learned information. When well written, essay
tests call on such higher-level abilities as analysis, synthesis, and
evaluation. Because of these demands, students are more likely to try to
meaningfully learn the material over which they are tested.
Disadvantages
Consistency of grading is likely to be a problem. Two students may have
essentially similar responses, yet receive different letter or numerical
grades. These test items are also very time consuming to grade. And
because it takes time for students to formulate and write responses, only a
few questions at most can be given.
Performance Tests

In recent years many teachers and measurement experts have argued that the typical written test should be used far less often because it reveals little or nothing of the depth of students' knowledge and how students use their knowledge to work through questions, problems, and tasks. The solution that these experts have proposed is to use one or more of what are called performance tests.

Performance tests attempt to assess how well students use foundational knowledge to perform complex tasks under more or less realistic conditions. At the low end of the realism spectrum, students may be asked to construct a map, interpret a graph, or write an essay under highly standardized conditions. That is, everyone completes the same task in the same amount of time and under the same conditions. At the high end of the spectrum, students may be asked to conduct a science experiment, produce a painting, or write an essay under conditions that are similar to those of real life. For example, students may be told to produce a compare-and-contrast essay on a particular topic by a certain date, but the resources students choose to use, the number of revisions they make, and when they work on the essay are left unspecified. As we noted in Chapter 5, when performance testing is conducted under such realistic conditions, it is also called authentic assessment (Meyer, 1992). Another term that is often used to encompass both performance testing and authentic assessment, and to distinguish them from traditional written tests, is alternative assessment. In this section we will first define the four different types of performance tests and then look at their most important characteristics.
Types of Performance Tests

Currently, there are four ways in which the performance capabilities of students are typically assessed: direct writing assessments, portfolios, exhibitions, and demonstrations.
Direct Writing Assessments

These tests ask students to write about a specific topic ("Describe the person whom you admire the most, and explain why you admire that person.") under a standard set of conditions. Each essay is then scored by two or more people according to a set of defined criteria.
Portfolios

A portfolio may contain one or more pieces of a student's work, some of which demonstrate different stages of completion. For example, a student's writing portfolio may contain business letters; pieces of fiction; poetry; and an outline, rough draft, and final draft of a research paper. Through the inclusion of various stages of a research paper, both the process and the end product can be assessed. Portfolios can also be constructed for math and science as well as for projects that combine two or more subject areas. Often the student is involved in the selection of what is included in his portfolio. The portfolio is sometimes used as a showcase to illustrate exemplary pieces, but it also works well as a collection of pieces that represent a student's typical performances. In its best and truest sense, the portfolio functions not just as a housing for these performances but also as a means of self-expression, self-reflection, and self-analysis for an individual student (Templeton, 1995).
Exhibitions

Exhibitions involve just what the label suggests -- a showing of such products as paintings, drawings, photographs, sculptures, videotapes, and models. As with direct writing assessments and portfolios, the products a student chooses to exhibit are evaluated according to a predetermined set of criteria.
Demonstrations

In this type of performance testing, students are required to show how well they can use previously learned knowledge or skills to solve a somewhat unique problem (such as conducting a scientific inquiry to answer a question or diagnosing the cause of a malfunctioning engine and describing the best procedure for fixing it) or perform a task (such as reciting a poem, performing a dance, or playing a piece of music).
Ways to Evaluate Student Learning

Once you have collected all the measures you intend to collect -- for example, test scores, quiz scores, homework assignments, special projects, and laboratory experiments -- you will have to give the numbers some sort of value (the essence of evaluation). As you probably know, this is most often done by using an A to F grading scale. Typically, a grade of A indicates superior performance; a B, above-average performance; a C, average performance; a D, below-average performance; and an F, failure. There are two general ways to approach this task. One approach involves comparisons among students. Such forms of evaluation are called norm-referenced since students are identified as average (or normal), above average, or below average. An alternative approach is called criterion-referenced because performance is interpreted in terms of defined criteria. Although both approaches can be used, we favor criterion-referenced grading for reasons we will mention shortly.
NORM-REFERENCED GRADING

A norm-referenced grading system assumes that classroom achievement will naturally vary among a group of heterogeneous students because of differences in such characteristics as prior knowledge, learning skills, motivation, and aptitude. Under ideal circumstances (hundreds of scores from a diverse group of students), this variation produces a bell-shaped, or "normal," distribution of scores that ranges from low to high, has few tied scores, and has only a very few low scores and only a very few high scores. For this reason, norm-referenced grading procedures are also referred to as "grading on the curve."
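The logic of grading on the curve can be sketched as a rank-based procedure. This is a minimal illustration, not a method taken from the text: the function name and the particular cutoffs (top 10% A, next 20% B, middle 40% C, next 20% D, bottom 10% F) are assumptions, and actual curve schemes vary.

```python
def grade_on_curve(scores):
    """Assign letter grades by class rank (illustrative curve scheme only).

    Assumed cutoffs: top 10% A, next 20% B, middle 40% C,
    next 20% D, bottom 10% F.
    """
    ranked = sorted(scores, reverse=True)  # highest score first
    n = len(ranked)
    cutoffs = [(0.10, "A"), (0.30, "B"), (0.70, "C"), (0.90, "D"), (1.00, "F")]
    grades = {}
    for rank, score in enumerate(ranked):
        fraction = (rank + 1) / n  # fraction of the class at or above this rank
        for limit, letter in cutoffs:
            if fraction <= limit:
                grades.setdefault(score, letter)  # tied scores keep the better grade
                break
    return {score: grades[score] for score in scores}
```

With a small class the resulting distribution only approximates the normal curve described above; the scheme presupposes the large, heterogeneous group of scores the text mentions.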
CRITERION-REFERENCED GRADING

A criterion-referenced grading system permits students to benefit from mistakes and to improve their level of understanding and performance. Furthermore, it establishes an individual (and sometimes cooperative) reward structure, which fosters motivation to learn to a greater extent than other systems.
Under a criterion-referenced system, grades are determined through comparison of the extent to which each student has attained a defined standard (or criterion) of achievement or performance. Whether the rest of the students in the class are successful or unsuccessful in meeting that criterion is irrelevant. Thus, any distribution of grades is possible. Every student may get an A or an F, or no student may receive these grades. For reasons we will discuss shortly, very low or failing grades tend to occur less frequently under a criterion-referenced system.
A common version of criterion-referenced grading assigns letter grades on the basis of the percentage of test items answered correctly. For example, you may decide to award an A to anyone who correctly answers at least 85 percent of a set of test questions, a B to anyone who correctly answers 75 to 84 percent, and so on down to the lowest grade. To use this type of grading system fairly, which means specifying realistic criterion levels, you would need to have some prior knowledge of the levels at which students typically perform. You would thus be using normative information to establish absolute or fixed standards of performance. However, although norm-referenced and criterion-referenced grading systems both spring from a normative database (that is, from comparisons among students), only the former system uses those comparisons to directly determine grades.
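The percentage scheme just described can be expressed directly in code. The A (at least 85 percent) and B (75 to 84 percent) cutoffs come from the text; the C and D cutoffs, and the function name, are assumptions added to complete the sketch.

```python
def letter_grade(percent_correct):
    """Map percent correct to a letter under a criterion-referenced scheme.

    The A (>= 85) and B (75-84) cutoffs follow the text; the C (>= 65)
    and D (>= 55) cutoffs are illustrative assumptions.
    """
    for minimum, letter in [(85, "A"), (75, "B"), (65, "C"), (55, "D")]:
        if percent_correct >= minimum:
            return letter
    return "F"

# Each grade depends only on the student's own percentage, so any
# distribution of grades is possible.
print([letter_grade(p) for p in [92, 78, 66, 85, 50]])  # ['A', 'B', 'C', 'A', 'F']
```

Note how the comparison here is against a fixed standard, not against other students, which is exactly what distinguishes this system from norm-referenced grading.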
Criterion-referenced grading systems (and criterion-referenced tests) have become increasingly popular in recent years primarily because of three factors. First, educators and parents complained that norm-referenced tests and grading systems provided too little specific information about student strengths and weaknesses. Second, educators have come to believe that clearly stated, specific objectives constitute performance standards, or criteria, that are best assessed with criterion-referenced measures. Third, and perhaps most important, contemporary theories of school learning claim that most, if not all, students can master most school objectives under the right circumstances. If this assertion is even close to being true, then norm-referenced testing and grading procedures, which depend on variability in performance, will lose much of their appeal.
Suggestions for Teaching in Your Classroom: Effective Assessment Techniques

1. As early as possible in a report period, decide when and how often to give tests and other assignments that will count toward a grade, and announce tests and assignments well in advance.
2. Prepare a content outline and/or a table of specifications of the objectives to be covered on each exam, or otherwise take care to obtain a systematic sample of the knowledge and skill acquired by your students.
3. Consider the purpose of each test or measurement exercise in light of the developmental characteristics of the students in your classes and the nature of the curriculum for your grade level.
4. Decide whether a written test or a performance test is most appropriate.
5. Make up and use a detailed answer key.
   a. Evaluate each answer by comparing it to the key.
   b. Be willing and prepared to defend the evaluations you make.
6. During and after the grading process, analyze questions and answers in order to improve future exams.
Resources for Further Investigation

Suggestions for Constructing Written and Performance Tests

For specific suggestions on ways to write different types of items for paper-and-pencil tests of knowledge and on methods for constructing and using rating scales and checklists to measure products, performances, and procedures, consult one or more of the following books: Measurement and Evaluation in Teaching (7th ed., 1995), by Robert Linn and Norman Gronlund; How to Make Achievement Tests and Assessments (5th ed., 1993), by Norman Gronlund; Classroom Assessment: What Teachers Need to Know (1995), by W. James Popham; Student-Centered Classroom Assessment (1994), by Richard Stiggins; Classroom Assessment (2d ed., 1994), by Peter Airasian; and Practical Aspects of Authentic Assessment (1994), by Bonnie Campbell Hill and Cynthia Ruptic.
The Learning Resources Development Center (LRDC) at the University of Pittsburgh publishes a large number of briefs, articles, and reviews related to assessment and learning, particularly emphasizing cognitive-based approaches. An online resource of the LRDC can be found at http://www.lrdc.pitt.edu/publications.html. The most extensive online database of assessment information is the ERIC/AE Test Locator, which is found at www.cua.edu/www/eric_ae/testcol.html. It includes numerous topics, reviews of tests, suggestions and digests relating to alternative assessment, and broader standards and policy-making information as it relates to evaluation and assessment of students.
Writing Higher-Level Questions

As Benjamin Bloom and others point out, teachers have a disappointing tendency to write test items that reflect the lowest level of the taxonomy: knowledge. To avoid this failing, carefully read Part 2 of Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I: Cognitive Domain (1956), edited by Benjamin Bloom, Max Englehart, Edward Furst, Walker Hill, and David Krathwohl. Each level of the taxonomy is clearly explained and followed by several pages of illustrative test items.
Analyzing Test Items
Norman Gronlund briefly discusses item-analysis procedures for norm-referenced and criterion-referenced tests in Chapter 6 of How to Make Achievement Tests and Assessments (5th ed., 1993). For norm-referenced multiple-choice tests, these include procedures for assessing the difficulty of each item, the discriminating power of each item, and the effectiveness of each alternative answer. For criterion-referenced tests, they include a measure for assessing the effects of instruction. More detailed discussions of item-analysis procedures can be found in Chapter 8 of Educational Testing and Measurement: Classroom Application and Practice (4th ed., 1993), by Tom Kubiszyn and Gary Borich.
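The first two norm-referenced measures mentioned, item difficulty and discriminating power, can be illustrated with a small sketch. The code below is not from the sources cited: the function name is hypothetical, and the thirds-based upper-lower split is one common convention chosen here as an assumption.

```python
def item_analysis(responses):
    """Compute per-item difficulty and discrimination indices.

    `responses` is a list of per-student score lists, one 0/1 entry per
    test item. Difficulty is the proportion of the class answering the
    item correctly; discrimination contrasts the top- and bottom-scoring
    thirds of the class (an upper-lower groups method).
    """
    n_students = len(responses)
    n_items = len(responses[0])
    ranked = sorted(responses, key=sum, reverse=True)  # best total score first
    group = max(1, round(n_students / 3))  # size of the upper and lower groups
    upper, lower = ranked[:group], ranked[-group:]
    stats = []
    for i in range(n_items):
        difficulty = sum(student[i] for student in responses) / n_students
        discrimination = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / group
        stats.append({"difficulty": difficulty, "discrimination": discrimination})
    return stats
```

An item most of the class misses has a low difficulty index (lower means harder, somewhat counterintuitively), while an item that high scorers get right and low scorers miss has a discrimination index near 1.0.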
Also, Question Mark Software, based in Britain, produces a software program that can help teachers generate quality test items. Information on the software can be found at http://www.qmark.com or by calling the U.S. distributor at 800-863-3950.
This was excerpted from Chapter 12 of Biehler/Snowman, PSYCHOLOGY APPLIED TO TEACHING, 8/e, Houghton Mifflin Co., 1997.
For more information on assessment -- especially on how to construct items -- see Orlich et al., TEACHING STRATEGIES, 5/e, Houghton Mifflin Co., 1998, Chapter 8, "Small Group Discussions and Cooperative Learning."
For more information on assessment in the Grabes' INTEGRATING TECHNOLOGY FOR MEANINGFUL LEARNING, 2/e, Houghton Mifflin Co., 1998, see the "Spotlight on Assessment" sections on pages 7, 52, 171, 316, and 357.
For more information on assessment in Gage/Berliner, EDUCATIONAL PSYCHOLOGY, 6/e, 1998, see Chapter 13, "Basic Concepts in Assessment and the Interpretation of Standardized Testing," and Chapter 14, "The Teacher's Assessment of Student Learning."
Copyright Houghton Mifflin Company. All Rights Reserved.
Role of Assessment in Instructional Decisions

By: Kayce Joy L. Saliendrez
Assessment takes place at three points in instruction:

- Pre-instruction (prior knowledge)
- During instruction (students' progress)
- Post-instruction (mastery)
Pre-instruction assessment provides information about lacking competencies such as knowledge and skills.
Assessment during instruction monitors students' learning and lets the teacher set instruction at a level that challenges students' metacognitive skills, or higher-level thinking.
Post-instruction assessment monitors whether the learner has mastered the basic content, knowledge, and skills required for the learning activity.
The purposes of assessment are:

- To determine the students' entry behavior
- To determine whether the objectives have been attained
- To determine students' strengths and weaknesses
- To rate the students' performance for the purpose of giving grades
- To improve the teaching-learning process
Evaluation is used to determine:

- Placement: what the students are and what they already know
- Diagnostic: students' weaknesses, for remedial instruction
- Formative: whether instructional objectives are achieved
- Summative: whether the students have mastered the objectives, for the purpose of giving grades
The four types of assessment are placement, diagnostic, formative, and summative.
Placement assessment:

- Used to determine the entry behavior of the pupils
- Used to determine performance at the beginning of instruction
- Goal: to determine the student's position in the instructional sequence and the mode of evaluation
Diagnostic assessment:

- Used to determine the specific learning needs of the students
- Identifies strengths and weaknesses
Examples of diagnostic assessment:

- Pre-tests on content and abilities
- Self-assessments identifying skills and competencies
- Discussion board responses to content-specific prompts
- Interviews (a brief, private, 10-minute interview of each student)
Formative assessment:

- Assessment during instruction
- Helps detect which students need attention
Examples of formative assessment:

- Observations during in-class activities and of students' non-verbal feedback during lecture
- Homework exercises as review for exams and class discussions
- Reflection journals that are reviewed periodically during the semester
- Question-and-answer sessions, both formal (planned) and informal (spontaneous)
- Conferences between the instructor and student at various points in the semester
- In-class activities where students informally present their results
- Student feedback collected by periodically answering specific questions about the instruction and their self-evaluation of performance and progress
Summative assessment:

- Used to determine mastery at the end of the course
- An overall assessment of achievement at the end of instruction
- Used primarily for assigning course grades
Examples of summative assessment:

- Examinations (major, high-stakes exams)
- Final examination (a truly summative assessment)
- Term papers (drafts submitted throughout the semester would be a formative assessment)
- Projects (project phases submitted at various completion points could be formatively assessed)
- Portfolios (could also be assessed during their development as a formative assessment)
- Performances
- Student evaluation of the course (teaching effectiveness)
- Instructor self-evaluation
Uses of assessment results:

- To determine the effectiveness of the teacher's methods
- To give meaning to students' efforts in their quest for quality learning
- To justify the request for and utilization of supplies, materials, and equipment in the school's operation
- To plan for and improve the next educational activities
- To give recognition and awards to the best-performing individuals
- To promote quality assurance within and outside of the school
Reference: https://www.azwestern.edu/academic_services/instruction/assessment/resources/downloads/formative%20and_summative_assessment.pdf, retrieved June 26, 2014.