Chapter 4 - Assessment of Learning - Administering, Analyzing, And Improving Tests

Chapter 4 focuses on administering, analyzing, and improving tests in educational settings, outlining key learning objectives and guidelines for effective test administration. It discusses item analysis, including difficulty and discrimination indices, to evaluate test quality and improve assessment tools. The chapter emphasizes the importance of analyzing student responses to enhance test items and instructional strategies.


Chapter 4

Administering, Analyzing,
and Improving Tests
Assessment of Learning I

Reporters:
Christine Suguitan, Ella Mae Tejaro,
Rommel Ramos, and Loren Venus Tabago
Learning Objectives
1. define the basic concepts regarding item analysis;
2. identify the steps in improving test items;
3. solve for the difficulty index and the discrimination index;
4. identify the level of difficulty of an item;
5. perform item analysis properly and correctly;
6. identify the items to be rejected, revised, or retained; and
7. interpret the results of item analysis.
Activity: Role Play
Group the students into groups of 5 members.
Make a 2-minute role play showing the preparation and proper administration of a student examination.
Analysis:
1. What preparations are required of teachers for administering the examination?
2. What common challenges do teachers face before, during, and after administering the examination?
3. What actions are recommended to improve the administration process?
Introduction:

Assessing the performance of students is one of the most important functions of a teacher, and it is a very complicated task.
PACKAGING AND REPRODUCING TEST ITEMS

1. Put items with the same format together.
2. Arrange the test items from easy to difficult.
3. Give proper spacing for each item for easy reading.
4. Keep questions and options on the same page.
5. Place illustrations near the options.
6. Check the answer key.
7. Check the test directions.
8. Provide space for name, date, and score.
9. Proofread the test.
10. Reproduce the test.
ADMINISTERING THE EXAMINATION

This procedure greatly affects the performance of the students in the test. There are guidelines for the proper administration of the test before, during, and after the examination.
GUIDELINES BEFORE ADMINISTERING EXAMINATIONS

1. Try to induce a positive test-taking attitude.
2. Inform the students about the purpose of the test.
3. Give oral directions as early as possible, before distributing the tests.
4. Do not give test-taking hints about guessing, skipping, and the like; these are strictly prohibited.
5. Inform the students about the length of time allowed for the test.
6. Tell the students how to signal or call your attention if they have a question.
7. Tell the students how the papers are to be collected.
8. Tell the students what to do when they are done with the test.
9. Rotate the method of distributing papers.
10. Make sure the room is well lighted and ventilated.
11. Remind the students to write their names on their papers.
12. Tell the students that test papers must be complete.
GUIDELINES DURING THE EXAMINATIONS

1. Avoid giving additional instructions or talking unnecessarily.
2. Avoid giving hints.
3. Monitor student progress and discourage cheating.
4. Give time warnings to students.
5. Make a note of any questions students may ask.
6. Collect the test papers uniformly.
GUIDELINES AFTER THE EXAMINATIONS

1. Grade the papers; do test analysis after scoring and before returning the papers to the students.
2. Record the grades in pencil in your class record.
3. Return the papers in a timely manner.
4. Discuss the test items with the students.
ANALYZING THE TEST

■ Evaluate the quality of each item in the test.
■ Identify items that need improvement or items to be removed from the test.
■ When do we consider a test good?
■ How do we evaluate the quality of each item in the test?
■ Why is it necessary to evaluate each item in the test?
ITEM ANALYSIS
■ A process of examining the students' responses to individual items in the test.
■ The purpose of item analysis is to improve the quality of the assessment tools.
■ It identifies the items to be retained, revised, or rejected, and also the content of the lesson that is mastered or not mastered.
USES OF ITEM ANALYSIS
1. Item analysis data provide a basis for efficient class discussion of the test results.
2. Item analysis data provide a basis for remedial work.
3. Item analysis data provide a basis for general improvement of classroom instruction.
4. Item analysis data provide a basis for increased skill in test construction.
5. Item analysis procedures provide a basis for constructing a test bank.
Types of Quantitative Item Analysis
1. Difficulty Index - Refers to the proportion of students in the upper and lower groups who answered an item correctly.
* The higher the value of the difficulty index, the easier the item is. Hence, more students got the correct answer, and more students mastered the content measured by that item.
To compute the difficulty index of an item, use the formula:

DF = n / N

where:
DF = difficulty index
n = number of students selecting the item correctly in the upper group and in the lower group
N = total number of students who answered the test
Level of Difficulty of an Item

INDEX RANGE    DIFFICULTY LEVEL    DECISION/INTERPRETATION
0.00 - 0.20    Very Difficult      Discard
0.21 - 0.40    Difficult           Discard/Revise
0.41 - 0.60    Moderate            Revise/Retain
0.61 - 0.80    Easy                Revise
0.81 - 1.00    Very Easy           Discard

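As a quick sketch, the formula and the level-of-difficulty table above can be computed in Python (function names are illustrative, not from the chapter):

```python
# Difficulty index sketch, following the chapter's formula DF = n / N and
# the level-of-difficulty table. Function names are illustrative.

def difficulty_index(n_correct: int, n_total: int) -> float:
    """DF = n / N: proportion of students (upper + lower groups) who
    answered the item correctly, over all who took the test."""
    return n_correct / n_total

def difficulty_level(df: float) -> tuple[str, str]:
    """Map a DF value to the chapter's difficulty level and decision."""
    if df <= 0.20:
        return ("Very Difficult", "Discard")
    elif df <= 0.40:
        return ("Difficult", "Discard/Revise")
    elif df <= 0.60:
        return ("Moderate", "Revise/Retain")
    elif df <= 0.80:
        return ("Easy", "Revise")
    return ("Very Easy", "Discard")

# Hypothetical item: 23 of 50 students answered it correctly.
df = difficulty_index(23, 50)
print(df, difficulty_level(df))  # 0.46 ('Moderate', 'Revise/Retain')
```

A higher DF means an easier item, matching the note above that a high difficulty index indicates more students mastered the content.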

Types of Quantitative Item Analysis

2. Discrimination Index - The power of the item to discriminate between the students who scored high and those who scored low in the overall test.

* It is the power of the item to discriminate between the students who know the lesson and those who do not know the lesson.
Types of Discrimination Index

1. Positive discrimination happens when more students in the upper group get the item correct than students in the lower group.
2. Negative discrimination occurs when more students in the lower group get the item correct than students in the upper group.
3. Zero discrimination happens when the number of students in the upper group and the lower group who answer the item correctly is equal; hence, the test item cannot distinguish the students who performed well in the overall test from the students whose performance was very poor.
Level of Discrimination

INDEX            INTERPRETATION
0.19 and below   Poor item, should be eliminated or needs to be revised
0.20 - 0.29      Marginal item, needs some revision
0.30 - 0.39      Reasonably good item, but possibly for improvement
0.40 and above   Very good item

Ebel and Frisbie (1986), as cited by Hetzel (1997)


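The slides define the discrimination index conceptually but do not show a formula. A commonly used one (an assumption here, not taken from these slides) is DI = (CU − CL) / n, where CU and CL are the numbers of correct answers in the upper and lower groups and n is the number of students in each group. A sketch, with interpretation bands following the Ebel and Frisbie table:

```python
# Discrimination index sketch. The formula DI = (CU - CL) / n is the
# commonly used one; it is an assumption, since the slides define DI
# only conceptually. Bands follow Ebel and Frisbie (1986).

def discrimination_index(correct_upper: int, correct_lower: int,
                         group_size: int) -> float:
    """Positive DI means the upper group outperformed the lower group."""
    return (correct_upper - correct_lower) / group_size

def interpret_di(di: float) -> str:
    if di >= 0.40:
        return "Very good item"
    if di >= 0.30:
        return "Reasonably good item, but possibly for improvement"
    if di >= 0.20:
        return "Marginal item, needs some revision"
    return "Poor item, should be eliminated or revised"

# Hypothetical item: groups of 20; 15 upper-group and 6 lower-group
# students answered correctly.
di = discrimination_index(15, 6, 20)
print(di, interpret_di(di))  # 0.45 Very good item
```

A negative DI (more lower-group than upper-group students answering correctly) signals negative discrimination, as described above.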
Discrimination Interpretation
                                                         YES   NO
1. Does the key discriminate positively?
2. Do the incorrect options discriminate negatively?

If the answers to questions 1 and 2 are both YES, retain the item.
If the answer to one question is YES and the other is NO, revise the item.
If the answers to questions 1 and 2 are both NO, reject the item.
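Those three decision rules can be expressed directly as a minimal sketch (argument and function names are illustrative):

```python
# Retain/revise/reject decision from the two YES/NO questions above.
# Names are illustrative, not from the chapter.

def item_decision(key_positive: bool, distracters_negative: bool) -> str:
    """Both YES -> retain; exactly one YES -> revise; both NO -> reject."""
    if key_positive and distracters_negative:
        return "Retain"
    if key_positive or distracters_negative:
        return "Revise"
    return "Reject"

print(item_decision(True, True))    # Retain
print(item_decision(True, False))   # Revise
print(item_decision(False, False))  # Reject
```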
Types of Quantitative Item Analysis

3. Analysis of Response - Through this, you can determine whether the distracters, or incorrect options, are effective or attractive to those who do not know the correct answer.
Distracter Analysis
1. Distracter - the term for an incorrect option in a multiple-choice test, while the correct answer is the KEY. It is very important for the test writer to know whether the distracters are effective or good.
2. Miskeyed item - an item is a potential miskey if more students from the upper group choose an incorrect option than the key.
3. Guessing item - students from the upper group have an equal spread of choices among the given alternatives. Students from the upper group guess their answers for the following reasons:
• The content of the test was not discussed
• The item is very difficult
• The question is trivial
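Assuming we have per-option choice counts for the upper and lower groups, the three distracter checks above can be sketched as follows (all names and counts are illustrative):

```python
# Distracter-analysis sketch (all names illustrative). Given per-option
# choice counts for the upper and lower groups, flag the patterns the
# chapter describes: ineffective distracters, potential miskeys, guessing.

def analyze_distracters(upper: dict, lower: dict, key: str) -> list:
    """upper/lower map each option letter to how many students chose it."""
    findings = []
    for opt in upper:
        if opt == key:
            continue
        # A distracter chosen by no one is not doing its job.
        if upper[opt] + lower[opt] == 0:
            findings.append(f"Option {opt}: ineffective distracter (chosen by no one)")
        # Potential miskey: more upper-group students prefer a distracter
        # than the key.
        if upper[opt] > upper[key]:
            findings.append(f"Option {opt}: potential miskey (upper group prefers it)")
    # Guessing: upper-group choices spread almost evenly over all options.
    if max(upper.values()) - min(upper.values()) <= 1:
        findings.append("Upper group spread evenly: possible guessing item")
    return findings

upper = {"A": 2, "B": 10, "C": 3, "D": 0}  # hypothetical counts; key = "B"
lower = {"A": 6, "B": 4, "C": 5, "D": 0}
print(analyze_distracters(upper, lower, "B"))
# flags Option D as an ineffective distracter
```

In this hypothetical data, Option D attracts no one and would need to be rewritten, mirroring Example 7 below.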
IMPROVING TEST ITEMS

How do we improve test items?
● Analyze Your Tests: Look at how students answered each question.

Example 1: We compared how many high-performing and low-performing students chose each answer. We wanted to see if the right answer was chosen more often by the high-performing students.
Example 2: Just like in Example 1, we compared the number of high- and low-performing students that chose each answer.
Example 3: We looked at how many students in the high and low groups chose the correct answer, and we found that more low-performing students chose the correct answer, which is not good.
Example 4: We saw that the best students chose a different answer than the one marked correct, which told us there might be a mistake in the answer key.
Example 5: We noticed that many students chose two different answers, which told us the question might be confusing.
Example 6: We saw that students chose all the answers almost equally, which told us they were probably guessing.
Example 7: We saw that one of the wrong answers was not chosen by any student, which told us that that wrong answer was not good.
● Check Difficulty: Are questions too hard or too easy?

Examples 1 & 2: We didn't focus on the exact DF, but we checked whether enough students got the answer right.
Example 3: We found the DF was 46%, meaning less than half of the students got the correct answer. The question was moderately difficult.
Example 4: The DF was very low, only 10%. This means the question was very hard, and it made us suspect there was a mistake in the answer key.
Example 5: The DF was 36%, showing the question was difficult.
Example 6: The DF was extremely low, only 18%. This told us the question was very hard, and students were probably guessing.
Example 7: The DF was 38%, meaning this question was difficult.
● See Who Gets It Right: Do better students get it right more often?

We use the 'Discrimination Index,' or DI, to check whether a question helps us tell apart the students who know the material well from those who don't. Ideally, the students who do better in the class overall should also get the right answer to a specific question more often.
In Example 3: When we looked at the results, we found something very strange. More of the students who didn't do well in the class got the correct answer than the students who did well. This means the DI was negative, -21%.
● Fix Bad Answers: Make wrong answers believable, but not right.

When we write test questions, the wrong answers are called 'distractors.' They should trick students who don't know the right answer, but they shouldn't trick students who do.

Examples 1 & 2: We looked at how many students chose each wrong answer. We wanted to see if the wrong answers were chosen by enough students, especially the lower-performing students. If no one chose a wrong answer, it isn't doing its job.
Example 6: In this example, we saw that students chose all the answers almost equally. This means the wrong answers weren't better than the right answer, making it seem like students were just guessing. This means the wrong answers were not effective.
Example 7: Here, we saw that one of the wrong answers, Option D, was chosen by no one. It wasn't 'distracting' anyone! It was too obviously wrong. We need to make the wrong answers believable, so students who don't know the answer have to think carefully.
● Clear Questions: Avoid tricky or confusing questions.

Test questions can be confusing, such as when they have unclear wording or seem to have more than one right answer. This is called 'ambiguity,' and it's a big problem.
In Example 5: We saw that many students, especially the ones who usually do well, were split between two different answers. They couldn't decide which one was correct. This showed us that the question was probably confusing, or 'ambiguous.'
● Check Your Answers: Make sure your answer key is correct.

Sometimes the problem isn't the test question itself, but the answer key. It's easy to make a mistake when marking which answer is correct.
In Example 4: We saw that the students who usually did well in the class kept choosing a different answer than the one marked as correct. This made us think that maybe the answer key was wrong.
● Change or Remove Bad Questions: Don't be afraid to fix or delete questions.

Not all test questions can be saved. Sometimes they are so flawed that the best thing to do is remove them from the test entirely.
In Example 3: We found that more of the students who didn't do well in the class got the correct answer than the students who did well. This is a big problem called 'negative discrimination.' It means the question is confusing the good students and helping the others in the wrong way. Because of this major issue, the question had to be rejected.
In Example 6: We saw that students were choosing all the answers almost equally. This means they were probably just guessing. The question wasn't helping us see who understood the material. It was too hard and confusing, so it also had to be rejected.
● Teach Again if Needed: If many miss a good question, teach that part again.

Even if a test question is well-written, many students might still get it wrong. This can be a sign that they didn't fully understand the topic.
In Example 5: We saw that many students were confused and split between two different answers. This indicated that the question was ambiguous, but it also suggested that maybe the students needed more help understanding the material.
