
Item Analysis

Dana Faye T. Salundaguit, Ed.D.


Item Analysis

Refers to the process of examining students' responses to each item in a test.

Three criteria determine the desirability or undesirability of an item:
1. Difficulty of the item
2. Discriminating power of the item
3. Measure of attractiveness
Difficulty Index

• Refers to the proportion of students in the upper and lower groups who answered an item correctly.
Level of Difficulty of an Item

Index Range    Difficulty Level
0.00-0.20      Very Difficult
0.21-0.40      Difficult
0.41-0.60      Moderately Difficult
0.61-0.80      Easy
0.81-1.00      Very Easy
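As a small illustration (not part of the original slides), the ranges above can be expressed as a simple lookup. This is a minimal Python sketch; the function name difficulty_level is hypothetical.

```python
def difficulty_level(p: float) -> str:
    """Classify a difficulty index (proportion answering correctly, 0.0-1.0)
    using the ranges in the table above."""
    if p <= 0.20:
        return "Very Difficult"
    elif p <= 0.40:
        return "Difficult"
    elif p <= 0.60:
        return "Moderately Difficult"
    elif p <= 0.80:
        return "Easy"
    return "Very Easy"

print(difficulty_level(0.35))   # -> Difficult
```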
Difficulty Index (De Guzman-Santos, 2007)

Range of Difficulty Index   Interpretation     Action
0 – 0.25                    Difficult          Revise or discard
0.26 – 0.75                 Right difficulty   Retain
0.76 and above              Easy               Revise or retain
The Ideal Test

The ideal test should contain items whose difficulty indices are from 0.41 to 0.60, but for most teacher-made tests, 0.30 to 0.70 is acceptable.
Item Analysis Example

Options      A    B    C    D    E*
Upper (10)   2    1    1    2    4
Lower (10)   1    2    2    2    3

Index of Difficulty = (∑x / N) × 100
                    = (7 / 20) × 100
                    = 0.35 × 100
                    = 35%
Making an Item Analysis

Dana Faye T. Salundaguit, Ed.D.


Making an Item Analysis

An item with an index of difficulty of 50% is neither easy nor difficult; at 49% it is difficult, at 51% it is easy, and so on.
Result of Item 7 Taken by 30 Students in a Statistics Test, Subject to Item Analysis

Student  Score  Answer    Student  Score  Answer
1        86     D         16       60     D
2        81     A         17       80     A
3        73     E         18       50     C
4        82     E         19       80     B
5        85     D         20       89     C
6        74     E         21       90     E
7        94     A         22       77     E
8        74     E         23       63     A
9        75     C         24       57     B
10       76     D         25       70     B
11       75     E         26       95     A
12       79     A         27       72     C
13       65     D         28       79     E
14       87     E         29       83     B
15       98     E         30       97     E
1. Arrange the test scores from highest to lowest.

98 – E    82 – E    74 – E
97 – E    81 – A    74 – E
95 – A    80 – A    73 – E
94 – A    80 – B    72 – C
90 – E    79 – E    70 – B
89 – C    79 – A    65 – D
87 – E    77 – E    63 – A
86 – D    76 – D    60 – D
85 – D    75 – E    57 – B
83 – B    75 – C    50 – C
2. Get one-third of the papers from the highest scores and one-third from the lowest scores. The former is called the upper group (U) and the latter the lower group (L). Set aside the middle group. In our example, 10 papers from the upper group and another 10 papers from the lower group should be analyzed.
The scores and answers of these papers for the upper group and lower group are as follows:

Upper Group    Lower Group
98 – E*        74 – E*
97 – E*        74 – E*
95 – A         73 – E*
94 – A         72 – C
90 – E*        70 – B
89 – C         65 – D
87 – E*        63 – A
86 – D         60 – D
85 – D         57 – B
83 – B         50 – C
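As an illustrative sketch (not from the slides), steps 1 and 2 can be done in a few lines of Python. The (score, answer) pairs come from the table of 30 students above; variable names such as responses and upper_group are only illustrative.

```python
# Steps 1-2: sort the papers and split off the upper and lower thirds.
responses = [
    (86, "D"), (81, "A"), (73, "E"), (82, "E"), (85, "D"),
    (74, "E"), (94, "A"), (74, "E"), (75, "C"), (76, "D"),
    (75, "E"), (79, "A"), (65, "D"), (87, "E"), (98, "E"),
    (60, "D"), (80, "A"), (50, "C"), (80, "B"), (89, "C"),
    (90, "E"), (77, "E"), (63, "A"), (57, "B"), (70, "B"),
    (95, "A"), (72, "C"), (79, "E"), (83, "B"), (97, "E"),
]

ranked = sorted(responses, key=lambda r: r[0], reverse=True)  # highest to lowest
third = len(ranked) // 3          # 10 papers per group in this example
upper_group = ranked[:third]      # highest-scoring third (U)
lower_group = ranked[-third:]     # lowest-scoring third (L); middle third set aside
```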
3. Count the number of students in the upper and lower groups, respectively, who chose each option.

4. Record the frequencies from Step 3. The frequencies may also be recorded on the item card or on a separate sheet. These are as follows:

Item 7
Options      A    B    C    D    E*
Upper (10)   2    1    1    2    4
Lower (10)   1    2    2    2    3
*Correct answer
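Steps 3 and 4 amount to a simple tally. A minimal sketch, reusing the upper_group and lower_group names assumed in the previous sketch:

```python
from collections import Counter

# Tally how many students in each group chose each option for item 7.
upper_tally = Counter(answer for _, answer in upper_group)
lower_tally = Counter(answer for _, answer in lower_group)

for option in "ABCDE":
    print(option, upper_tally.get(option, 0), lower_tally.get(option, 0))
# A 2 1, B 1 2, C 1 2, D 2 2, E 4 3 — matching the frequency table above
```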
5. Estimate the index of difficulty.

Index of Difficulty = (∑x / N) × 100
                    = (7 / 20) × 100
                    = 0.35 × 100
                    = 35%

where ∑x is the number of students in the upper and lower groups who answered the item correctly (4 + 3 = 7) and N is the total number of students in both groups (20).
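Continuing the same illustrative sketch, the index of difficulty for item 7 (correct answer E) follows directly from the tallies:

```python
# Step 5: difficulty index = proportion of upper + lower group answering correctly.
KEY = "E"                                        # correct answer for item 7
correct = upper_tally[KEY] + lower_tally[KEY]    # 4 + 3 = 7
N = len(upper_group) + len(lower_group)          # 20
difficulty = correct / N                         # 0.35, i.e. 35%
```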


Index of Discrimination

• The difference between the proportion of high-performing students who got the item right and the proportion of low-performing students who got the item right.

• The high- and low-performing students are usually defined as the upper 27% and the lower 27% of students based on the total examination score.
Index of Discrimination

• The degree to which the item discriminates between the high-performing group and the low-performing group in relation to scores on the total test. Indices of discrimination are classified as:
  – Positive discrimination: the proportion of students who got the item right in the upper-performing group is greater than the proportion in the lower-performing group.
  – Negative discrimination: the proportion of students who got the item right in the lower-performing group is greater than that in the upper-performing group.
  – Zero discrimination: the proportions of students who got the item right in the upper- and lower-performing groups are equal.
Discrimination Index   Item Evaluation
0.41-1.00              Very Discriminating
0.31-0.40              Discriminating
0.21-0.30              Moderately Discriminating
0.11-0.20              Not Discriminating
Below 0.10             Questionable Item
Index Range     Interpretation                               Action
-1.0 – -0.05    Can discriminate but item is questionable    Discard
-0.55 – 0.45    Non-discriminating                           Revise
0.46 – 1.0      Discriminating item                          Include
6. Estimate the item's discriminating power.

Index of Discrimination = (U – L) / n
                        = (4 – 3) / 10
                        = 1 / 10
                        = 0.10

where U and L are the numbers of students in the upper and lower groups who answered the item correctly, and n is the number of students in each group.
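In the same illustrative sketch, step 6 is a one-line computation from the tallies:

```python
# Step 6: discrimination index = difference between the proportions of the
# upper and lower groups answering item 7 correctly.
n_per_group = len(upper_group)                                        # 10
discrimination = (upper_tally[KEY] - lower_tally[KEY]) / n_per_group  # (4 - 3) / 10 = 0.10
```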
Distribution of Numbered Items According to Difficulty and Discrimination Indices
Measures of Attractiveness

An incorrect option is said to be an effective distracter if more students in the lower group choose it than students in the upper group.

Option A – poor distracter
Option B – good
Option C – good
Option D – fair
Option E – good
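A short sketch of this distracter check, again reusing the tally names assumed earlier (E is the key in this example, so only A-D are treated as distracters):

```python
# Label each incorrect option: "good" if more lower-group than upper-group
# students chose it, "poor" if fewer, "fair" if the counts are equal.
for option in "ABCD":
    u = upper_tally.get(option, 0)
    l = lower_tally.get(option, 0)
    if l > u:
        verdict = "good distracter"
    elif l < u:
        verdict = "poor distracter"
    else:
        verdict = "fair distracter"
    print(option, verdict)
# A poor, B good, C good, D fair — matching the evaluation above
```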
Activity

This is confidential; students' names should be coded.
Ask one of your professors for one set of Midterm or Final Examination papers.
Choose 30 students (names coded).
Analyze at least 5 questions of the Midterm or Final Examination.

Format
 Name of Faculty
 Course
 Program-Year
 Summary Table for Items Analyzed
 Summary Table for Item Distribution
Validity of a Test

 Refers to the appropriateness of the score-based inferences or decisions made from students' test results; the extent to which a test measures what it is supposed to measure.

 Types of Validity:
1. Content Validity – refers to the relationship between a test and the instructional objectives; establishes that the test content measures what it is supposed to measure.
2. Criterion-related Validity – refers to a measure of the extent to which the scores from a test relate to a theoretically similar measure. It takes two forms:
    Predictive Validity – a measure of the extent to which a person's current test results can be used to estimate accurately that person's performance on some later criterion.
    Concurrent Validity – requires correlating the predictor or concurrent measure with the criterion measure.
3. Construct Validity – refers to a measure of the extent to which a test measures a hypothetical and unobservable variable or quality, such as intelligence.
Reliability of a Test

Refers to the consistency of measurement; that is, how consistent test results or other assessment results are from one measurement to another.

Factors Affecting the Reliability of a Test
1. Length of the test
2. Moderate item difficulty
3. Objective scoring
4. Heterogeneity of the student group
5. Limited time
Four Methods of Establishing Reliability
1. Test-retest method – administer the same test twice to the same group.
2. Equivalent-form method – administer two different but equivalent forms of the test to the same group of students in close succession.
3. Split-half method – administer the test once and score two equivalent halves, usually the odd- and even-numbered items separately, so that each student has two sets of scores; the half-test scores are then correlated and stepped up with the Spearman-Brown (SB) formula.
4. Kuder-Richardson formula – administer the test once; items are scored dichotomously.
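A minimal sketch of the last two methods, under stated assumptions: the slide does not say which Kuder-Richardson formula is meant, so KR-20 is used here as an illustration, and the Spearman-Brown step-up is shown for the split-half method.

```python
import statistics

def spearman_brown(r_half: float) -> float:
    """Step up the correlation between two half-tests to an estimate of
    full-test reliability (split-half method)."""
    return 2 * r_half / (1 + r_half)

def kr20(item_scores):
    """Kuder-Richardson Formula 20 for dichotomously scored (0/1) items.
    item_scores: one inner list of 0/1 item scores per student.
    Assumes the total scores have nonzero variance."""
    n_students = len(item_scores)
    k = len(item_scores[0])                               # number of items
    totals = [sum(student) for student in item_scores]    # total score per student
    var_total = statistics.pvariance(totals)              # variance of total scores
    pq = 0.0
    for i in range(k):
        p = sum(student[i] for student in item_scores) / n_students
        pq += p * (1 - p)                                 # item variance p*q
    return (k / (k - 1)) * (1 - pq / var_total)
```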
Thank You