Properties of Assessment Methods

The quality of the assessment instruments and methods used in education is very important, since the evaluation and judgement that the teacher makes about the student are based on the information obtained using these instruments.

 VALIDITY
 RELIABILITY
 FAIRNESS
 PRACTICALITY & EFFICIENCY
 ETHICS
VALIDITY
 The instrument's ability to measure what it purports to measure.
 The appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.
TYPES OF VALIDITY
 CONTENT VALIDITY
 FACE VALIDITY
 CRITERION-RELATED VALIDITY
 CONSTRUCT VALIDITY
FACE VALIDITY
 Refers to the outward appearance of the test.
 It is the lowest form of test validity.

What do students think of the test?
CONTENT VALIDITY
 Refers to the content and format of the instrument.
 Students' adequate experience
 Coverage of sufficient material
 Reflects the degree of emphasis

Am I testing what I taught?
CRITERION-RELATED VALIDITY
 Includes predictive and concurrent validity.
 The test is judged against a specific criterion.
 It can also be measured by correlating the test with a known valid test.

How does this compare with an existing valid test?
CONSTRUCT VALIDITY
 The test is loaded on a "construct" or factor.
 A group of variables that correlate highly with each other forms a factor.

Am I testing in the way I taught?
RELIABILITY
 Reliability is the degree to which a test consistently measures whatever it measures.
 Something reliable is something that works well and that you can trust.
 It is a term synonymous with dependability and stability.

Questions:
 Can we trust the results of the test?
 Would we get the same results if the test were taken again and scored by a different person?

Tests can be made more reliable by making them more objective (controlled items).
TYPES OF RELIABILITY
 EQUIVALENCY RELIABILITY
 STABILITY RELIABILITY
 INTERNAL CONSISTENCY RELIABILITY
 INTER-RATER RELIABILITY
EQUIVALENCY RELIABILITY
 Also called equivalent-forms or alternative-forms reliability.
 The extent to which two forms of a test measure identical concepts at an identical level of difficulty.
 Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.
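The "relating two sets of test scores" step is typically a Pearson correlation between the two forms. A minimal Python sketch, assuming two made-up score lists from parallel forms A and B (the data and names are illustrative):

```python
# Pearson correlation between two sets of scores from parallel test forms.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-variation
    sx = sum((a - mx) ** 2 for a in x) ** 0.5              # spread of form A
    sy = sum((b - my) ** 2 for b in y) ** 0.5              # spread of form B
    return cov / (sx * sy)

form_a = [10, 12, 14, 16, 18]   # hypothetical scores on form A
form_b = [11, 11, 15, 15, 19]   # same students' scores on form B
print(round(pearson_r(form_a, form_b), 3))  # → 0.945
```

A coefficient near 1 indicates the two forms rank students almost identically, i.e. high equivalency reliability.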
STABILITY RELIABILITY
 Sometimes called test-retest reliability.
 The agreement of measuring instruments over time.
 Stability reliability is determined by relating two sets of scores from the same test, administered at two different times, to highlight the degree of relationship or association.
INTERNAL CONSISTENCY RELIABILITY
 Used to assess the consistency of results across items within a test (consistency of an individual's performance from item to item and item homogeneity).
 Determines the degree to which all items measure a common characteristic of the person.

Ways of assessing internal consistency:
• Kuder-Richardson (KR20)
• Split-half Reliability
KR20
KR20 = [n/(n - 1)] × [1 - (Σpq)/Var]

KR20 = estimated reliability of the full-length test
n = number of items
Var = variance of the whole test (standard deviation squared)
Σpq = sum of the product pq over all n items
p = proportion of people passing the item
q = proportion of people failing the item (or 1 - p)
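The KR20 formula above can be computed directly from a matrix of dichotomous item scores. A minimal Python sketch, assuming a small made-up matrix where each row is one student's 0/1 item scores:

```python
# KR20 = [n/(n-1)] * [1 - (sum of pq)/Var], for dichotomous (0/1) items.
def kr20(scores):
    """scores: list of per-student lists of 0/1 item scores."""
    n = len(scores[0])                      # number of items
    totals = [sum(row) for row in scores]   # each student's total score
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)  # population variance
    sum_pq = 0.0
    for i in range(n):
        p = sum(row[i] for row in scores) / len(scores)  # proportion passing item i
        sum_pq += p * (1 - p)                            # pq for item i
    return (n / (n - 1)) * (1 - sum_pq / var)

scores = [[1, 1, 1, 0],
          [1, 1, 0, 0],
          [1, 0, 0, 0],
          [1, 1, 1, 1]]
print(round(kr20(scores), 3))  # → 0.667
```

Note the sketch uses the population variance (dividing by the number of students); some textbooks use the sample variance, which shifts the result slightly.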
KR21
 Used for dichotomously scored items that are all of about the same difficulty.

KR21 = [n/(n - 1)] × [1 - (M × (n - M)) / (n × Var)]

KR21 = estimated reliability of the full-length test
n = number of items
Var = variance of the whole test (standard deviation squared)
M = mean score on the test
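Because KR21 needs only test-level statistics (no per-item data), it is simpler to compute than KR20. A minimal sketch of the formula above, using the same illustrative values of n, M and Var:

```python
# KR21 = [n/(n-1)] * [1 - M(n - M)/(n * Var)]; items assumed equally difficult.
def kr21(n, mean, var):
    """n: number of items, mean: mean total score, var: variance of total scores."""
    return (n / (n - 1)) * (1 - (mean * (n - mean)) / (n * var))

print(round(kr21(n=4, mean=2.5, var=1.25), 3))  # → 0.333
```

When item difficulties actually vary, KR21 typically underestimates the KR20 value, so it serves as a quick lower-bound check.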
Split-half
rkk = k(r11) / [1 + (k - 1)r11]

rkk = reliability of a test k times as long as the original test
r11 = reliability of the original test
k = factor by which the length of the test is changed
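The formula above is the Spearman-Brown correction. A minimal Python sketch, assuming an illustrative half-test correlation of 0.6:

```python
# Spearman-Brown: rkk = k * r11 / (1 + (k - 1) * r11)
def spearman_brown(r11, k):
    """Predicted reliability of a test k times as long as the original."""
    return k * r11 / (1 + (k - 1) * r11)

# Typical split-half use: correlate the two halves of one test (r11),
# then correct up to full length with k = 2.
print(round(spearman_brown(0.6, 2), 3))  # → 0.75
```

The correction is needed because the raw half-to-half correlation estimates the reliability of a test only half as long as the one actually administered.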
FAIRNESS
 The concept that assessment should be 'fair' covers a number of aspects:
• Student knowledge of the learning targets of assessment
• Opportunity to learn
• Prerequisite knowledge and skills
• Avoiding teacher stereotypes
• Avoiding bias in assessment tasks and procedures
PRACTICALITY AND EFFICIENCY
 Something efficient is able to accomplish a purpose and functions effectively.
 Practicality is defined as a concern with actual use rather than theoretical possibilities.

Will the test take longer to design than to apply?
Will the test be easy to mark?

An assessment procedure is practical and efficient when:
• The teacher is familiar with it
• It is implementable
• It doesn't require too much time
ETHICS
Related concepts: standards, right and wrong, accepted rules, conduct, morality.
ETHICS IN ASSESSMENT
 Refers to questions of right and wrong.
 Webster defines ethical (behavior) as "conforming to the standards of conduct of a given profession or group".

Ethical issues that may be raised:
• Possible harm to the participants
• Confidentiality
• Presence of concealment or deception
• Temptation to assist students
END