
CHAPTER 2
Essentials of Test Administration, Scoring, and Interpretation
Psychological Assessment II
Adventist University of the Philippines, Department of Psychology

Instructor Information:
Rhalf Jayson F. Guanco, PhD, RPsy, RPm
[email protected]
Test Administration
The process by which a test-taker completes a test.
In most testing situations, the administrator's primary job is to ensure standardization (i.e., the establishment of similar test procedures).
Test Administration Standardization
Checking the physical setting for appropriateness (e.g., adequate lighting, temperature, tables)
Preparing the materials (e.g., pencils, papers, stimulus book)
Ensuring that participants know what they are supposed to do.
Monitoring the test administration.
Following any standardized instructions carefully.
Standards in Test Administration
American Educational Research Association (AERA)
American Psychological Association (APA)
National Council on Measurement in Education (NCME)
Standards in Test Administration
Standard 6.1: Test administrators should follow carefully the standardized procedures for administration and scoring specified by the test developer and any instructions from the test user.
Standard 6.2: When formal procedures have been established for requesting and receiving accommodations, test takers should be informed of these procedures in advance of testing.
Standards in Test Administration
Standard 6.3: Changes or disruptions to standardized test administration procedures or scoring should be documented and reported to the test user.
Standard 6.4: The testing environment should furnish reasonable comfort with minimal distractions to avoid construct-irrelevant variance.
Standards in Test Administration
Standard 6.5: Test takers should be provided appropriate instructions, practice, and other support necessary to reduce construct-irrelevant variance.
Standard 6.6: Reasonable efforts should be made to ensure the integrity of test scores by eliminating opportunities for test takers to attain scores by fraudulent or deceptive means.
Standard 6.7: Test users have the responsibility of protecting the security of test materials at all times.
Test Administration
Two important issues regarding test administration include:
– (a) the number and duration of appointments
– (b) the order of test administration
Number and Duration of Appointments
– The majority of test administrations should be completed within a 2-hour appointment.
– Exceptions: a complete neuropsychological battery and testing for learning disorders
Order of Test Administration
Guidelines meant to maximize test reliability and validity, avoid fatigue, and increase motivation:
– Highest Priority: Performance-based measures (cognitive, neuropsychological, and achievement tests)
– Moderate or Intermediate Priority: Examiner-administered personality tests (TAT, DAP, MMPI, 16PF)
– Lowest Priority: Self-report and observer-report instruments that are symptom- or behaviorally focused (CBCL, ABCL, Vineland)
Factors Affecting Test Administration
When we talk about reliability, we are interested in random sources of error.
– Observed Score = True Score + Error
When tests are administered, however, there are other sources of error aside from random error.
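The Observed Score = True Score + Error decomposition can be made concrete with a small simulation. The Python sketch below uses invented numbers (a true score of 50 and an error SD of 5); it simply shows that purely random error scatters observed scores around the true score and averages out over repeated measurements, whereas the administration factors listed on the next slide can shift scores systematically in one direction.

```python
import random

# Minimal sketch of the classical test theory model: Observed = True + Error.
# The true score (50) and the error spread (SD = 5) are illustrative values,
# not parameters from any particular test.
random.seed(0)

true_score = 50
observed_scores = [true_score + random.gauss(0, 5) for _ in range(1000)]

mean_observed = sum(observed_scores) / len(observed_scores)
print(f"True score: {true_score}")
print(f"Mean of 1000 observed scores: {mean_observed:.2f}")

# Random error averages out toward zero across repeated measurements, so the
# mean observed score approaches the true score. Systematic administration
# errors (poor rapport, nonstandard instructions) would instead shift every
# observed score in the same direction and would not average out.
```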
Factors Affecting Test Administration
Rapport
Ethnicity
Language
Training of Test Administrators
Expectancy Effects
Use of Reinforcers
Computer-Assisted Testing
Subject Variables
Rapport
Children scored lower on an IQ test when the administrator made disapproving comments (“I thought you could do better”) than when administrators made neutral or positive comments (Witmer, Bernstein, & Dunham, 1971).
Children unfamiliar with the administrator did significantly worse on a reading test compared to children familiar with the administrator (DeRosa & Patalano, 1991).
Ethnicity
Should children of one ethnicity be tested only by test administrators of the same ethnicity?
The majority of studies have found nonsignificant effects for cross-ethnic administration of most intelligence tests.
The only significant findings have occurred when paraprofessionals administered the tests.
Why no differences? Standardized procedures.
Language
How valid are tests given in English to bilingual or Limited-English Proficient (LEP) individuals?
What about translating tests?
Language
Standard of practice: administer the test in the test-taker's most proficient language.
BUT – what about the normative sample?
How comparable are the scores from these individuals?
Can IRT help?
Interpreters: another potential source of bias
Training
Administration and scoring errors are a large source of bias.
– Typical graduate training: 2-4 administrations of a test (in class)
» importance of fieldwork placements
» the majority of testing practice is obtained in fieldwork placements
Error rates on WAIS administrations decrease after 10 administrations (!)
Expectancy Effects
Also known as Rosenthal effects
– Robert Rosenthal, Harvard University
– Subjects perform in a manner consistent with the experimenter's (test administrator's) expectations
» works with humans, works with rats
– Effects are not limited to experiments; they also occur on standardized tests
» Students asked to score ambiguous responses will give more points to people they like or think are bright.
Expectancy Effects
Expectancy and test administration
– Rosenthal: expectancy effects are triggered by nonverbal cues, and the experimenter/administrator may not even be aware of them
– Expectancy effects have a small and varied influence on test outcomes; careful study is required
Use of Reinforcement
Studies of reinforcement show no clear and consistent positive or negative effect on test performance.
General guidelines:
– Check the testing manual first
– Generally OK to reward EFFORT, not answers.
Computer-Assisted Test Administration
Advantages
– The obvious connection to Item Response Theory and the ability to tailor tests to a person's ability
– Highly standardized
– Precision of timing
– Lessened dependence on human testers
– Pacing (no need to rush respondents)
– Control of bias (from the test administrator, etc.)
Computer-Assisted Test Administration
Studies comparing computer-assisted and paper-and-pencil versions of tests have shown no large differences between the two formats.
Computer versions can be more accurate and take less time (e.g., through IRT and computer-adaptive testing, CAT), as the sketch below illustrates.
Some people enjoy the computer format and even prefer it.
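As a rough illustration of how IRT and CAT fit together, here is a minimal adaptive-testing sketch in Python. The one-parameter (Rasch) model, the evenly spaced item bank, and the grid-search ability estimate are deliberate teaching simplifications assumed for this sketch, not the algorithm of any particular published test.

```python
import math
import random

def p_correct(theta, difficulty):
    """Rasch (1PL) model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_theta(responses):
    """Crude maximum-likelihood ability estimate via grid search."""
    grid = [x / 10.0 for x in range(-40, 41)]  # candidate abilities, -4.0 to 4.0
    def log_lik(theta):
        ll = 0.0
        for difficulty, correct in responses:
            p = p_correct(theta, difficulty)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(grid, key=log_lik)

random.seed(1)
item_bank = [d / 4.0 for d in range(-12, 13)]  # item difficulties, -3.0 to 3.0
true_ability = 0.8                             # simulated examinee (unknown in practice)
responses, theta_hat = [], 0.0

for _ in range(10):                            # administer 10 items adaptively
    # Pick the unused item whose difficulty is closest to the current estimate;
    # under the Rasch model that item is the most informative one to give next.
    used = {d for d, _ in responses}
    item = min((d for d in item_bank if d not in used),
               key=lambda d: abs(d - theta_hat))
    correct = random.random() < p_correct(true_ability, item)
    responses.append((item, correct))
    theta_hat = estimate_theta(responses)      # re-estimate ability after each item

print(f"True ability: {true_ability}, estimate after 10 items: {theta_hat:.1f}")
```

In most runs the estimate moves close to the simulated ability within a handful of items, which is the efficiency gain claimed for CAT above.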
Computer-Assisted Test Administration
The big concern with computer-aided testing is that computer-generated reports may land in the wrong (inexperienced) hands and be misinterpreted.
Subject Variables
The current psychological state of the subject can also be a source of error when administering a test:
– Illness
– Insomnia
– Test anxiety
– Drugs (prescription and recreational)
– Hormones (e.g., menstruation) – perceptual-motor coordination varies with the cycle (better away from menses; effects reverse for other tasks)
Scoring Tests
Coding versus Scoring
– Coding is the process of applying a coding system to the responses of an individual being assessed (check the manual for the coding scheme).
» WAIS-IV Vocabulary subtest: each individual response merits 0, 1, or 2 points.
– Scoring is the computation of scores based on cumulative and composite numbers derived from coded responses.
» Add up all the points of the Vocabulary subtest to arrive at the subtest raw score, then convert this raw score into a standard score (see the sketch below).
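To make the coding-versus-scoring distinction concrete, here is a short Python sketch. The item codes and the normative mean and SD are invented for illustration, and the linear z-score conversion stands in for the age-based lookup tables a real manual (e.g., the WAIS-IV manual) would actually use.

```python
# Coding: each response has already been judged against the manual's criteria
# and assigned 0, 1, or 2 points (hypothetical values).
coded_responses = [2, 2, 1, 2, 0, 1, 2, 1]

# Scoring step 1: cumulate the coded responses into a raw score.
raw_score = sum(coded_responses)          # 11

# Scoring step 2: convert the raw score to a standard score.
# Hypothetical normative values (raw-score mean and SD for the reference group);
# real tests use normative tables rather than this linear approximation.
norm_mean, norm_sd = 9.0, 3.0
z = (raw_score - norm_mean) / norm_sd
scaled_score = round(10 + 3 * z)          # Wechsler-style scaled-score metric: mean 10, SD 3

print(f"Raw score: {raw_score}, scaled score: {scaled_score}")
```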
Standards in Test Scoring
American Educational Research Association (AERA)
American Psychological Association (APA)
National Council on Measurement in Education (NCME)
Standards in Test Scoring
Standard 6.8: Those responsible for test scoring
should establish scoring protocols. Test scoring
that involves human judgment should include
rubrics, procedures, and criteria for scoring.
When scoring of complex responses is done by
computer, the accuracy of the algorithm and
processes should be documented.
Standards in Test Scoring
Standard 6.9: Those responsible for test scoring
should establish and document quality control
processes and criteria. Adequate training
should be provided. The quality of scoring
should be monitored and documented. Any
systematic source of scoring errors should be
documented and corrected.
Standards in Test Reporting
and Interpretation
American Educational Research Association (AERA)
American Psychological Association (APA)
National Council on Measurement in Education (NCME)
Considerations in Interpreting Tests
Interpreting tests is one of the most important skills of a practitioner.
– The assessor interprets the tests
– Confidence
– Adequately trained and competent
– Supervision
Considerations in Interpreting Tests
Use any additional information available to you in the interpretation of each test.
Always revisit cultural considerations (understand and integrate how they may have affected the results).
Monitor your own biases and preconceptions.
Check your attitudes, motivations, and personality characteristics that may affect the way you interpret tests.
Standards in Test Reporting and
Interpretation
Standard 6.10: When test score information is
released, those responsible for testing programs
should provide interpretations appropriate to the
audience. The interpretations should describe in
simple language what the test covers, what scores
represent, the precision/reliability of the scores, and
how scores are intended to be used.
Standards in Test Reporting and
Interpretation
Standard 6.11: When automatically generated
interpretations of test response protocols or test
performance are reported, the sources, rationale,
and empirical basis for these interpretations
should be available, and their limitations should
be described.
Standards in Test Reporting and
Interpretation
Standard 6.12: When group-level information is
obtained by aggregating the results of partial tests
taken by individuals, evidence of validity and
reliability/precision should be reported for the
level of aggregation at which results are reported.
Scores should not be reported for individuals
without appropriate evidence to support the
interpretations for intended uses.
Standards in Test Reporting and
Interpretation
Standard 6.13: When a material error is found in test
scores or other important information issued by a testing
organization or other institution, this information and a
corrected score report should be distributed as soon as
practicable to all known recipients who might otherwise
use the erroneous scores as a basis for decision making.
The corrected report should be labeled as such. What was
done to correct the reports should be documented. The
reason for the corrected score report should be made clear
to the recipients of the report.
Standards in Test Reporting and
Interpretation
Standard 6.14: Organizations that maintain individually
identifiable test score information should develop a clear
set of policy guidelines on the duration of retention of an
individual’s records and on the availability and use over
time of such data for research or other purposes (10-15
years). The policy should be documented and available to
the test taker. Test users should maintain appropriate
data security, which should include administrative,
technical, and physical protections.
Standards in Test Reporting and
Interpretation
Standard 6.15: When individual test data are
retained, both the test protocol and any written
report should also be preserved in some form.
Standards in Test Reporting and
Interpretation
Standard 6.16: Transmission of individually
identified test scores to authorized individuals or
institutions should be done in a manner that
protects the confidential nature of the scores and
pertinent ancillary information.
Exercise 1
Sarah, a college student, has recently taken a
comprehensive personality test that measures the five
major personality traits. She is curious about what
these scores mean for her and how they might
influence her college experience. However, Sarah has
also been dealing with symptoms of anxiety, which
she has experienced periodically over the past year.
– What is the reason for referral?
– What are the assessment procedures for this case?