SAS#11-ACC 116
Productivity Tip: Tackle the hardest task first. Start your day off
right by “eating the frog.” Complete your hardest task first thing in
the morning and you’ll set the tone for a productive day.
A. LESSON PREVIEW/REVIEW
INTRODUCTION (2 mins)
Greetings! It’s a new day for a new module ☺. Before tackling the main lesson, read the
following preview about measuring instruments.
The following are your targets for today; keep them in mind as we go through today’s lesson.
1. I can compare validity, reliability, and usability; and
2. I can identify types of validity, methods in testing reliability, and factors
that determine usability.
What is validity?
What is reliability?
What is usability?
B. MAIN LESSON
Activity 2: Content Notes (13 minutes)
It is now time to collect information to satisfy today’s targets. You may underline or highlight
words or phrases that you think are the main focus of the lesson.
1. Validity means the degree to which a test or measuring instrument measures what it intends
to measure. The validity of a measuring instrument has to do with its soundness and
effectiveness: what the test or questionnaire measures, and how well it can be applied.
No test or research instrument can be said to have “high” or “low” validity in the
abstract. Its validity must be determined with reference to the particular use for
which the test is being considered. The validity of a test must always be considered in
relation to the purpose it serves; validity is always specific to some
definite situation. Likewise, a valid test is always reliable, although a reliable test is not always valid.
Types of Validity. Validity is classified under four types, namely, content validity, concurrent
validity, predictive validity, and construct validity.
a. Content Validity means the extent to which the content or topic of the test is truly
representative of the content of the course. It involves, essentially, the systematic
examination of the test content to determine whether it covers a representative sample
of the behavior domain to be measured. It is very important that the behavior domain to
be tested must be systematically analyzed to make certain that all major aspects are
covered by the test items in correct proportions. The domain under consideration should
be fully described in advance rather than defined after the test has been prepared.
Content validity is particularly appropriate for criterion-referenced measures. It is also
applicable to certain occupational tests designed to select and classify employees. But
content validity is inappropriate for aptitude and personality tests. These tests are not
based on a specified course of instruction from which the test content can be drawn, and
they bear less intrinsic resemblance to the behavior domain.
b. Concurrent Validity is the degree to which the test agrees or correlates with a criterion
set up as an acceptable measure. The criterion is always available at the time of testing.
It is applicable to tests employed for the diagnosis of existing status rather than for the
prediction of future outcome.
c. Predictive Validity is the degree to which the test predicts a criterion that becomes
available only at some future time. Unlike concurrent validity, the criterion measure is
obtained after an interval rather than at the time of testing. It is applicable to tests
employed to forecast future outcomes, such as entrance examinations used to predict later
academic performance.
d. Construct Validity. The construct validity of a test is the extent to which the test
measures a theoretical construct or trait. This involves such tests as those of
understanding, appreciation and interpretation of data. Examples are intelligence and
mechanical aptitude tests.
2. Reliability means the extent to which a “test is dependable, self-consistent and stable”
(Merriam). In other words, the test agrees with itself. It is concerned with the
consistency of responses from moment to moment: if a person takes the same test
twice, the test should yield the same results. However, a reliable test may
not always be valid.
For instance, a research student receives a grade of 1.25 in Methods of
Research. When asked by his friends, he always says his grade is 1.5. In a statistical
sense, the story is reliable because it is consistent, but it is not valid because it
lacks veracity or truthfulness. Hence, it is reliable but not valid. Likewise, a
reliable test or measuring instrument is not always valid.
Methods in Testing Reliability. There are four methods of testing the reliability of a good
research instrument: the test-retest method, the parallel-forms
method, the split-half method, and the internal-consistency method.
a. Test-retest method. In the test-retest method, the same test is administered twice to the
same group of students, and the correlation coefficient between the two sets of scores is determined.
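As an illustrative sketch only (the scores below are invented, not from the module), the test-retest coefficient is simply the Pearson product-moment correlation between the two administrations:

```python
# Hypothetical example: test-retest reliability as the Pearson
# correlation between two administrations of the same test.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

first_admin = [78, 85, 62, 90, 74, 88]    # first testing (invented scores)
second_admin = [80, 83, 65, 92, 70, 86]   # retest of the same students

# A coefficient near +1 indicates high test-retest reliability.
print(round(pearson_r(first_admin, second_admin), 3))
```

In practice the same coefficient can be computed with `statistics.correlation` (Python 3.10+) or `scipy.stats.pearsonr`.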
b. Parallel-forms method. In the parallel-forms method, two equivalent forms of the test are
administered to the same group. The tests must be constructed so that the content, type of
test item, difficulty, and instructions for administration are similar but not identical.
c. Split-half method. In the split-half method, the test is administered once, but the test
items are divided into two halves; the common procedure is to divide the test into odd and
even items. The two halves of the test must be similar but not identical in content, number
of items, difficulty, means, and standard deviations. The correlation between the two halves
is then adjusted to estimate the reliability of the whole test.
d. Internal-consistency method. This method likewise requires only a single administration of
the test; the consistency of responses across all items is estimated, commonly with the
Kuder-Richardson formulas for dichotomously scored items.
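The split-half procedure described above can be sketched as follows; the item scores are invented for illustration, and the half-test correlation is stepped up to full length with the Spearman-Brown formula, r_full = 2r / (1 + r):

```python
# Hypothetical sketch of the split-half method with invented data.
# Each inner list holds one student's item scores (1 = correct, 0 = wrong).

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Correlate odd-item vs even-item totals, then apply Spearman-Brown."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

scores = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]
print(round(split_half_reliability(scores), 3))
```

The Spearman-Brown step is needed because each half is only half as long as the full test, and shorter tests are generally less reliable.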
3. Usability means the degree to which the research instrument can be satisfactorily used
without undue expenditure of time, money, and effort. Its determining factors include the
following.
b. Ease of Scoring. Ease of scoring a research instrument depends upon the following aspects:
the test is constructed in the objective type; answer keys are adequately prepared; and
scoring directions are fully understood. Moreover, scoring is easier when all subjects are
instructed to write their responses in one column, in numerical form or in words, with
separate answer sheets for their responses.
c. Ease of Interpretation and Application. Results of tests are easy to interpret and apply if
tables of norms are provided. All scores should be given meaning from the tables of norms
without the necessity of computation. As a rule, norms should be based on both age and year
level, as in the case of school achievement tests. It is also desirable for achievement
tests to be provided with separate norms for rural and urban subjects, as well as for
learners of various degrees of mental ability.
d. Low Cost. It is more practical if the test is low-cost, material-wise. It is also more
economical if the research instrument can be reused by future researchers.
e. Proper Mechanical Make-up. A good research instrument should be printed clearly in a type
size appropriate to the grade or year level for which the instrument is intended. Careful
attention should be given to the quality of pictures and illustrations, especially for
instruments intended for the lower-grade subjects of the study.
I. Identification. Identify the word/s that corresponds to each statement that follows. Write your
answer on the space provided before each number.
4-7. Classification of Validity
2. Why do you think researchers need to consider qualities of a good research instrument?
Direction. True or False. Write true if the statement is true and false if not. Write your
answer on the space provided before each statement.
C. LESSON WRAP-UP
Activity 6: Thinking about Learning (5 mins)
A. Work Tracker
You are done with this session! Let’s track your progress. Shade the session number you just
completed.
FAQs
Is/Are there any other qualities of a good research instrument? If yes, define each.
Yes. These are justness, morality, and honesty, as stated in Calmorin, L. (2016). Research and
Thesis Writing (with Statistics and Computer Application). Rex Book Store, Inc., pp. 111 and 126.
Justness is the degree to which the teacher is fair in evaluating the grades of the learners. The learners
must be informed of the criteria on which they are evaluated.
Morality is the degree of secrecy of the grades of the learners. Morality or ethics means that test results
or grades must be confidential to avoid embarrassing slow learners.
Honesty. In honesty, the researcher must be honest in constructing the research instrument, writing the
research paper, thesis, dissertation, and book.
Job well done! You’ve finished today’s activity. The activities will be assessed with your teacher
and/or facilitator during your class sessions.