Chapter 6 Instruments in Research

The document discusses different types of instruments used in research including researcher-made instruments like questionnaires and personal data sheets. It provides tips for developing good instruments and validating instruments through establishing validity and reliability. Researcher-made instruments are popular but must be tested before use to ensure they are appropriate and yield useful data.

INSTRUMENTS IN RESEARCH
CHAPTER 6
Chapter Objectives
After this chapter, you should be able to:
- identify the qualities of good instruments,
- identify researcher-made instruments, and
- understand and appreciate validating the instrument.
Content Outline
- Introduction
- Good Instruments
- Researcher-Made Instruments
- Sample Personal Data Sheet
- Sample Questionnaire
- Validating the Instrument
Introduction
Many researchers have voiced frustration because they did not know how to
gather data for their research projects. Instruments are materials,
concrete or verbal or non-verbal communications, used to collect data
for a research study. Each of these instruments will be described in this
chapter. It cannot be denied that a "good" research study depends
on the kind of instruments used and on how these are administered, scored,
treated, and interpreted. Without instruments, the researcher would be
like a man going to a well and returning with an empty bucket.
Good Instruments
"Good" instruments are always sought in research work. Instruments
are implements or apparatuses necessary to facilitate and validate an
assessment being made. An instrument is termed "good" if it suits its
purpose, if the data gathered can be analyzed or treated, and if it
finally sheds light on the research problems.
To cite an example in business research, let us take the assessment of consumer preferences of the kind of bulb to
use in lighting their homes. Researchers may use an interview guide or questionnaire as the instrument used for
collecting data. “Good” instruments are determined by answers to the following questions:

1.Is the instrument appropriate for the study?


2.Was there a trial run of the instruments to determine the difficulty index and
validity index of each item included if this is researcher-made?
3.Are the items in the instrument relevant to the problem on hand?
4.How long does it take to finish answering the instrument?
5.Are questions clear?
6.Has the instrument stood the test of time? How popular is it?
7.What are the critique of its use? Were these considered?
8.Will responses yield to quantification and descriptive qualifications?
9.Is the instrument easy to administer?
10.Is scoring facilitated?
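Question 2 above mentions the difficulty index obtained from a trial run. As a minimal sketch (the respondent data and function name below are invented for illustration, not taken from the text), the difficulty index of an item is conventionally the proportion of respondents who answered it correctly:

```python
# Hypothetical trial-run data: each row is one respondent's answers
# to a 4-item researcher-made test (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
]

def difficulty_index(responses, item):
    """Proportion of respondents who answered the item correctly."""
    scores = [row[item] for row in responses]
    return sum(scores) / len(scores)

for i in range(len(responses[0])):
    print(f"Item {i + 1}: difficulty = {difficulty_index(responses, i):.2f}")
```

Items answered correctly by nearly everyone or by almost no one shed little light on differences among respondents, so such items are usually revised or dropped before the final form of the instrument.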
Researcher-Made Instruments
This type of instrument is very popular in research. Scrutinizing
research studies will reveal that researcher-made instruments figure
prominently among the instruments used in research. One, two, or
more of these researcher-made materials are combined with other
standardized instruments in the assessment of individuals, operations,
situations, sales, or products, as the case may be.
1. Personal Information Blank. Information about the respondent is
important to the researcher. Personal information blanks are sources of
descriptions of the respondent. Such information is usually included
in the methodology part of the research report, which deals with
descriptions of respondents that call for data usually gathered through
the personal information sheet. The personal information form is also
the source of the variables needed in the research project. The
following is a sample.
2. Questionnaire. A questionnaire is defined as a form for securing
responses to certain questions. It is distributed through the mail or is
filled out by the respondent under the supervision of the investigator.
The answers to the questions could be factual, intended to obtain
information about conditions or practices of which the respondent is
presumed to have knowledge.
To minimize frustrations in the use of questionnaires, the following pointers are
suggested:
1. The questionnaire should be short and should include only questions pertinent to the problem under study.
2. The questionnaire should be tried out, i.e., a trial run should be made.
3. It should be worded in such a way that the respondents can comprehend meanings easily (it can be translated into the vernacular language).
4. It should require a minimum of writing.
5. Questions should be so framed that the responses can be classified descriptively and, if possible, quantitatively. The latter leads to statistical treatment of the collected data.
6. It should be so arranged as to facilitate classification and analysis.
There are two kinds of questionnaires. One is structured, sometimes called closed-ended; the other is unstructured, or open-ended. For research purposes, closed-ended items are advantageous for the following reasons:
1. Pre-classifications are already made, which eases correction and recording.
2. Statistical treatment of the data is easy.
3. Tallying is facilitated.
4. Analysis and interpretation are easier.
5. It saves time and energy on the part of the researcher and the respondents.
6. More returns are expected because respondents do not have to compose their own answers.
There are, however, situations when the researcher desires to use unstructured or open-ended questions. One reason for their use is the ease of developing this type of question. The disadvantages are the following:
1. The respondents are made to think about what to respond, so most of these types of questions are left unanswered.
2. The questions may be vague and too broad, so that responses are not straight to the point.
3. Responses could be very varied, causing difficulty in classification and analysis.
4. Treatment of the data would then be difficult.
5. Fewer returns of questionnaires are expected.
3. Interviews. When the interview is used for research purposes, the
investigator gathers data from others in face-to-face contact.
Certain types of information can be secured only by direct contact
with people; for example, intimate facts of personal history,
personal habits and characteristics, personal beliefs, product
history, etc.
Interviews should consider the following. Generally, interview results are written up after the interview.
1. The interviewer must have a set of carefully prepared questions to be introduced into the conversation at appropriate points.
2. Careful planning must be done in advance of the interview.
3. Information to be gathered should be written down clearly and should be outlined so as not to waste time during the interview.
4. Spontaneity during the interview process should be observed.
5. If possible, the filling out of the interview form should be done out of sight of the interviewee.
4. Observation Records and Ratings. These instruments are commonly
used in descriptive and experimental studies. Usually, the researcher
sets up the observation sheet himself to fit the purpose and design of the
study. Observations are recorded on these sheets. Recordings could be
descriptive (qualitative) or quantitative ratings. Observation records and
ratings should be made as simple as possible. It would benefit the
observer to know how much time to allot for recording and for
observation.
5. Research Instrument Forms. These are used in collecting data for
analysis and comparative purposes. These forms synthesize in one
form what data are to be collected for a particular purpose of the
investigator or researcher. The forms should be structured to suit
the problems of the study. The filled-out form could reveal patterns
and trends which, when analyzed, could be utilized by management in
the improvement of sales, production, costing, predictions, etc.
6. Rating Scales. These are the most commonly used instruments for
appraisal or evaluation. They deal with differences in quality among
the characteristics to be measured. Rating scales direct the evaluator's
attention to different parts or aspects of the situation to be evaluated.
Usually the scale is made up of a series of examples of things to be
rated, arranged as a measuring instrument consisting of
units from the lowest or poorest to the highest or best quality.
The scales may be a series of numbers, a graduated line, qualitative
terms such as excellent, average, weak, poor, etc. A simple form of
rating scale is commonly employed when judging people’s activities
such as debates and competitions in music. An investigator may
assign a certain percentage to the phases under study.
7. Checklist. Checklists are lists of items with a place to check or
mark "yes" or "no," present or not present, adequate or
inadequate, and the like. A checklist may be used to direct attention to
the larger aspects of a situation or setting, or to check the
completeness of details according to the instruments used.
8. Use of Concrete Tools, Equipment, and Materials. In some research
studies, there is a need to use concrete tools and equipment. Some
pointers to consider in the choice of these materials are the following:
consistency of the measuring device used, durability of the equipment
or materials, accessibility of these items, cost of these
items/materials, economy in their use, usability, ease of cleaning the
materials, the "make" of the materials, longevity of use, and
multipurpose use.
Test on Validity and Reliability
VALIDITY. Validity means the degree to which a test or measuring
instrument measures what it intends to measure. The validity of a
measuring instrument has to do with its soundness, what the test or
questionnaire measures, its effectiveness, and how well it can be
applied.
Some criteria to determine the validity of a test:
1. Content Validity. This means the extent to which the content or topic of the test is truly representative of the content of the course.
2. Concurrent Validity. This is the degree to which the test agrees or correlates with a criterion set up as an acceptable measure.
3. Predictive Validity. This is determined by showing how well predictions made from the test are confirmed by evidence gathered at some subsequent time.
4. Construct Validity. This seeks agreement between a theoretical concept and a specific measuring device or procedure. It is the extent to which the test measures a theoretical construct or trait.
5. Face Validity. This criterion is an assessment of whether a measure appears, on the face of it, to measure the concept it is intended to measure.
6. Criterion-Related Validity. This applies to instruments that have been developed for usefulness as indicators of a specific trait or behavior, either now or in the future.
Types of Validity
1. Internal Validity. This is an estimate of how much your measurement is based on clean experimental techniques, so that you can make clear-cut inferences about cause-and-effect relations. If you choose experimental designs with random assignment of subjects, or you counterbalance interfering variables, then you get an experiment with high internal validity.
2. External Validity. This refers to the condition wherein results are generalizable or applicable to groups and environments outside of the experimental setting. It indicates that the results of the study, the confirmed cause-effect relationship, can be expected to be reconfirmed with other groups, in other settings, at other times, as long as the conditions are similar to those of the original study.
RELIABILITY
Reliability is the consistency of your measurement, or the
degree to which an instrument measures the same way each
time it is used under the same condition with the same
subjects. In short, it is the repeatability of your measurement.
A measure is considered reliable if a person’s score on the
same test given twice is similar. It is important to remember
that reliability is not measured, it is estimated.
Methods in Testing the Reliability of a Good
Research Instrument
1. Inter-Rater or Inter-Observer Reliability. The
extent to which raters or observers respond the
same way to a given phenomenon is one
measure of reliability. Where there is judgment,
there is disagreement.
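As an illustration (the two raters' judgments below are invented), inter-rater reliability can be estimated with simple percent agreement, or with Cohen's kappa, which corrects for the agreement expected by chance:

```python
from collections import Counter

# Hypothetical judgments by two observers on six cases.
rater_a = ["yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes"]

def percent_agreement(a, b):
    """Share of cases on which the two raters gave the same judgment."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(a)
    p_o = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    # Chance agreement: product of each rater's marginal proportions.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

print(percent_agreement(rater_a, rater_b))  # agreement on 5 of 6 cases
print(cohens_kappa(rater_a, rater_b))
```

Kappa is lower than raw agreement because two raters marking "yes" most of the time would agree often even by guessing.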
2. Test-Retest Method. Do customers provide the
same set of responses when nothing about their
experience or their attitudes has changed? You don’t
want your measurement system to fluctuate when all
other things are static.
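The test-retest idea can be sketched numerically: administer the same instrument to the same subjects twice and correlate the two sets of scores. The scores below are invented for illustration:

```python
from math import sqrt

# Hypothetical scores of five respondents on the same test, given twice.
first = [10, 12, 9, 15, 11]
second = [11, 13, 9, 14, 12]

def pearson_r(x, y):
    """Pearson correlation between two score lists of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"test-retest r = {pearson_r(first, second):.2f}")
```

A coefficient near 1 suggests the measurement is stable over time; a low coefficient suggests the instrument fluctuates even when nothing about the subjects has changed.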
3. Parallel Test Method. Getting the same or very similar
results from slight variations on the question or evaluation
method also establishes reliability.
4. Internal Consistency Reliability. This is by far the most
commonly used measure of reliability in applied settings.
It is popular because it is the easiest to compute using
software, since it requires only one sample of data to estimate the
internal consistency reliability. This measure of reliability is
described most often using Cronbach's alpha (sometimes
called coefficient alpha).
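A minimal sketch of Cronbach's alpha, using invented Likert-style ratings: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items.

```python
from statistics import pvariance

# Hypothetical ratings: each row is one respondent's answers to 4 items
# on the same scale, from a single administration of the instrument.
ratings = [
    [3, 4, 3, 5],
    [2, 3, 3, 4],
    [4, 5, 4, 5],
    [3, 3, 2, 4],
    [2, 2, 3, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = len(rows[0])
    item_vars = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, though the threshold depends on the purpose of the instrument.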
