Measurement Techniques in Business Research

Uploaded by Himanshu

BUSINESS RESEARCH METHODS

KMBN/A
UNIT-3
Scaling & measurement techniques: Concept of Measurement:
Need of Measurement; Problems in measurement in
management research – Validity and Reliability.
Levels of measurement – Nominal, Ordinal, Interval, Ratio.
Attitude Scaling Techniques: Concept of Scale – Rating Scales
viz. Likert Scales, Semantic Differential Scales, Constant Sum
Scales, Graphic Rating Scales – Ranking Scales – Paired
comparison & Forced Ranking – Concept and Application.
Validity and Reliability – Criterion for Good Measurement

Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.
Reliability
• A measurement is said to be reliable when it gives consistent results, i.e., when repeated measurements of the same thing give constant results.
• Reliability is the extent to which the same finding will be obtained if the research is repeated at another time or by another researcher. If the same finding can be obtained again, the instrument is consistent, or reliable.
• Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.
Example: You measure the temperature of a liquid sample several times under identical conditions. The thermometer displays the same temperature every time, so the results are reliable.

Example: If you step on a weighing scale several times in a row, you will get the same reading each time. These are reliable results obtained through repeated measures.

• Two dimensions underlie the concept of reliability:
1. Repeatability
2. Internal Consistency
Types of reliability

1. Test-retest: It measures the consistency of a measure across time: do you get the same results when you repeat the measurement? Example: A group of participants completes a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks or months apart and give the same answers, this indicates high test-retest reliability.
2. Inter-term (split-half): It measures the internal consistency of the measurement. Example: The results of the same test are split into two halves and compared with each other. If there is a lot of difference between the two halves, the inter-term (split-half) reliability of the test is low.
3. Inter-rater: It measures the consistency of results obtained at the same time by different raters (researchers). Example: Suppose five researchers measure the academic performance of the same student, using questions drawn from all the academic subjects, and arrive at very different results. This shows that the measure has low inter-rater reliability.
4. Parallel forms: It measures equivalence. It involves administering two different forms of the same test to the same participants. Example: Suppose the same researcher conducts two different forms of a test on the same topic with the same students, for instance a written test and an oral test. If the results are the same, the parallel-forms reliability of the test is high; if the results differ, it is low.
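The test-retest and split-half indices described above can be expressed as correlation coefficients. The sketch below is a minimal illustration with made-up scores; `pearson` is a small hand-rolled helper written for this example, not a named library routine.

```python
# Illustrative sketch: quantifying two reliability indices from invented data.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same questionnaire given to the same six people two weeks apart.
test1 = [12, 15, 9, 20, 17, 11]
test2 = [13, 14, 10, 19, 18, 12]
retest_r = pearson(test1, test2)  # close to 1.0 => high test-retest reliability

# Split-half: correlate scores on the odd- and even-numbered items, then apply
# the Spearman-Brown correction to estimate the reliability of the full test.
odd_half = [7, 8, 5, 10, 9, 6]
even_half = [6, 7, 4, 10, 8, 5]
half_r = pearson(odd_half, even_half)
split_half_reliability = (2 * half_r) / (1 + half_r)

print(round(retest_r, 3), round(split_half_reliability, 3))
```

A coefficient near 1 on either index suggests a consistent (reliable) instrument; values well below 1 point to the problems listed above.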
Validity
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, it produces results that correspond to real properties, characteristics, and variations in the physical or social world.
• High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably is not valid.

Validity refers to the accuracy of the measurement and shows how suitable a specific test is for a particular situation. If the results are accurate with respect to the researcher's situation, explanation, and prediction, then the research is valid.
• Example: Suppose a questionnaire is distributed among a group of people to check the quality of a skincare product, and the same questionnaire is then repeated with many groups. If you get the same responses from the various participants, the questionnaire has high reliability, which supports (though does not by itself establish) its validity.
• Example: Your weighing scale shows different results each time you weigh yourself within a day, even though you handle it carefully and weigh yourself both before and after meals. The scale might be malfunctioning, which means your method has low reliability; hence you are getting inaccurate and inconsistent results that are not valid.
For example, using a variable like employee behaviour to measure consumer satisfaction in a big shopping mall raises a validity issue. Employee behaviour is not the only determinant of consumer satisfaction; various other factors, such as pricing policies, discount policy and parking facilities, may also generate consumer satisfaction. Hence, a tool designed to measure consumer satisfaction solely from employee behaviour may not be a valid measurement tool. Researchers are therefore always concerned about the validity of their measuring instruments.
Validity is discussed in the context of two terms, viz., internal and external validity. External validity refers to the generalizability of research findings to the external environment (population, variables, etc.); in other words, it is the ability of the findings to be generalized across the universe of interest. Internal validity, on the other hand, is the ability of a research instrument to measure what it is purported (supposed) to measure.
Types of Validity

1. Content validity: It shows whether the test covers all aspects of what it is meant to measure. Example: A language test designed to measure writing, reading, listening and speaking skills covers all four aspects of language ability; this indicates that the test has high content validity.
2. Face validity: It concerns whether a test or procedure appears, on the face of it, to measure what it claims to measure. Example: the type of questions included in a question paper, the time and marks allotted, and the number and categories of questions. Does it look like a good paper for measuring the academic performance of students?
3. Construct validity: It shows whether the test measures the correct construct (ability, attribute, trait or skill). Example: A self-esteem questionnaire could be assessed by also measuring other traits known or assumed to be related to self-esteem (such as social skills and optimism). A strong correlation between the self-esteem scores and the scores for the associated traits would indicate high construct validity.
4. Criterion validity: It refers to how well the measurement of one variable can predict the response of another variable. Example: A job applicant takes a performance test during the interview process. If this test accurately predicts how well the employee will perform on the job, the test is said to have criterion validity.
Any measurement tool should be able to measure a particular variable accurately; it must measure what it is supposed to measure. A good instrument enhances the quality of research results, so it becomes necessary to assess the 'goodness' of the measures developed. Any instrument that meets the tests of reliability, validity and practicality is said to possess 'goodness' of measure.
How to Increase Reliability
• Use an appropriate questionnaire to measure the competency level.
• Ensure a consistent environment for participants.
• Make the participants familiar with the criteria of assessment.
• Train the participants appropriately.
• Analyse the research items regularly to avoid poor performance.
How to Increase Validity
• The respondents should be motivated.
• The intervals between the pre-test and post-test should not be lengthy.
• Dropouts should be kept to a minimum.
• Inter-rater reliability should be ensured.
• Control and experimental groups should be matched with each other.
