
RESEARCH METHODS

IMPORTANT QUESTIONS

UNIT II

Part A

1. What is Research Design?


Research design is a master plan specifying the methods and procedures for
collecting and analyzing the needed information.
There are three basic types of research design: exploratory, descriptive, and causal.

It constitutes the blueprint for the collection, measurement and analysis of data.

2. What is Reliability?
Reliability is the accuracy and precision of a measurement procedure.

3. What is validity?
Validity is the extent to which a test (instrument) measures what we actually wish to measure.

4. What is scaling?
Scaling is the procedure for the assignment of numbers to a property of objects in order to impart
some of the characteristics of numbers to the properties in question.

5. What is a focus group?


Focus groups are discussions on a topic involving a small group of participants led by a trained
moderator. They include 6-10 participants. The facilitator uses group dynamics principles to
focus or guide the group in an exchange of ideas, feelings, and experiences on a specific topic.

6. Define Experimental Research Design

An experimental research design is a procedure for devising an experimental setting such that a change in the dependent variable may be solely attributed to a change (manipulation) in an independent variable, while the effects of extraneous variables are controlled.
7. Define Exploratory Research.


Exploratory research is most commonly unstructured, informal research that is undertaken to
gain background information about the general nature of the research problem.
Uses
– Gain Background Information
– Define Terms
– Clarify Problems and Hypotheses (refine research objectives)
– Establish Research Priorities

8. What is Experimental Group and Control Group?


In an experimental research design, experimental groups are those which are given the treatment, and control groups are those which are not given the treatment, so that the test results can be compared.

Part B

1. Explain Internal Validity and External Validity.

Validity

Validity is the extent to which a test (instrument) measures what we actually wish to measure.

• An experiment is valid if it has:


– Internal validity: the extent to which the change in the dependent variable is actually due to the change in the independent variable.

– External validity: the extent to which the relationship observed between the independent and dependent variables during the experiment is generalizable to the “real world.”

METHODS OF MEASURING VALIDITY

• There are four good methods of estimating validity:

• Face

• Content

• Criterion

• Construct

Face Validity

• Face validity is the least statistical estimate (validity overall is not as easily quantified as
reliability) as it's simply an assertion on the researcher's part claiming that they've reasonably
measured what they intended to measure. It's essentially a "take my word for it" kind of validity.
Usually, a researcher asks a colleague or expert in the field to vouch for the items
measuring what they were intended to measure.

Content Validity

• Content Validity goes back to the ideas of conceptualization and operationalization. If the
researcher has focused in too closely on only one type or narrow dimension of a construct or
concept, then it's conceivable that other indicators were overlooked. In such a case, the study
lacks content validity. Content validity is making sure you've covered all the conceptual space.
The most common check is a reliability-style approach: simply look over your inter-item correlations.

Criterion Validity

• Criterion Validity relates to our ability to predict some outcome or estimate the existence of some
current condition.

• This form of validity reflects the success of measures used for some empirical estimating purpose.

• Qualities of Criterion Validity:

a) Relevance: the criterion is defined in terms judged to be the proper measure

b) Freedom from bias: each subject has an equal opportunity to score well

c) Reliability: the criterion is stable/reproducible

d) Availability: the information specified must be available
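A common way to express criterion validity is the correlation between instrument scores and the external criterion. The following is a minimal Python sketch of that idea; the pearson_r helper and all numbers are hypothetical illustration values, not data from any study.

```python
# Minimal sketch: estimating criterion validity as the Pearson correlation
# between scores on a new instrument and an external criterion measure.
# All values below are hypothetical.

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: aptitude test scores and later job-performance ratings
test_scores = [62, 75, 58, 90, 70, 83, 66]
performance = [3.1, 3.8, 2.9, 4.5, 3.4, 4.2, 3.2]

print(f"Criterion validity (r) = {pearson_r(test_scores, performance):.2f}")
```

A high positive correlation would suggest the instrument predicts the criterion well; a value near zero would suggest poor criterion validity.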

Construct Validity

• Construct Validity: A measure is said to possess construct validity to the degree that it conforms to predicted correlations with other theoretical propositions.

• Construct validity is the degree to which scores on a test can be accounted for by the explanatory constructs of a sound theory.

2. Describe the various types of scaling. What are the advantages and disadvantages of each
scale?

Scaling
Scaling is a procedure for attempting to determine quantitative measures of subjective
abstract concepts.
“ Scaling is the procedure for the assignment of numbers to a property of objects in order
to impart some of the characteristics of numbers to the properties in question” – Edward

Measurement Scales
 Nominal
 Ordinal
 Interval
 Ratio
Nominal Scale (Name/Label)
 A system of assigning numbers, names, or symbols to events in order to label them.
 Nominal scales provide convenient ways of keeping track of people, objects and events.
 It indicates no order or distance relationship and has no arithmetic origin.
 Nominal data are thus counted data.
 Nominal scale is the least powerful level of measurement.
 Eg: Assignment of numbers to cricket players
 Eg: M-Male, F- Female

Ordinal Scale ( Rank/Order)

 Rank order represents ordinal Scale

 The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. (The real difference between ranks 1 & 2 may be more or less than the difference between ranks 5 & 6.)

 Eg: Rank your preference of choosing a bike

Characteristic: Rank

Style: 5

Price: 1

Comfort: 4

Mileage: 2

Durability: 3

Interval Scale

 Interval Scale possesses all the characteristics of nominal and ordinal scales.
 Intervals are adjusted in terms of some rule that has been established as a basis for making the
units equal.
 Interval scales can have an arbitrary zero, but it is not possible to determine for them what may be called an absolute zero or the unique origin.
 Eg: The Fahrenheit scale is an example of an interval scale and shows similarities in what one can and cannot do with it. An increase in temperature from 30 degrees to 40 degrees involves the same increase in temperature as an increase from 60 degrees to 70 degrees, but one cannot say that a temperature of 60 degrees is twice as warm as a temperature of 30 degrees, because both numbers depend on the fact that the zero on the scale is set arbitrarily at the freezing point of water.

Ratio Scales

 A ratio scale is an interval scale with a natural origin, possessing all the characteristics of the number system.
 Eg: The number of minor traffic rule violations
 Eg: The number of incorrect letters on a page
 One can make statements like “A’s performance is twice as good as B’s performance.” The ratio involved does have significance and facilitates a kind of comparison which is not possible in the case of an interval scale.
 Eg: Exp : a) Below 5000 b) 5000-10000 c) 10,001-15,000 d) Above 15,000
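The four measurement scales differ mainly in which operations on the numbers are meaningful. The sketch below illustrates this with small hypothetical data sets (gender labels, preference ranks, Fahrenheit temperatures, and error counts are all invented for illustration).

```python
# Minimal sketch contrasting the four measurement scales with hypothetical data.
from collections import Counter
from statistics import median

# Nominal: labels only -> counting is the meaningful operation
genders = ["M", "F", "F", "M", "F"]
print(Counter(genders))                                   # Counter({'F': 3, 'M': 2})

# Ordinal: rank order -> median/percentiles are meaningful, differences are not
preference_ranks = [1, 3, 2, 5, 4]
print(median(preference_ranks))                           # 3

# Interval: equal units, arbitrary zero -> differences are meaningful, ratios are not
temps_f = [30, 40, 60, 70]
print(temps_f[1] - temps_f[0], temps_f[3] - temps_f[2])   # 10 10 (same increase)

# Ratio: true zero -> ratios are meaningful
errors_per_page = [2, 4]
print(errors_per_page[1] / errors_per_page[0])            # 2.0 ("twice as many")
```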

4. Explain the various scaling techniques with suitable examples

The following rating scales are often used in organizational research.

 Likert scale/Summated Rating Scales

 Semantic differential scale

 Magnitude Scaling

 Thurstone Scales (differential)

 Guttman Scales

Likert’s scale/Summated Rating Scales:

Is designed to examine how strongly subjects agree or disagree with statements on a 5-point scale, as follows:

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree Nor Disagree, 4 = Agree, 5 = Strongly Agree

• This is an Interval scale and the differences in responses between any two points on the
scale remain the same.

• A number of items collectively measure one construct (Job Satisfaction)

• A number of items collectively measure a dimension of a construct and a collection of


dimensions will measure the construct (Self-esteem)
• Must contain multiple items

• Each individual item must measure something that has an underlying, quantitative
measurement continuum

• There can be no right or wrong answers, as opposed to multiple-choice questions

• Items must be statements to which the respondent assigns a rating

• Cannot be used to measure knowledge or ability, but can be used to measure familiarity
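Because a Likert scale is a summated rating scale, a respondent's score on the construct is obtained by adding the item ratings, with reverse-worded items flipped first. The sketch below shows this with invented job-satisfaction items and ratings; the item wordings and values are hypothetical.

```python
# Minimal sketch: a summated (Likert) rating score for one construct,
# assuming hypothetical 5-point responses where one item is reverse-worded.

responses = {  # item -> rating (1 = Strongly Disagree ... 5 = Strongly Agree)
    "I find my work meaningful": 4,
    "I am satisfied with my pay": 3,
    "I often think about quitting": 2,        # reverse-worded item
    "I would recommend this employer": 5,
}
reverse_items = {"I often think about quitting"}

def summated_score(responses, reverse_items, scale_min=1, scale_max=5):
    total = 0
    for item, rating in responses.items():
        # Flip reverse-worded items so a higher rating always means more satisfaction
        if item in reverse_items:
            rating = scale_max + scale_min - rating
        total += rating
    return total

print(summated_score(responses, reverse_items))   # 16 out of a possible 20
```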

Semantic Differential Scale

• We use this scale when several attributes are identified at the extremes of the scale. For
instance, the scale would employ such terms as:

• Good – Bad

• Strong – Weak
• Hot – Cold
• This scale is treated as an Interval scale.

Example

• What is your opinion on your supervisor?


• Responsive--------------Unresponsive
• Beautiful-----------------Ugly
• Courageous-------------Timid

• Uses a set of scale anchored by their extreme responses using words of opposite meaning.
• Example:
Dark ___ ___ ___ ___ ___ Light
Short ___ ___ ___ ___ ___ Tall
Evil ___ ___ ___ ___ ___ Good
• Four to seven categories are ideal

Graphic Rating Scale

• A graphical representation helps the respondents to indicate on this scale their answers to
a particular question by placing a mark at the appropriate point on the line, as in the
following example:

• On a scale of 1 to 10, how would you rate your supervisor?

Magnitude Scaling
• Attempts to measure constructs along a numerical, ratio level scale

– Respondent is given an item with a pre-assigned numerical value attached to it to


establish a “norm”

– The respondent is asked to rate other items with numerical values as a proportion
of the “norm”

– Very powerful if reliability is established
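The sketch below illustrates the magnitude-scaling idea: one item carries a pre-assigned value (the "norm"), and every other item is interpreted as a proportion of it. The norm item and the respondent's judgements are hypothetical.

```python
# Minimal sketch of magnitude scaling: a respondent rates items relative to a
# "norm" item that carries a pre-assigned numerical value. Data are hypothetical.

norm_item, norm_value = "Speeding 10 km/h over the limit", 100

judgements = {                       # respondent's raw magnitude judgements
    "Running a red light": 250,      # felt 2.5 times as serious as the norm
    "Illegal parking": 40,           # felt 0.4 times as serious
    "Not wearing a seat belt": 120,
}

for item, value in judgements.items():
    print(f"{item}: {value / norm_value:.2f} x the norm ({norm_item})")
```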

Thurstone Scales

– Items are formed

– Panel of experts assigns values from 1 to 11 to each item

– Mean or median scores are calculated for each item, and statements evenly spread across the scale are selected.

• Example:

Please check the item that best describes your level of willingness to try new tasks

– I seldom feel willing to take on new tasks (1.7)

– I will occasionally try new tasks (3.6)

– I look forward to new tasks (6.9)

– I am excited to try new tasks (9.8)
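A short sketch of the item-scaling step is given below: each item's scale value is taken as the median of the values the expert panel assigned, and items are then listed in order of scale value so that a spread across the 1-11 continuum can be chosen. The judge ratings are hypothetical.

```python
# Minimal sketch of Thurstone (differential) scale construction with hypothetical
# panel ratings: compute each item's median scale value, then order the items.
from statistics import median

judge_ratings = {   # item -> values (1-11) assigned by a panel of judges
    "I seldom feel willing to take on new tasks": [1, 2, 2, 1, 2],
    "I will occasionally try new tasks":          [3, 4, 4, 3, 4],
    "I look forward to new tasks":                [7, 7, 6, 7, 8],
    "I am excited to try new tasks":              [10, 9, 10, 10, 11],
}

scale_values = {item: median(vals) for item, vals in judge_ratings.items()}
for item, value in sorted(scale_values.items(), key=lambda kv: kv[1]):
    print(f"({value}) {item}")
```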

Guttman Scales

 Also known as Scalograms

 Both the respondents and items are ranked

 Cutting points are determined (Goodenough-Edwards technique)

 Coefficient of Reproducibility (CReg) - a measure of goodness of fit between the


observed and predicted ideal response patterns

 Keep items with CReg of 0.90 or higher
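One common way to compute the coefficient of reproducibility is 1 minus the proportion of responses that deviate from each respondent's ideal cumulative pattern. The sketch below assumes 0/1 responses to items already ordered from easiest to hardest; the response matrix is hypothetical.

```python
# Minimal sketch: coefficient of reproducibility for a Guttman scale with
# hypothetical 0/1 responses. Items are ordered from "easiest" to "hardest";
# the ideal pattern for a respondent with score k is k ones followed by zeros.

responses = [            # each row: one respondent, items ordered easy -> hard
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],        # deviates from the ideal cumulative pattern
    [1, 1, 1, 1],
]

def coefficient_of_reproducibility(responses):
    errors, total = 0, 0
    for row in responses:
        k = sum(row)                               # respondent's scale score
        ideal = [1] * k + [0] * (len(row) - k)     # predicted ideal pattern
        errors += sum(a != b for a, b in zip(row, ideal))
        total += len(row)
    return 1 - errors / total

print(f"CReg = {coefficient_of_reproducibility(responses):.2f}")  # keep if >= 0.90
```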

5. Explain the types of research design.


The research design is the master plan specifying the methods and procedures for collecting
and analyzing the needed information. Although every problem and research objective may
seem unique, there are usually enough similarities among problems and objectives to allow
decisions to be made in advance about the best plan to resolve the problem. There are some
basic marketing research designs that can be successfully matched to given problems and
research objectives.

Three traditional categories of research design:

a. Exploratory

b. Descriptive

c. Causal

The choice of the most appropriate design depends largely on the objectives of the research
and how much is known about the problem and these objectives. The overall research design
for a project may include one or more of these three designs as part(s) of it. Further, if more
than one design is to be used, typically we progress from Exploratory toward Causal.

Research Objective → Appropriate Design

 To gain background information, to define terms, to clarify problems and develop hypotheses, to establish research priorities, to develop questions to be answered → Exploratory

 To describe and measure marketing phenomena at a point in time → Descriptive

 To determine causality, to test hypotheses, to make “if-then” statements, to answer questions → Causal

Exploratory Research:

• Exploratory research is most commonly unstructured, “informal” research that is undertaken to


gain background information about the general nature of the research problem. Exploratory
research is usually conducted when the researcher does not know much about the problem and
needs additional information or desires new or more recent information.

• Exploratory research is used in a number of situations:

• To gain background information

• To define terms
• To clarify problems and hypotheses

• To establish research priorities

A variety of methods are available to conduct exploratory research:

• Secondary Data Analysis

Secondary data analysis is also called a literature search. Within secondary data
exploration, researchers should start first with an organization’s own data archives. The
second source of secondary data is published documents prepared by authors outside the
sponsor organization.

• Experience Surveys

Experience survey means the survey of people who have had practical experience with
the problem to be studied. These individuals can be top executives, sales managers/
executives, wholesalers and retailers possessing valuable knowledge and information
about the problem environment.

• Case Analysis

A case study places more emphasis on full contextual analysis of a few events or
conditions and their interrelations. Case studies rely on qualitative data and emphasize
the use of results for insight into problem-solving, evaluation, and strategy. While case
studies are not considered “scientific,” they do play an important role in challenging
theory, providing new hypotheses, and offering new ideas on constructs.

• Focus Groups

Focus groups are discussions on a topic involving a small group of participants led by a
trained moderator. They include 6-10 participants. Mini-focus groups with just 3 people
are increasingly common. The facilitator uses group dynamics principles to focus or
guide the group in an exchange of ideas, feelings, and experiences on a specific topic.
Focus groups can take place in a variety of settings, but many take place in a focus group room equipped with a one-way window and recording devices. They are widely used in
business research.

• Projective Techniques

A projective technique is an unstructured, indirect form of questioning that encourages


respondents to project their underlying motivations, beliefs, attitudes or feelings
regarding the issues of concern. In projective techniques, respondents are asked to
interpret the behaviour of others rather than describe their own behaviour.

Descriptive Research:
• Descriptive research is undertaken to provide answers to questions of who, what, where, when,
and how – but not why.

• Two basic classifications:

• Cross-sectional studies

• Longitudinal studies

Cross-sectional studies measure units from a sample of the population at only one point in time.
Sample surveys are cross-sectional studies whose samples are drawn in such a way as to be
representative of a specific population. On-line survey research is being used to collect data for
cross-sectional surveys at a faster rate.

Longitudinal studies repeatedly draw sample units of a population over time. One method is to
draw different units from the same sampling frame. A second method is to use a “panel” where
the same people are asked to respond periodically. On-line survey research firms recruit panel
members to respond to online queries.

• Two types of panels:

• Continuous panels ask panel members the same questions on each panel measurement.

• Discontinuous (Omnibus) panels vary questions from one time to the next.

• Longitudinal data used for:

• Market tracking, brand-switching, attitude and image checks

Causal Research:

• Causality may be thought of as understanding a phenomenon in terms of conditional statements


of the form “If x, then y.” Causal relationships are typically determined by the use of experiments, but other methods are also used. An experiment is defined as manipulating (changing values/situations) one or more independent variables to see how the dependent variable(s) is/are affected, while also controlling the effects of additional extraneous variables.

• Independent variables: those over which the researcher has control and wishes to manipulate, e.g., package size, ad copy, price.

• Dependent variables: those over which the researcher has little to no direct control, but has a strong interest in testing, e.g., sales, profit, market share.

• Extraneous variables: those that may affect a dependent variable but are not independent variables.

Research Variables:
Research variable: the research problem requires identification of the key variables under the particular study. To carry out an investigation, it becomes imperative to convert the concepts and constructs to be studied into empirically testable and observable variables. A variable is generally a symbol to which we assign numerals or values. A variable may be dichotomous in nature, that is, it can possess only two values, such as male/female or customer/non-customer. Values that can only fall into a prescribed number of categories are discrete (non-continuous) variables, whereas variables that can take on any value within a range are continuous variables.

6. Explain the various types of experimental design.

An experiment is defined as manipulating (changing values/situations) one or more


independent variables to see how the dependent variable(s) is/are affected, while also
controlling the effects of additional extraneous variables.

d. Independent variables: those over which the researcher has control and wishes to manipulate, e.g., package size, ad copy, price.

e. Dependent variables: those over which the researcher has little to no direct control, but has a strong interest in testing, e.g., sales, profit, market share.

f. Extraneous variables: those that may affect a dependent variable but are not independent variables.

An experimental design is a procedure for devising an experimental setting such that a


change in the dependent variable may be solely attributed to a change in an independent
variable.

Symbols of an experimental design:

O = measurement of a dependent variable

X = manipulation, or change, of an independent variable

R = random assignment of subjects to experimental and control groups

E = experimental effect

After-Only Design: X O1

One-Group, Before-After Design: O1 X O2

Before-After with Control Group:

Experimental group: O1 X O2

Control group: O3 O4

Where E = (O2 – O1) – (O4 – O3)
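The sketch below evaluates that formula for the before-after with control group design, using hypothetical before/after sales figures as the measurements O1 through O4.

```python
# Minimal sketch: computing the experimental effect E for a before-after with
# control group design. The measurement values are hypothetical sales figures.

O1, O2 = 120, 150    # experimental group: before (O1) and after (O2) the treatment X
O3, O4 = 118, 128    # control group: before (O3) and after (O4), no treatment

E = (O2 - O1) - (O4 - O3)   # change due to X, net of changes common to both groups
print(f"Experimental effect E = {E}")   # 30 - 10 = 20
```

Subtracting the control group's change removes the effect of extraneous factors that acted on both groups between the two measurements.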


• An experiment is valid if:

• the observed change in the dependent variable is, in fact, due to the independent variable
(internal validity)

• if the results of the experiment apply to the “real world” outside the experimental setting
(external validity)

• Two broad classes:

• Laboratory experiments: those in which the independent variable is manipulated and


measures of the dependent variable are taken in a contrived, artificial setting for the
purpose of controlling the many possible extraneous variables that may affect the
dependent variable

• Field experiments: those in which the independent variables are manipulated and
measurements of the dependent variable are made on test units in their natural setting

• Test marketing is the phrase commonly used to indicate an experiment, study, or test that is
conducted in a field setting.

• Two broad uses of test marketing:

• To test the sales potential for a new product or service

• To test variations in the marketing mix for a product or service

8. Explain the types of research variables.

Variable

 A concept which can take on different quantitative values is called a variable.

 As such the concepts like weight, height, income are all examples of variables.

Types of Research Variables

 In order to conduct research, you must be able to identify your variables. There are six common variable types in a research project:

1. Dependent variables

2. Independent variables

3. Extraneous variables

4. Moderator variables

5. Control variables

6. Intervening variables
Independent and Dependent variables

 If one variable depends upon or is a consequence of another variable, it is termed a dependent variable, and the variable that influences the dependent variable is termed an independent variable. For instance, if we say that height depends upon age,

 then height is a dependent variable and age is an independent variable.

Extraneous variables
 The independent variables which are not directly related to the purpose of the study but affect the dependent variable are known as extraneous variables.
 For instance, assume that a researcher wants to test the hypothesis that there is a relationship
between children’s school performance and their self-confidence, in which
case the latter is an independent variable and the former, a dependent variable. In this context,
intelligence may also influence the school performance. However, since it is not directly related
to the purpose of the study undertaken by the researcher, it would be known as an extraneous
variable.

Moderator Variable
 Moderating variables are the ones that have a strong contingent effect on the relationship between
the independent and dependent variables. They have the potential to modify the direction and
magnitude of the above stated association.
 There might be instances when confusion might arise between a moderating variable and an
independent variable.

 Proposition 1. Turnover intention (DV) is an inverse function of organisational commitment


(IV), especially for workers who have a higher job satisfaction level (MV).

 While another study might have the following proposition to test.

 Proposition 2. Turnover intention (DV) is an inverse function of job satisfaction (IV), especially
for workers who have a higher organizational commitment (MV).

 Thus, the two propositions study the relation between the same three variables. However, the decision to classify one as independent and the other as moderating depends on the research interest of the decision maker.

Control Variable

 A variable that is held constant in order to assess or clarify the relationship between two
other variables. Control variable should not be confused with controlled variable, which is an
alternative term for independent variable.

 One important characteristic of a good research design is to minimize the influence or effect of
extraneous/inappropriate variable(s). The technical term ‘control’ is used when we design the
study minimizing the effects of extraneous independent variables. In experimental research, the term ‘control’ is used to refer to restraining experimental conditions.

Intervening variables

 Refers to abstract processes that are not directly observable but that link the independent and
dependent variables.

 An intervening variable is a temporal occurrence which follows the independent variable and precedes the dependent variable. For example, the introduction of an electronic advertisement for a new diet drink (IV) will result in increased brand awareness (intervening variable), which in turn will impact the first-quarter sales (DV). This effect would be significantly higher amongst the younger female population (MV).
