
Note: material compiled from various internet sources

EXPERIMENTS AS A RESEARCH METHOD IN BUSINESS

Experiments can be characterized by four general parameters:


- the synthetic-analytic continuum: experimental research is rather analytic; it focuses on a specific
element (a "constituent part") of the larger process investigated
- the heuristic (hypothesis-generating) vs. deductive (hypothesis-testing) factor: in contrast to qualitative
research, virtually all experiments are designed to test hypotheses
- the amount of control over the research context: experiments generally attempt to control the research
environment to a considerable degree, which is both a strength and a weakness. On one hand, control allows the
researcher to isolate a particular variable and focus on it in order to determine its effect on other
variables. Because of this feature, only experimental studies can claim to show any degree of causality;
qualitative and descriptive research can reveal only relationships or processes. On the other hand,
control has several disadvantages: a) it often makes the research situation unnatural, so
subjects may not behave normally in an experiment; b) it is virtually impossible to control all the
variables in a research situation involving human beings; c) controlled experiments often raise serious
questions about research ethics
- the level of explicitness in data collection: high. Carefully focused instruments (tests, observations,
questionnaires, etc.) that generate precise quantitative data are the norm in experiments. These data
can be analyzed using statistical tests of significance in order to accept or reject the hypothesis (a minimal sketch follows).
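For instance, assuming made-up post-test scores for a control and an experimental group, an independent-samples t-test from the SciPy library can be used to accept or reject the hypothesis at a chosen significance level. A minimal sketch in Python:

    # Compare two groups' scores with an independent-samples t-test.
    from scipy import stats

    control = [72, 68, 75, 70, 66, 74, 69, 71]        # made-up scores
    experimental = [78, 82, 75, 80, 77, 85, 79, 81]   # made-up scores

    t_stat, p_value = stats.ttest_ind(experimental, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # With the conventional significance level alpha = 0.05, the null
    # hypothesis (no difference between groups) is rejected when p < alpha.
    print("reject H0" if p_value < 0.05 else "fail to reject H0")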

Major types of experimental designs:


- true-experimental designs
Conditions:
 Random selection of subjects
 Use of control groups
 Random assignments to control and experimental groups
 Random assignment of groups to control and experimental conditions
In order for an experiment to follow a true-experimental design, it must meet all four of the preceding criteria.
There is some variation in true-experimental designs, but that variation lies in when the treatment is given to
the experimental group, or in when observations or measurements (pre-test, mid-test, post-test) are made.
Advantages:
 Greater internal validity
 Causal claims can be investigated (biggest explanatory power in terms of causality)
Disadvantages:
 Less external validity (artificially created conditions that differ from real-world conditions)
 Not very practical

- quasi-experimental designs
Are usually constructions that already exist in the real world. Those designs that fall into the quasi-
experimental category fall short in some way of the criteria for the true experimental group. A quasi-
experimental design will have some sort of control and experimental group, but these groups probably weren't
randomly selected. Random selection is where true-experimental and quasi-experimental designs differ.
Advantages:
 Greater external validity (more like real-world conditions)
 Much more feasible given time and logistical constraints
Disadvantages:
 Fewer variables controlled (weaker causal claims)

- pre-experimental designs
Pre-experimental designs lack random selection and, in most cases, employ only a single group. This group
receives the "treatment"; there is no control group. Pilot studies, one-shot case studies, and most research
using only one group fall into this category.
Advantages:
 Very practical
 Set the stage for further research
Disadvantages:
 Lower validity

Types of variables in experiments:


DEPENDENT VARIABLES
Show the effect of manipulating or introducing the independent variables. For example, if the independent
variable is the use or non-use of a new language teaching procedure, then the dependent variable might be
students' scores on a test of the content taught using that procedure. In other words, the variation in the
dependent variable depends on the variation in the independent variable.

INDEPENDENT VARIABLES
Are those that the researcher has control over. This "control" may involve manipulating existing variables
(e.g., modifying existing methods of instruction) or introducing new variables (e.g., adopting a totally new
method for some sections of a class) in the research setting. Whatever the case may be, the researcher expects
that the independent variable(s) will have some effect on (or relationship with) the dependent variables.

INTERVENING VARIABLES
Refer to abstract processes that are not directly observable but that link the independent and dependent
variables. In language learning and teaching, for example, they are usually inside the subjects' heads,
including various language learning processes which the researcher cannot observe. For example, if the use of a
particular teaching technique is the independent variable and mastery of the objectives is the dependent
variable, then the language learning processes used by the subjects are the intervening variables.

MODERATOR VARIABLES
Affect the relationship between the independent and dependent variables by modifying the effect of the
intervening variable(s). Unlike extraneous variables, moderator variables are measured and taken into
consideration. Typical moderator variables in TESL and language acquisition research (when they are not the
major focus of the study) include the sex, age, culture, or language proficiency of the subjects.

CONTROL VARIABLES
Language learning and teaching are very complex processes. It is not possible to consider every variable in a
single study. Therefore, the variables that are not measured in a particular study must be held constant,
neutralized/balanced, or eliminated, so they will not have a biasing effect on the other variables. Variables
that have been controlled in this way are called control variables.

EXTRANEOUS VARIABLES
Are those factors in the research environment which may have an effect on the dependent variable(s) but which
are not controlled. Extraneous variables are dangerous. They may damage a study's validity, making it
impossible to know whether the effects were caused by the independent and moderator variables or some
extraneous factor. If they cannot be controlled, extraneous variables must at least be taken into consideration
when interpreting results.

More details about experiments

Experiments – a scientific method that arbitrates between competing models or hypotheses, used to test
existing theories or new hypotheses in order to support or disprove them.
An experiment or test can be carried out using the scientific method to answer a question or investigate a
problem. A good experiment usually tests a hypothesis. However, an experiment may also test a question or
test previous results. First an observation is made, then a question is asked, or a problem arises. Next,
a hypothesis is formed, then experimentation is used to test that hypothesis. The results are analyzed,
a conclusion is drawn, sometimes a theory is formed, and results are communicated through research papers.
It is important that one knows all the factors in an experiment. It is also important that the results are as
accurate as possible. If an experiment is carefully conducted, the results usually either support or disprove
the hypothesis. An experiment can never "prove" a hypothesis; it can only add support. However, one repeatable
experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control
the possible confounding factors - any factors that would affect the accuracy or repeatability of the experiment
or the ability to interpret the results. The results of an experiment can never uniquely identify the
explanation; they can only split the range of available models into two groups: those that are consistent with
the results and those that aren't.
An experiment usually refers to observations in which conditions are artificially controlled and manipulated by
the experimenter to eliminate extraneous factors, often in a scientific laboratory.

A methodology for designing experiments was proposed by Ronald A. Fisher, in his book The Design of
Experiments (1935). The most important ideas of experimental design:
Comparison
In many fields of study it is hard to reproduce measured results exactly. Comparisons between treatments are
much more reproducible and are usually preferable. Often one compares against a standard, scientific control,
or traditional treatment that acts as a baseline.
Randomization
There is an extensive body of mathematical theory that explores the consequences of making the allocation of
units to treatments by means of some random mechanism such as tables of random numbers, or the use of
randomization devices such as playing cards or dice. Provided the sample size is adequate, the risks associated
with random allocation (such as failing to obtain a representative sample in a survey, or having a serious
imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can
be managed down to an acceptable level. Random does not mean haphazard, and great care must be taken that
appropriate random methods are used.
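As an illustration, random allocation of units to a treatment and a control group can be done with any trustworthy random mechanism. A minimal Python sketch, with hypothetical placeholder subject names:

    # Randomly allocate 20 experimental units to two equal groups.
    import random

    subjects = [f"subject_{i}" for i in range(1, 21)]
    random.shuffle(subjects)              # the random mechanism

    half = len(subjects) // 2
    treatment_group = subjects[:half]
    control_group = subjects[half:]

    print("treatment:", treatment_group)
    print("control:  ", control_group)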
Replication
Measurements are usually subject to variation and uncertainty. Measurements are repeated and full
experiments are replicated to help identify the sources of variation and to better estimate the true effects of
treatments.
Blocking
Blocking is the arrangement of experimental units into groups (blocks) that are similar to one another.
Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in
the estimation of the source of variation under study.
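A minimal sketch of blocking, assuming a hypothetical nuisance factor "site": units are grouped by site, and treatment is randomized separately within each block so the site effect cannot masquerade as a treatment effect.

    # Randomize treatment assignment separately within each block.
    import random

    blocks = {
        "site_A": ["u1", "u2", "u3", "u4"],
        "site_B": ["u5", "u6", "u7", "u8"],
    }

    assignment = {}
    for block, units in blocks.items():
        shuffled = units[:]
        random.shuffle(shuffled)
        half = len(shuffled) // 2
        for unit in shuffled[:half]:
            assignment[unit] = ("treatment", block)
        for unit in shuffled[half:]:
            assignment[unit] = ("control", block)

    print(assignment)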
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out.
Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently
distributed if the data are normal. Because of this independence, each orthogonal contrast provides information
different from that of the others. If there are T treatments and T − 1 orthogonal contrasts, all the information
that can be captured from the experiment is obtainable from the set of contrasts.
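To make the vector representation concrete, here is a minimal Python check of orthogonality for a standard pair of contrasts over T = 3 treatments (so T − 1 = 2 contrasts):

    # Two contrasts are orthogonal if the dot product of their vectors is 0.
    c1 = [1, -1, 0]    # treatment 1 vs. treatment 2
    c2 = [1, 1, -2]    # treatments 1 and 2 vs. treatment 3

    dot = sum(a * b for a, b in zip(c1, c2))
    print("orthogonal" if dot == 0 else "not orthogonal")   # orthogonal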

Experimental research designs are used for the controlled testing of causal processes.
The general procedure is: one or more independent variables are manipulated to determine their effect on a
dependent variable. These designs can be used where:
1. There is time priority in a causal relationship (cause precedes effect),
2. There is consistency in a causal relationship (a cause will always lead to the same effect), and
3. The magnitude of the correlation is great.
The most common applications of these designs in marketing research and experimental economics are test
markets and purchase labs. The techniques are commonly used in other social sciences including
sociology, psychology, and social work.

In an attempt to control for extraneous factors, several experimental research designs have been developed,
including:
 Classical pretest-posttest - The total population of participants is randomly divided into two samples:
the control sample and the experimental sample. Only the experimental sample is exposed to the manipulated
variable. The researcher compares the pretest results with the posttest results for both samples. Any
divergence between the two samples is assumed to be a result of the experiment (a minimal analysis sketch
follows this list).
 Solomon four-group design - The sample is randomly divided into four groups. Two of the groups are
experimental samples. Two groups experience no experimental manipulation of variables. Two groups receive a
pretest and a posttest. Two groups receive only a posttest. This is an improvement over the classical design
because it controls for the effect of the pretest.
 Factorial design - This is similar to a classical design except that additional samples are used. Each
group is exposed to a different experimental manipulation.
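A minimal sketch of the classical pretest-posttest comparison, using made-up scores: the treatment effect is estimated from the divergence between the two samples' pre-to-post gains.

    # Compare the gain of the experimental sample against the control sample.
    pre_control = [60, 62, 58, 61]
    post_control = [63, 64, 60, 62]
    pre_experimental = [59, 61, 60, 62]
    post_experimental = [70, 73, 71, 74]

    def mean(xs):
        return sum(xs) / len(xs)

    gain_control = mean(post_control) - mean(pre_control)
    gain_experimental = mean(post_experimental) - mean(pre_experimental)

    print(f"control gain:      {gain_control:.2f}")
    print(f"experimental gain: {gain_experimental:.2f}")
    print(f"estimated treatment effect: {gain_experimental - gain_control:.2f}")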

In statistics, a full factorial experiment is an experiment whose design consists of two or more factors, each
with discrete possible values or "levels", and whose experimental units take on all possible combinations of
these levels across all such factors.
A full factorial design may also be called a fully crossed design. Such an experiment allows studying the
effect of each factor on the response variable, as well as the effects of interactions between factors on the
response variable.

For the vast majority of factorial experiments, each factor has only two levels. For example, with two factors
each taking two levels, a factorial experiment would have four treatment combinations in total, and is usually
called a 2×2 factorial design.
If the number of combinations in a full factorial design is too high to be logistically feasible, a fractional
factorial design may be done, in which some of the possible combinations (usually at least half) are omitted.

To save space, the points in a two-level factorial experiment are often abbreviated with strings of plus and
minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor:
conventionally, − for the first (or low) level, and + for the second (or high) level. The points in this experiment
can thus be represented as − − , + − , − + , and + + .
The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the
specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at
its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are
at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values.

2×2 factorial experiment

Run    A    B
(1)    −    −
a      +    −
b      −    +
ab     +    +
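The same enumeration can be generated programmatically. A minimal Python sketch that lists all runs of a two-level full factorial design with the conventional (1)/a/b/ab labels (the factor names are placeholders; add names to the list for larger designs):

    # Enumerate all treatment combinations of a 2-level full factorial design.
    from itertools import product

    factors = ["A", "B"]
    levels = [-1, +1]          # coded low and high levels

    for run in product(levels, repeat=len(factors)):
        label = "".join(
            name.lower() for name, level in zip(factors, run) if level == +1
        ) or "(1)"
        print(label, dict(zip(factors, run)))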

The design of a quasi-experiment relates to setting up a particular type of experiment or other study in
which one has little or no control over the allocation of the treatments or other factors being studied. The key
difference in this empirical approach is the lack of random assignment. Another element often involved
in this experimentation method is the use of time series analysis: interrupted and non-interrupted. Experiments
designed in this manner are referred to as having a quasi-experimental design.
The first part of creating a quasi-experimental design is to identify the variables. The quasi-independent
variable is the x-variable, the variable that is manipulated in order to affect a dependent variable. "X" is
generally a grouping variable with different levels. Grouping means two or more groups, such as a treatment
group and a placebo or control group (placebos are used mostly in medical or physiological experiments).
The predicted outcome is the dependent variable, which is the y-variable. In a time series analysis, the
dependent variable is observed over time for any changes that may take place (a minimal sketch follows the list
of designs below). Once the variables have been identified and defined, a procedure should then be implemented
and group differences should be examined.
There are several types of quasi-experimental designs, ranging from the simple to the complex, each having
different strengths, weaknesses and applications. These designs include (but are not limited to):
1. The one-group posttest only; 2. The one-group pretest-posttest; 3. The removed-treatment design;
4. The case-control design; 5. The non-equivalent control groups design; 6. The interrupted time-series design;
7. The regression discontinuity design.
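A minimal sketch of the interrupted time-series idea mentioned above: the dependent variable is observed over time, and the series is compared before and after the point where the treatment is introduced (the data are made-up and illustrative).

    # Compare the dependent variable before and after the interruption.
    series = [10, 11, 9, 10, 12, 11,    # observations before the treatment
              15, 16, 14, 15, 17, 16]   # observations after the treatment
    interruption = 6                    # index where the treatment starts

    before = series[:interruption]
    after = series[interruption:]

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"pre-interruption mean:  {mean(before):.2f}")
    print(f"post-interruption mean: {mean(after):.2f}")
    print(f"shift at interruption:  {mean(after) - mean(before):.2f}")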

Experiments Glossary
 Alias: When the estimate of an effect also includes the influence of one or more other effects (usually
high order interactions) the effects are said to be aliased (see confounding). For example, if the estimate of
effect D in a four-factor experiment actually estimates (D + ABC), then the main effect D is aliased with
the 3-way interaction ABC. Note: This causes no difficulty when the higher order interaction is either non-
existent or insignificant.
 Balanced design: An experimental design where all cells (i.e. treatment combinations) have the same
number of observations.
 Blocking: A schedule for conducting treatment combinations in an experimental study such that any
effects on the experimental results due to a known change in raw materials, operators, machines, etc.,
become concentrated in the levels of the blocking variable. Note: the reason for blocking is to isolate a
systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting
randomization.
 Center Points: Points at the center value of all factor ranges.
 Coding Factor Levels: Transforming the scale of measurement for a factor so that the high value
becomes +1 and the low value becomes -1. After coding all factors in a 2-level full factorial experiment,
the design matrix has all orthogonal columns. Coding is a simple linear transformation of the original
measurement scale. If the "high" value is Xh and the "low" value is XL (in the original scale), then the
scaling transformation takes any original X value and converts it to (X − a)/b, where a = (Xh + XL)/2
and b = (Xh − XL)/2. To go back to the original measurement scale, take the coded value, multiply it
by b, and add a: X = b × (coded value) + a. As an example, if the factor is temperature and the high
setting is 65°C and the low setting is 55°C, then a = (65 + 55)/2 = 60 and b = (65 − 55)/2 = 5. The center
point (where the coded value is 0) has a temperature of 5(0) + 60 = 60°C. (A small numeric sketch appears
at the end of this glossary.)
 Comparative design: A design aimed at making conclusions about one a priori important factor,
possibly in the presence of one or more other "nuisance" factors.
 Confounding: A confounding design is one where some treatment effects (main or interactions) are
estimated by the same linear combination of the experimental observations as some blocking effects. In
this case, the treatment effect and the blocking effect are said to be confounded. Confounding is also used
as a general term to indicate that the value of a main effect estimate comes from both the main effect itself
and also contamination or bias from higher order interactions. Note: Confounding designs naturally arise
when full factorial designs have to be run in blocks and the block size is smaller than the number of
different treatment combinations. They also occur whenever a fractional factorial design is chosen instead
of a full factorial design.
 Design: A set of experimental runs which allows you to fit a particular model and estimate your
desired effects.
 Design matrix: A matrix description of an experiment that is useful for constructing and analyzing
experiments.
 Effect: How changing the settings of a factor changes the response. The effect of a single factor is also
called a main effect. Note: For a factor A with two levels, scaled so that low = -1 and high = +1, the effect
of A is estimated by subtracting the average response when A = -1 from the average response when A = +1
and dividing the result by 2 (division by 2 is needed because the -1 level is 2 scaled units away from the
+1 level). (See the sketch at the end of this glossary.)
 Error: Unexplained variation in a collection of observations. See Errors and residuals in statistics.
Note: experimental designs typically require understanding of both random error and lack of fit error.
 Experimental unit: The entity to which a specific treatment combination is applied.
 Factors: Process inputs an investigator manipulates to cause a change in the output. Some factors
cannot be controlled by the experimenter but may affect the responses. If their effect is significant, these
uncontrolled factors should be measured and used in the data analysis. Note: The inputs can be discrete or
continuous.
 Crossed factors: Two factors are crossed if every level of one occurs with every level of the
other in the experiment.
 Nested factors: A factor "A" is nested within another factor "B" if the levels or values of "A"
are different for every level or value of "B". Note: Nested factors or effects have a hierarchical
relationship.
 Fixed effect: An effect associated with an input variable that has a limited number of levels or in
which only a limited number of levels are of interest to the experimenter.
 Interaction: Occurs when the effect of one factor on a response depends on the level of one or more other
factors.
 Lack of fit error: Error that occurs when the analysis omits one or more important terms or factors
from the process model. Note: Including replication in a designed experiment allows separation of
experimental error into its components: lack of fit and random (pure) error.
 Model: Mathematical relationship which relates changes in a given response to changes in one or more
factors.
 Orthogonality: Two vectors of the same length are orthogonal if the sum of the products of their
corresponding elements is 0. Note: An experimental design is orthogonal if the effects of any factor
balance out (sum to zero) across the effects of the other factors.
 Random effect: An effect associated with input variables chosen at random from a population having a
large or infinite number of possible values.
 Random error: Error that occurs due to natural variation in the process. Note: Random error is
typically assumed to be normally distributed with zero mean and a constant variance. Note: Random error
is also called experimental error.
 Randomization: A schedule for allocating treatment material and for conducting treatment
combinations in a designed experiment such that the conditions in one run neither depend on the
conditions of the previous run nor predict the conditions in the subsequent runs. Note: The importance of
randomization cannot be overstressed. Randomization is necessary for conclusions drawn from the
experiment to be correct, unambiguous and defensible.
 Replication: Performing the same treatment combination more than once. Note: Including replication
allows an estimate of the random error independent of any lack of fit error.
 Resolution: In fractional factorial designs, "resolution" describes the degree to which the estimated
main effects are aliased (or confounded) with estimated higher-order interactions (2-factor interactions,
3-factor interactions, etc.). In general, the resolution of a design is one more than the smallest order of
interaction which is aliased with some main effect. If some main effects are confounded with some 2-factor
interactions, the resolution is 3. Note: Full factorial designs have no confounding and are said to have
resolution "infinity". For most practical purposes, a resolution 5 design is excellent and a resolution 4
design may be adequate. Resolution 3 designs are useful as economical screening designs.
 Response(s): The output(s) of a process. Sometimes called dependent variable(s).
 Rotatability: A design is rotatable if the variance of the predicted response at any point x depends only
on the distance of x from the design center point. A design with this property can be rotated around its
center point without changing the prediction variance at x. Note: Rotatability is a desirable property for
response surface designs (i.e. quadratic model designs).
 Scaling factor levels: Transforming factor levels so that the high value becomes +1 and the low value
becomes -1.
 Screening design: A designed experiment that identifies which of many factors have a significant
effect on the response. Note: Typically screening designs have more than 5 factors.
 Test plan: A written document that gives a specific listing of the test procedures and sequence to be
followed.
 Treatment: A treatment is a specific combination of factor levels whose effect is to be compared with
other treatments.
 Treatment combination: The combination of the settings of several factors in a given experimental
trial. Also known as a run.
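A small numeric sketch tying together the "Coding Factor Levels" and "Effect" entries above, reusing the 65°C/55°C temperature example (the responses are made-up illustrative numbers):

    # Code a factor level to the -1/+1 scale and estimate a main effect.
    def code(x, high, low):
        a = (high + low) / 2            # midpoint of the original scale
        b = (high - low) / 2            # half-range of the original scale
        return (x - a) / b              # maps low -> -1, high -> +1

    print(code(65, 65, 55))             # 1.0  (high setting)
    print(code(55, 65, 55))             # -1.0 (low setting)
    print(code(60, 65, 55))             # 0.0  (center point)

    # Main effect of factor A, per the "Effect" entry: the average response
    # at A = +1 minus the average response at A = -1, divided by 2.
    responses_high = [21.0, 23.0]       # made-up responses with A at +1
    responses_low = [15.0, 17.0]        # made-up responses with A at -1

    def mean(xs):
        return sum(xs) / len(xs)

    effect_A = (mean(responses_high) - mean(responses_low)) / 2
    print(effect_A)                     # 3.0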
