2016 Design of Experiments
METHODS
TEGT3762
Comparative
Screening/Characterizing
Modeling
Optimizing
5
Comparative
assessing whether a change in a single factor
has in fact resulted in a change/improvement
to the process as a whole.
choosing between alternatives, with narrow
scope
suitable for an initial comparison
6
Screening/characterisation
understanding the process as a whole: the experimenter wishes (after design and analysis) to have in hand a list of the factors affecting the process, ranked from most important to least important (considering multiple factors).
y = mx + c
8
Optimization/Response surface
interested in determining optimal settings of
the process factors;
9
Optimization
hit a target
maximize or minimize a response
reduce variation by locating a region where
the process is easier to manage
RESEARCH DESIGN
After identifying the objective listed above
that corresponds most closely to your specific
goal, you can:
Proceed to select the experimental factors
Select the appropriate design
Process variables include both inputs and
outputs - i.e., factors and responses
Include all important factors (based on
engineering judgment).
13
A designed experiment
is an investigation in which a specified framework
is provided in order to observe, measure and
evaluate groups/treatments with respect to a
designated response
31
Blocking
Blocking is one way of controlling extraneous
conditions
Blocking can be defined as the grouping of experimental units into comparable homogeneous groups, with each group representing a level of the blocking variable. Units within a block are as similar as possible, but differ significantly from units in other blocks.
Blocking factors can be pH, material type, moisture
level, gradient, age, time of experimentation, weight,
position, etc.
32
The experimental units are placed into groups based on
their similarity with respect to a characteristic that may
affect the response
Units in a block are considered homogeneous
Treatments are randomly assigned separately within each
block
In a randomised complete block design (RCBD), the number of experimental units in each block equals the number of treatments being compared
The alternative is a randomised incomplete block design (RIBD)
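The within-block randomisation of an RCBD can be sketched in a few lines; the treatment and block names below are hypothetical:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

treatments = ["chlorine", "metabisulphate", "control"]  # hypothetical treatments
blocks = ["batch 1", "batch 2", "batch 3", "batch 4"]   # hypothetical blocking variable

# RCBD: each block holds exactly one experimental unit per treatment,
# and the treatments are randomised independently within every block.
design = {block: random.sample(treatments, k=len(treatments)) for block in blocks}

for block, order in design.items():
    print(block, "->", order)
```

In a randomised incomplete block design the blocks hold fewer units than there are treatments, so the sample drawn per block would be a subset rather than a full permutation.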
Strategy of Experimentation
Best guess approach (trial and error)
can continue indefinitely
cannot guarantee best solution has been found
One-factor-at-a-time (OFAT) approach
inefficient (requires many test runs)
fails to consider any possible interaction between factors
Factorial approach (invented in the 1920s)
Factors varied together
Correct, modern, and most efficient approach
Can determine how factors interact
Used extensively in industrial R&D and for process improvement.
34
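The "factors varied together" idea is simply the enumeration of every level combination; a run list for a hypothetical 2^3 factorial can be built as:

```python
from itertools import product

# Hypothetical two-level factors for a 2^3 factorial design
factors = {
    "temperature": ["low", "high"],
    "pressure":    ["low", "high"],
    "catalyst":    ["A", "B"],
}

# One run per combination of factor levels: 2 * 2 * 2 = 8 runs
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(runs))  # 8
```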
Statistical Design of Experiments
All experiments should be designed experiments
Unfortunately, some experiments are poorly designed - valuable resources are used ineffectively and the results are inconclusive
Statistically designed experiments permit
efficiency and economy, and the use of statistical
methods in examining the data result in scientific
objectivity when drawing conclusions.
35
DOE is a methodology for systematically
applying statistics to experimentation.
DOE lets experimenters develop a mathematical
model that predicts how input variables interact
to create output variables or responses in a
process or system.
DOE can be used for a wide range of
experiments for various purposes including
nearly all fields of engineering.
Use of statistics is very important in DOE and
the basics are covered in a first course in an
engineering program.
36
In general, by using DOE, we can:
Learn about the process we are investigating
Screen important variables
Build a mathematical model
Obtain prediction equations
Optimize the response (if required)
37
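The "build a mathematical model" and "obtain prediction equations" steps can be sketched for a 2^2 design in coded -1/+1 units, where the model y = b0 + b1*x1 + b2*x2 + b12*x1*x2 is fitted by simple averaging because the design columns are orthogonal. The response values here are invented for illustration:

```python
# Coded 2^2 design: the four corner runs (x1, x2) and hypothetical responses
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
y = [60.0, 70.0, 72.0, 65.0]

n = len(runs)
b0 = sum(y) / n
b1 = sum(x1 * yi for (x1, _), yi in zip(runs, y)) / n
b2 = sum(x2 * yi for (_, x2), yi in zip(runs, y)) / n
b12 = sum(x1 * x2 * yi for (x1, x2), yi in zip(runs, y)) / n

def predict(x1, x2):
    """Prediction equation in coded units."""
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

# Four runs and four coefficients: the model reproduces every corner exactly
print(predict(-1, -1))  # 60.0
```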
Applications of DOE in Engineering Design
Experiments are conducted in the field of
engineering to:
evaluate and compare basic design configurations
evaluate different materials
select design parameters so that the design will work
well under a wide variety of field conditions (robust
design)
determine key design parameters that impact
performance
38
INPUTS (Factors, X variables): people, materials, policies, procedures, methods
PROCESS: a blending of inputs which generates corresponding outputs
OUTPUTS (Responses, Y variables): responses related to producing a product; responses related to completing a task
39
INPUTS (Factors, X variables): type of cement, percent water, type of additives, percent additives, mixing time, curing conditions
PROCESS: discovering the optimal concrete mixture
OUTPUTS (Responses, Y variables): compressive strength, modulus of elasticity, modulus of rupture, Poisson's ratio
40
INPUTS (Factors, X variables): type of raw material, mold temperature, holding time, gate size, screw speed
PROCESS: manufacturing injection molded parts
OUTPUTS (Responses, Y variables): % shrinkage from mold size, number of defective parts
41
INPUTS (Factors, X variables): impermeable layer (mm), soil moisture capacity (mm), initial soil moisture (mm)
PROCESS: model calibration
OUTPUTS (Responses, Y variables)
42
Basic Principles
43
Every experiment involves a sequence of
activities:
Conjecture - hypothesis that motivates the
experiment
Experiment - the test performed to investigate
the conjecture
Analysis - the statistical analysis of the data
from the experiment
Conclusion - what has been learned about the
original conjecture from the experiment.
44
Three basic principles of Statistical DOE
Replication
allows an estimate of experimental error
allows for a more precise estimate of the sample mean
value
Randomization
cornerstone of all statistical methods
average out effects of extraneous factors
reduce bias and systematic errors
Blocking
increases precision of experiment
factor out variables not studied
45
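Replication and randomisation together can be sketched as building a replicated run list and then letting chance fix the run order (the factors and levels are illustrative):

```python
import random

random.seed(7)  # fixed seed for a reproducible illustration

# Two replicates of a 2^2 design: replication gives an estimate of error
base_runs = [(a, b) for a in ("low", "high") for b in ("low", "high")]
runs = base_runs * 2  # 4 combinations x 2 replicates = 8 runs

# Randomisation: the run order is decided by chance, averaging out
# drift, warm-up effects and other extraneous factors over time
random.shuffle(runs)

for i, (a, b) in enumerate(runs, start=1):
    print(f"run {i}: A={a}, B={b}")
```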
Guidelines for Designing Experiments
Recognition of and statement of the problem
need to develop all ideas about the objectives of the
experiment - get input from everybody - use team
approach.
Choice of factors, levels, ranges, and response
variables.
Need to use engineering judgment or prior test results.
Choice of experimental design
sample size, replicates, run order, randomization,
software to use, design of data collection forms.
46
Performing the experiment
vital to monitor the process carefully. Easy to underestimate logistical and planning aspects in a complex R&D environment.
Statistical analysis of data
provides objective conclusions - use simple graphics
whenever possible.
Conclusion and recommendations
follow-up test runs and confirmation testing to validate
the conclusions from the experiment.
Do we need to add or drop factors, change ranges or levels, add new responses, etc.?
47
Using Statistical Techniques in
Experimentation - things to keep in mind
Use non-statistical knowledge of the problem
physical laws, background knowledge
Keep the design and analysis as simple as possible
Don't use complex, sophisticated statistical techniques
If design is good, analysis is relatively straightforward
If design is bad - even the most complex and elegant
statistics cannot save the situation
Recognize the difference between practical and
statistical significance
statistical significance ≠ practical significance
48
Experiments are usually iterative
unwise to design a comprehensive experiment at the
start of the study
may need modification of factor levels, factors, responses, etc. - too early to know whether the experiment would work
use a sequential or iterative approach
should not invest more than 25% of resources in the
initial design.
Use initial design as learning experiences to accomplish
the final objectives of the experiment.
49
Factorial vs. OFAT
Factorial design - experimental trials or runs are
performed at all possible combinations of factor
levels in contrast to OFAT experiments.
50
The ability to gain competitive advantage requires
extreme care in the design and conduct of
experiments. Special attention must be paid to joint
effects and estimates of variability that are provided
by factorial experiments.
51
Factorial Designs
In a factorial experiment, all
possible combinations of
factor levels are tested
The golf experiment:
Type of driver (oversized or regular)
Type of ball (balata or 3-piece)
Walking vs. riding a cart
Type of beverage (Beer vs water)
Time of round (am or pm)
Weather
Type of golf spike
Etc, etc, etc
52
One-factor-at-a-time experiments (OFAT)
OFAT experiments are regarded as easier to implement,
more easily understood, and more economical than
factorial experiments. Better than trial and error.
OFAT experiments are believed to provide the optimum
combinations of the factor levels.
Unfortunately, each of these presumptions can generally
be shown to be false except under very special
circumstances.
The key reasons why OFAT should not be conducted
except under very special circumstances are:
Do not provide adequate information on interactions
Do not provide efficient estimates of the effects
53
Approaches to Experimentation: One-Factor-at-a-Time
One-factor-at-a-time
procedure (2 level example)
run all factors at one condition
repeat, changing condition of one factor
continuing to hold that factor at that condition, rerun with another
factor at its second condition
repeat until all factors at their optimum conditions
slow, expensive: many tests
can miss interactions!
54
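Why OFAT "can miss interactions" can be shown on a toy yield table over two factors; all the numbers are made up:

```python
# Hypothetical yield (%) at the four (temperature, pressure) corners;
# the interaction makes the best temperature depend on the pressure.
yield_pct = {("low", "low"): 60, ("high", "low"): 70,
             ("low", "high"): 72, ("high", "high"): 65}

# OFAT: hold pressure at "low" and pick the better temperature ...
best_temp = max(("low", "high"), key=lambda t: yield_pct[(t, "low")])
# ... then hold that temperature and pick the better pressure.
best_pres = max(("low", "high"), key=lambda p: yield_pct[(best_temp, p)])
ofat_best = yield_pct[(best_temp, best_pres)]  # stuck at (high, low)

# The factorial design runs all four corners, so it sees the true optimum.
factorial_best = max(yield_pct.values())  # at (low, high), never tested by OFAT

print(ofat_best, factorial_best)  # 70 72
```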
One-Factor-At-A-Time
Process: Yield = f(temperature, pressure)
[Figure: yield contours (30%, 50%) over factors A and B (low/high), comparing the corner placement of factorial runs with the axis-by-axis placement of OFAT runs.]
58
FACTORIAL (2^k) DESIGNS (k = 2): GRAPHICAL OUTPUT
Neither factor A nor Factor B have an effect on
the response variable.
59
FACTORIAL (2^k) DESIGNS (k = 2): GRAPHICAL OUTPUT
Factor A has an effect on the response
variable, but Factor B does not.
60
FACTORIAL (2^k) DESIGNS (k = 2): GRAPHICAL OUTPUT
Factor A and Factor B have an effect on the
response variable.
61
FACTORIAL (2^k) DESIGNS (k = 2): GRAPHICAL OUTPUT
Factor B has an effect on the response variable, but only if factor A is set at the High level. This is called interaction: the effect one factor has on the response depends on the levels at which the other factors are set. Interactions can cause major problems in a DOE if you fail to account for them when designing your experiment.
62
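The interaction just described can be quantified directly from the four corner responses of a 2^2 design; the numbers below are hypothetical:

```python
# Hypothetical 2^2 corner responses keyed by the (A, B) level signs
y = {("-", "-"): 20, ("+", "-"): 40,
     ("-", "+"): 30, ("+", "+"): 12}

# Effect of B computed separately at each level of A
b_effect_A_low = y[("-", "+")] - y[("-", "-")]    # +10: B helps when A is low
b_effect_A_high = y[("+", "+")] - y[("+", "-")]   # -28: B hurts when A is high

# The AB interaction effect is half the difference between the two
ab_interaction = (b_effect_A_high - b_effect_A_low) / 2

print(b_effect_A_low, b_effect_A_high, ab_interaction)  # 10 -28 -19.0
```

When the two conditional effects of B agree, the interaction is zero and the factors act additively; a large difference, as here, means B cannot be interpreted without stating the level of A.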
Design of Engineering Experiments
Basic Statistical Concepts
Simple comparative experiments
The hypothesis testing framework
The two-sample t-test
Checking assumptions, validity
Comparing more than two factor levels - the analysis of variance
ANOVA decomposition of total variability
Statistical testing & analysis
Checking assumptions, model validity
Post-ANOVA testing of means
63
Example
Formulation of a cement mortar
Original formulation and modified formulation
10 samples for each formulation
One factor: formulation
Two formulations:
two treatments
two levels of the factor formulation
64
Portland Cement Formulation
Observation (sample) j | Modified Mortar (Formulation 1) y1j | Unmodified Mortar (Formulation 2) y2j
1  | 16.85 | 17.50
2  | 16.40 | 17.63
3  | 17.21 | 18.25
4  | 16.35 | 18.00
5  | 16.52 | 17.86
6  | 17.04 | 17.75
7  | 16.96 | 18.22
8  | 17.15 | 17.90
9  | 16.59 | 17.96
10 | 16.57 | 18.15
65
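A pooled two-sample t statistic for the two formulations can be computed from the table using only the standard library; this sketch assumes equal variances and stops short of the p-value:

```python
from math import sqrt
from statistics import mean, variance

# Data from the Portland cement table above
form1 = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57]
form2 = [17.50, 17.63, 18.25, 18.00, 17.86, 17.75, 18.22, 17.90, 17.96, 18.15]

n1, n2 = len(form1), len(form2)

# Pooled sample variance (equal-variance assumption)
sp2 = ((n1 - 1) * variance(form1) + (n2 - 1) * variance(form2)) / (n1 + n2 - 2)

# Two-sample t statistic with n1 + n2 - 2 degrees of freedom
t0 = (mean(form1) - mean(form2)) / sqrt(sp2 * (1 / n1 + 1 / n2))

print(round(t0, 2))  # a large negative value: the modified mortar has lower mean strength
```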
Basic Statistical Concepts
66
Graphical View of the Data
Dot Diagram
[Figure: dotplots of Form 1 and Form 2, means indicated by lines; response scale roughly 16.3 to 18.3.]
68
Box Plots
[Figure: boxplots of Form 1 and Form 2, means indicated by solid circles; response scale roughly 16.5 to 18.5.]
69
Histogram
70
Two-level design experiments
The most popular experimental designs are two-level designs.
Why only two levels?
There are a number of good reasons why two is the
most common choice amongst engineers:
it is ideal for screening designs,
simple and economical;
it also gives most of the information required to go on to a multilevel response surface experiment, if one is needed.
Example
Two factors: results in 4 distinct treatments
(see table)
Bottle type:
glass
plastic
Chemical for disinfecting
- Chlorine
- metabisulphate
[Table: candidate designs for the bottle-type factor; row 1: 1-factor completely randomized design.]