
Design and Analysis of Experiment Theoretical

• Experiment: A planned course of action aimed at answering one or more


carefully framed questions. Also, it can be defined as a test or series of runs in
which purposeful changes are made to the input variables of a process or
system so that we may observe and identify the reasons for changes that may
be observed in the output response.
• Design: the pattern of experimentation, or the plan for performing the
experiment.
• Each experimental run is a test.
• Experimentation plays an important role in technology commercialization
and product realization activities, which consist of new product design and
formulation, manufacturing process development, and process improvement.
• The objective in many cases may be to develop a robust process, that is, a
process affected minimally by external sources of variability. There are also
many applications of designed experiments in a nonmanufacturing or non-
product-development setting, such as marketing, service operations, and
general business operations.
• Designed experiments are a key technology for innovation. Both breakthrough
innovation and incremental innovation activities can benefit from the
effective use of designed experiments.
• Quenching:

As an example of an experiment, suppose that a metallurgical engineer is


interested in studying the effect of two different hardening processes, oil
quenching and saltwater quenching, on an aluminum alloy. Here the objective
of the experimenter (the engineer) is to determine which quenching solution
produces the maximum hardness for this particular alloy. The engineer
decides to subject a number of alloy specimens or test coupons to each
quenching medium and measure the hardness of the specimens after
quenching. The average hardness of the specimens treated in each quenching
solution will be used to determine which solution is best.
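The quenching comparison can be sketched in a few lines of Python. The hardness readings below are purely illustrative (they are not data from the text); the point is only that the average hardness per medium decides which solution is "best":

```python
from statistics import mean

# Hypothetical Rockwell hardness readings per quenching medium
# (illustrative numbers, not data from the text)
oil = [76, 79, 77, 80, 78]    # oil-quenched specimens
salt = [81, 83, 80, 84, 82]   # saltwater-quenched specimens

# Average hardness per medium decides which quenching solution is "best"
oil_mean, salt_mean = mean(oil), mean(salt)
best = "saltwater" if salt_mean > oil_mean else "oil"
```

In a real study the engineer would follow this up with a formal two-sample test rather than comparing raw averages alone.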

• Experimentation is a vital part of the scientific (or engineering) method.


• Now there are certainly situations where the scientific phenomena are so well
understood that useful results including mathematical models can be devel-
oped directly by applying these well-understood principles. The models of
such phenomena that follow directly from the physical mechanism are usually
called mechanistic models.
• A simple example is the familiar equation for current flow in an electrical
circuit, Ohm’s law, E = IR. However, most problems in science and
engineering require observation of the system at work and experimentation
to elucidate information about why and how it works. Well-designed
experiments can often lead to a model of system performance; such
experimentally determined models are called empirical models. These
empirical models can be manipulated by a scientist or an engineer just as a
mechanistic model can.

• In general, experiments are used to study the performance of processes and


systems. We can usually visualize the process as a combination of operations,
machines, methods, people, and other resources that transforms some input
(often a material) into an output that has one or more observable response
variables. Some of the process variables and material properties x1, x2, …,
xp are controllable, whereas other variables such as environmental factors or
some material properties z1, z2, …, zq are uncontrollable (although they
may be controllable for purposes of a test). The objectives of the experiment
may include the following:
- Determining which variables are most influential on the response y
- Determining where to set the influential x’s so that y is almost always near
the desired nominal value
- Determining where to set the influential x’s so that variability in y is small
- Determining where to set the influential x’s so that the effects of the
uncontrollable variables z1, z2, …, zq are minimized.

• Usually, an objective of the experimenter is to determine the influence that


these factors have on the output response of the system. The general approach
to planning and conducting the experiment is called the strategy of
experimentation.
• Objectives include:

1. What are the influential variables?

2. Where to set the influential variables such that Y is on target?

3. Where to set the influential variables so as to minimize the variability in Y?

4. Where to set the influential variables so as to minimize the effect of the


uncontrollable variables on Y?

• Statistically Designed Experiments:

A restricted kind of experiment, in which the engineer (experimenter):

- Selects certain factors to study


- Deliberately varies those factors, in a controlled fashion, then
- Uses statistical methods to make ‘objective’ conclusions regarding their
effect.

• The steps of the experiment are as follows:


1) State the problem: clear, unbiased and of practical consequence.
2) Choose the factors:

Control Variables: influential, controllable factors that can be measured.

Nuisance Factors: factors that probably have some effect on the response,
but are not that important to the experimenter. They may be:

- Unknown and uncontrollable
- Known but uncontrollable. Ex: environmental humidity.
- Known and controllable.

- Randomization: choosing random samples helps reduce the effect of unknown

and uncontrollable nuisance factors. It pertains to the allocation of the
experimental units and the order in which the tests (runs) are performed,
and it is a key requirement of statistical methods.
- Replication: an independent repeat run of each factor combination. It helps
the experimenter estimate the experimental error, and the sample mean of the
replicates is used to estimate the population mean. Basically, a repetition
of the experiment; it mainly gives more precise estimates of the
experiment’s parameters.

• What is the difference between replication and repeated measurements?

- Replication repeats the entire experiment (a new independent run for each

factor combination), while repeated measurements remeasure the same
experimental run several times.

- Blocking: a design technique in which we run the experiment in blocks,

used to improve the precision with which comparisons among the factors of
interest are made. Often blocking is used to reduce or eliminate the
variability transmitted from known and controllable nuisance factors.
- Hold constant: we do the whole experiment at one level of the nuisance
variable.

3) Select the response (s):


- Ability to capture a quality or quantity of interest.
- Can be obtained using Non-destructive testing.
- Continuous variable.
- Constant variance.

4) Choose the design


5) Perform the experiment
6) Statistical data analysis
7) Conclusions and recommendations.

• General requirements in order to conduct an experiment:

- Use your engineering knowledge,


- Keep the design simple,
- Recognize the difference between statistical and practical significance,
- Allow for iterative experimentation.

• Experimental design and its importance:

- Improve process yield.


- Reduce variability, closer conformance to nominal/ target requirement.
- Reduce development time.
- Reduce overall cost.
- Evaluate/ compare design configuration.
- Evaluate material alternatives
- Robust product
- Key product design parameters that impact the performance.
- Formulation of new product.

• Simple comparative experiment: an experiment that compares two conditions,

sometimes called two treatments.

• Statistical significance: the hypothesis test indicates an effect beyond

what chance alone would explain; useful for building a theoretical
foundation for other statistical work.
• Practical significance: whether the result of the hypothesis test is of
great importance in the particular real-world application.
• Point estimation: single value estimate of a parameter.
ȳ = (1/n) · Σᵢ₌₁ⁿ yᵢ = µ̂

• Interval estimation: range of values where the parameter’s expected to lie.

P(θ̂L ≤ θ ≤ θ̂U) = 1 − α
Where,

θ̂L is the lower confidence limit, constructed such that (α/2)·100% of the

sampling distribution of the estimator lies to its left.

θ̂U is the upper confidence limit, constructed such that (α/2)·100% of the

sampling distribution of the estimator lies to its right.

(1 − α) is the confidence level of the interval, with area α/2 in each tail.

{α = 0.05 for most practical applications}

• Confidence level: the probability that an interval constructed from the

sample actually contains the population parameter.

Note that the higher the confidence level, the wider the interval.
• Increasing sample size; decreases standard error, gives narrower interval.
(Estimate population proportion more precisely with a large sample).
• Estimating the population mean µ (σ is the population standard deviation):

Case 1: σ is known (or n ≥ 30). Use the statistic Z:

P(ȳ − z_{α/2}·σ/√n ≤ µ ≤ ȳ + z_{α/2}·σ/√n) = 1 − α

Zc = (ȳ − µo) / (σ/√n)

Case 2: σ is unknown and n < 30. Use the statistic t:

P(ȳ − t_{α/2}·s/√n ≤ µ ≤ ȳ + t_{α/2}·s/√n) = 1 − α

t_{α/2,ν} is the tabulated value of the t statistic with ν = n − 1 df,
corresponding to a tail area of (α/2).

Population standard deviation: use the estimate of the variance, S², and the
χ² (chi-square) statistic:

P((n − 1)S²/χ²_{α/2} ≤ σ² ≤ (n − 1)S²/χ²_{1−α/2}) = 1 − α

χ² = (n − 1)S² / σo²
How would you achieve a shorter interval for estimating the mean life?
Increase sample size
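Case 1 above (σ known) can be sketched with the standard library; the numbers here are hypothetical, and `NormalDist().inv_cdf` supplies z_{α/2}:

```python
from math import sqrt
from statistics import NormalDist

# 95% CI for the mean when sigma is known (Case 1); hypothetical numbers
alpha = 0.05
y_bar, sigma, n = 50.0, 2.0, 36
z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}, about 1.96
half_width = z * sigma / sqrt(n)          # z_{alpha/2} * sigma / sqrt(n)
lower, upper = y_bar - half_width, y_bar + half_width
# For a shorter interval, increase n: half_width shrinks like 1/sqrt(n)
```

Rerunning with n = 144 instead of 36 halves the interval width, which is the "increase sample size" answer in concrete form.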

• If the confidence interval for a difference contains 0, the difference is
not significant, but if 0 lies outside the interval then there is a
significant difference.
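The rule that a difference is significant only when its interval excludes 0 can be written as a tiny helper (the interval endpoints below are hypothetical):

```python
# Significance check for a CI on a difference (hypothetical endpoints)
def significant(lower: float, upper: float) -> bool:
    # 0 inside the interval -> difference is NOT significant
    return not (lower <= 0.0 <= upper)

a = significant(-0.4, 1.2)   # 0 lies inside -> not significant
b = significant(0.3, 1.9)    # 0 excluded -> significant
```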

• Hypothesis testing: using sample evidence to reject or accept a claim


regarding the value of a population parameter.

- We are to make a decision based on sample data to either reject or accept a


claim.

- Type I error (): conditional probability of rejecting Ho when Ho is true.


- Type II error (): conditional probability of accepting Ho when it’s false.
- The power of the test (1-): is the probability of rejecting Ho, when Ho is
false.

Elements of the test:

• The null hypothesis, H0


• The alternative hypothesis, H1
• The test statistic
• Rejection region

Also summarized in the thumb rules:

1. The equal sign is always with Ho.


2. To know which tail, look at H1. If H1 has ≠ then it’s two-tailed, if H1
has > then it’s upper-tailed, and if H1 has < then it’s lower-tailed.
3. Calculate the test statistic and put on the curve, if it’s in the acceptance
region then accept the Ho and if it’s in the rejection region then reject the
Ho.

Lower-tailed:

Ho: θ ≥ θo Vs. H1: θ < θo

(rejection region of area α in the lower tail, below θL; accept otherwise)

Upper-tailed:

Ho: θ ≤ θo Vs. H1: θ > θo

(rejection region of area α in the upper tail, above θU; accept otherwise)

A two-tailed (two-directional) test:

Ho: θ = θo Vs. H1: θ ≠ θo

(rejection regions of area α/2 in each tail, below θL and above θU)

4. A subscript o (as in θo) denotes the hypothesized value on the right-hand
side of the hypothesis.
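The thumb rules above can be sketched for an upper-tailed Z test; every number below is hypothetical, and the rejection boundary comes from `NormalDist().inv_cdf`:

```python
from math import sqrt
from statistics import NormalDist

# Upper-tailed test Ho: mu <= mu0 vs H1: mu > mu0 at alpha = 0.05
# (all numbers hypothetical)
mu0, y_bar, sigma, n, alpha = 100.0, 103.0, 8.0, 25, 0.05
z_c = (y_bar - mu0) / (sigma / sqrt(n))    # test statistic "put on the curve"
z_crit = NormalDist().inv_cdf(1 - alpha)   # boundary of the rejection region
reject_h0 = z_c > z_crit                   # in rejection region -> reject Ho
```

Here z_c = 1.875 exceeds z_crit ≈ 1.645, so by rule 3 the statistic falls in the rejection region and Ho is rejected.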

For two samples:


• D is the difference between paired (dependent) observations.
• If the F interval (for the ratio of the variances) has 1 in it, then the
difference is not significant, but when 1 is not included then the
difference is significant.
• If it is not stated whether the variances are equal, test for equality of
variances first.
• P-value is the smallest level of significance at which Ho can be rejected.


• If the level of significance α is smaller than the p-value, Ho will be
accepted.


• The decision criterion is to reject Ho if α > p-value.
• When the p-value is greater than α, Ho is accepted.
• When the p-value is lower than α, Ho is rejected.
• In terms of Z statistic:

Upper Tailed:

p-value = P(Z>Zc)

Lower Tailed:

p-value = P(Z<Zc)

Two Tailed:

p-value = 2·P(Z > |Zc|)
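All three cases reduce to evaluating the standard normal CDF at the computed statistic (the Zc below is hypothetical):

```python
from statistics import NormalDist

z_c = 1.875                        # hypothetical computed Z statistic
cdf = NormalDist().cdf
p_upper = 1 - cdf(z_c)             # upper-tailed: P(Z > Zc)
p_lower = cdf(z_c)                 # lower-tailed: P(Z < Zc)
p_two = 2 * (1 - cdf(abs(z_c)))    # two-tailed: 2 * P(Z > |Zc|)
```

For z_c = 1.875 the upper-tailed p-value is about 0.030, so Ho would be rejected at α = 0.05 but accepted at α = 0.01.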

• Using the p-value allows reporting test results while leaving the selection
of (α) to the decision maker.
• Normal Probability Plot:

- To construct an NPP, the observations in the sample are first ranked from
smallest to largest. That is, the sample y1, y2, …, yn is arranged as y(1),
y(2), …, y(n), where y(1) is the smallest observation, y(2) is the second
smallest observation, and so forth, with y(n) the largest. The ordered
observations y(j) are then plotted against their observed cumulative
frequency 100(j − 0.5)/n. The cumulative frequency scale has been arranged
so that if the points fall on a straight line (approximately), the data is
normal (or approximately normal).
- While solving, just comment if there are one or two outlier points (the
only ones far from the line) and continue solving.
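The ranking and plotting-position step can be sketched directly from the formula 100(j − 0.5)/n; the sample below is hypothetical:

```python
# Plotting positions for a normal probability plot (hypothetical sample)
sample = [4.2, 3.1, 5.0, 3.8, 4.6]
n = len(sample)
ordered = sorted(sample)   # y(1) <= y(2) <= ... <= y(n)
positions = [100 * (j - 0.5) / n for j in range(1, n + 1)]
# Plot ordered[j-1] against positions[j-1]; if the points fall near a
# straight line, the data is (approximately) normal
```

For n = 5 the positions are 10, 30, 50, 70, and 90 percent, so the points spread evenly across the cumulative-frequency scale.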
