Navidi Chapter 4: Commonly Used Distributions
Section 4.1:
The Bernoulli Distribution
We use the Bernoulli distribution when we have an
experiment which can result in one of two outcomes.
One outcome is labeled “success,” and the other
outcome is labeled “failure.”
Examples 1 and 2
1. The simplest Bernoulli trial is the toss of a coin.
The two outcomes are heads and tails. If we define
heads to be the success outcome, then p is the
probability that the coin comes up heads. For a fair
coin, p = 1/2.
2. Another Bernoulli trial is a selection of a
component from a population of components, some
of which are defective. If we define “success” to be
a defective component, then p is the proportion of
defective components in the population.
X ~ Bernoulli(p)
For any Bernoulli trial, we define a random variable X as follows: X = 1 if the experiment results in success, and X = 0 if it results in failure. It follows that P(X = 1) = p and P(X = 0) = 1 − p, and we write X ~ Bernoulli(p).
Mean and Variance
If X ~ Bernoulli(p), then
μ_X = 0(1 − p) + 1(p) = p
σ_X² = (0 − p)²(1 − p) + (1 − p)²(p) = p(1 − p)
Example 3
Section 4.2:
The Binomial Distribution
If a total of n Bernoulli trials are conducted, and
• the trials are independent,
• each trial has the same success probability p,
• X is the number of successes in the n trials,
then X has the binomial distribution with parameters n and p, denoted X ~ Bin(n, p).
Example 4
Another Use of the Binomial
Assume that a finite population contains items of two
types, successes and failures, and that a simple random
sample is drawn from the population. Then if the
sample size is no more than 5% of the population, the
binomial distribution may be used to model the number
of successes.
Example 5
Binomial R.V.:
pmf, mean, and variance
If X ~ Bin(n, p), the probability mass function of X is
p(x) = P(X = x) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x), x = 0, 1, …, n
p(x) = 0, otherwise
Mean: μ_X = np
Variance: σ_X² = np(1 − p)
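As a quick check of these formulas, here is a minimal Python sketch using only the standard library; the values n = 10 and p = 0.4 are illustrative, not from the slides:

```python
from math import comb

def binom_pmf(x, n, p):
    """Bin(n, p) probability mass function."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.4  # illustrative values
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]

mean = sum(x * pmf[x] for x in range(n + 1))
var = sum((x - mean)**2 * pmf[x] for x in range(n + 1))

print(mean, n * p)            # both ≈ 4.0
print(var, n * p * (1 - p))   # both ≈ 2.4
```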
Example 6
More on the Binomial
• Assume n independent Bernoulli trials are conducted.
• Each trial has probability of success p.
• Let Y1, …, Yn be defined as follows: Yi = 1 if the ith
trial results in success, and Yi = 0 otherwise. (Each of
the Yi has the Bernoulli(p) distribution.)
• Now, let X represent the number of successes among
the n trials. So, X = Y1 + …+ Yn .
Estimating p:
If X ~ Bin(n, p), the sample proportion p̂ = X/n is used to estimate the success probability p.
Note:
Bias is the difference μ_p̂ − p.
p̂ is unbiased.
The uncertainty in p̂ is
σ_p̂ = √(p(1 − p)/n).
In practice, when computing σ_p̂, we substitute p̂ for p, since p is unknown.
Example 7
In a sample of 100 newly manufactured automobile
tires, 7 are found to have minor flaws on the tread. If
four newly manufactured tires are selected at random
and installed on a car, estimate the probability that none
of the four tires have a flaw, and find the uncertainty in
this estimate.
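The estimate is p̂ = 7/100 = 0.07, and, treating the four tires as independent, the target probability is (1 − p)⁴. A sketch of the computation, assuming the propagation-of-error approximation σ_q ≈ |dq/dp|·σ_p̂ (from the uncertainty chapter, not restated in these slides):

```python
from math import sqrt

n, x = 100, 7
p_hat = x / n                       # estimated flaw probability: 0.07

# Estimate of P(none of 4 tires flawed) = (1 - p)^4
q_hat = (1 - p_hat)**4              # ≈ 0.748

# Uncertainty in p_hat: sqrt(p(1-p)/n), substituting p_hat for p
sigma_p = sqrt(p_hat * (1 - p_hat) / n)   # ≈ 0.0255

# Propagation of error: |dq/dp| = 4(1-p)^3
sigma_q = 4 * (1 - p_hat)**3 * sigma_p    # ≈ 0.082

print(q_hat, sigma_q)
```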
Section 4.3:
The Poisson Distribution
One way to think of the Poisson distribution is as
an approximation to the binomial distribution when n
is large and p is small.
It turns out that when n is large and p is small, the binomial mass function depends almost entirely on the mean np, and very little on the specific values of n and p.
We can therefore approximate the binomial mass function with one that depends on the single quantity λ = np; this λ is the parameter of the Poisson distribution.
Poisson R.V.:
pmf, mean, and variance
If X ~ Poisson(λ), the probability mass function of X is
p(x) = P(X = x) = e^(−λ) λ^x / x!, for x = 0, 1, 2, …
p(x) = 0, otherwise
Mean: μ_X = λ
Variance: σ_X² = λ
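A small Python sketch, using only the standard library, that evaluates this pmf and illustrates the binomial approximation; the values n = 1000 and p = 0.004 (so λ = 4) are illustrative:

```python
from math import comb, exp, factorial

def poisson_pmf(x, lam):
    return exp(-lam) * lam**x / factorial(x)

n, p = 1000, 0.004          # large n, small p (illustrative)
lam = n * p                 # λ = 4

for x in range(5):
    binom = comb(n, x) * p**x * (1 - p)**(n - x)
    print(x, binom, poisson_pmf(x, lam))   # the two columns agree closely
```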
Poisson Distribution to Estimate Rate
Let λ denote the mean number of events that occur in one unit of time or space. Let X denote the number of events observed in t units of time or space. If X ~ Poisson(λt), then λ is estimated with λ̂ = X/t.
Notes on Estimating a Rate
λ̂ is unbiased.
The uncertainty in λ̂ is
σ_λ̂ = √(λ/t).
In practice, we substitute λ̂ for λ when computing σ_λ̂, since λ is unknown.
Example 9
A suspension contains particles at an unknown
concentration of λ per mL. The suspension is
thoroughly agitated, and then 4 mL are withdrawn and
17 particles are counted. Estimate λ and find the
uncertainty in the estimate.
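A sketch of the computation, applying λ̂ = X/t and σ_λ̂ = √(λ̂/t) with t = 4 mL:

```python
from math import sqrt

count, volume = 17, 4.0        # particles counted in 4 mL

lam_hat = count / volume       # 4.25 particles per mL
sigma = sqrt(lam_hat / volume) # ≈ 1.03, substituting λ̂ for λ

print(lam_hat, sigma)
```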
Section 4.4:
Some Other Discrete Distributions
• Consider a finite population containing two types
of items, which may be called successes and
failures.
• A simple random sample is drawn from the
population.
• Each item sampled constitutes a Bernoulli trial.
• As each item is selected, the proportion of successes in the remaining population decreases or increases, depending on whether the sampled item was a success or a failure.
Hypergeometric
• For this reason the trials are not independent,
so the number of successes in the sample does
not follow a binomial distribution.
• The distribution that properly describes the
number of successes is the hypergeometric
distribution.
Hypergeometric pmf
Assume a finite population contains N items, of which R are
classified as successes and N – R are classified as failures.
Assume that n items are sampled from this population, and let X
represent the number of successes in the sample. Then X has a
hypergeometric distribution with parameters N, R, and n, which
can be denoted X ~ H(N, R, n). The probability mass function of
X is
p(x) = P(X = x) = C(R, x) C(N − R, n − x) / C(N, n), if max(0, R + n − N) ≤ x ≤ min(n, R)
p(x) = 0, otherwise
(Here C(a, b) = a!/(b!(a − b)!) denotes the number of combinations of a items taken b at a time.)
Mean and Variance
If X ~ H(N, R, n), then
Mean of X: μ_X = nR/N
Variance of X: σ_X² = n(R/N)(1 − R/N)((N − n)/(N − 1))
Example 10
Of 50 buildings in an industrial park, 12 have electrical
code violations. If 10 buildings are selected at random
for inspection, what is the probability that exactly 3 of
the 10 have code violations? What is the mean and
variance of X?
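A sketch of the solution in Python, using only the standard library; here X ~ H(50, 12, 10):

```python
from math import comb

N, R, n = 50, 12, 10           # population size, successes, sample size

def hyper_pmf(x):
    return comb(R, x) * comb(N - R, n - x) / comb(N, n)

p3 = hyper_pmf(3)                                 # ≈ 0.2703
mean = n * R / N                                  # 2.4
var = n * (R/N) * (1 - R/N) * (N - n) / (N - 1)   # ≈ 1.489

print(p3, mean, var)
```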
Geometric Distribution
• Assume that a sequence of independent Bernoulli
trials is conducted, each with the same probability of
success, p.
• Let X represent the number of trials up to and
including the first success.
• Then X is a discrete random variable, which is said to
have the geometric distribution with parameter p.
• We write X ~ Geom(p).
Geometric R.V.:
pmf, mean, and variance
If X ~ Geom(p), then
the pmf of X is
p(x) = P(X = x) = p(1 − p)^(x−1), x = 1, 2, …
p(x) = 0, otherwise.
The mean of X is μ_X = 1/p.
The variance of X is σ_X² = (1 − p)/p².
Example 11
A test of weld strength involves loading welded joints
until a fracture occurs. For a certain type of weld, 80%
of the fractures occur in the weld itself, while the other
20% occur in the beam. A number of welds are tested.
Let X be the number of tests up to and including the first
test that results in a beam fracture.
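The slide stops at the setup. With "success" defined as a beam fracture, X ~ Geom(0.2); one illustrative computation (not necessarily the textbook's question):

```python
p = 0.20                       # probability a fracture occurs in the beam

def geom_pmf(x, p):
    return p * (1 - p)**(x - 1)

print(geom_pmf(3, p))          # P(first beam fracture on test 3) = 0.128
print(1 / p)                   # mean number of tests: 5.0
print((1 - p) / p**2)          # variance: 20.0
```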
Negative Binomial Distribution
The negative binomial distribution is an extension of the
geometric distribution. Let r be a positive integer.
Assume that independent Bernoulli trials, each with
success probability p, are conducted, and let X denote
the number of trials up to and including the rth success.
Then X has the negative binomial distribution with
parameters r and p. We write X ~ NB(r,p).
The mean of X is μ_X = r/p.
The variance of X is σ_X² = r(1 − p)/p².
Example 11 cont.
Find the mean and variance of X, where X represents the
number of tests up to and including the third beam
fracture.
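Continuing the weld example with p = 0.2 and r = 3, a one-line check of these formulas:

```python
r, p = 3, 0.20                 # third beam fracture, p = 0.2

mean = r / p                   # 15.0 tests on average
var = r * (1 - p) / p**2       # 60.0

print(mean, var)
```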
Multinomial Trials
A Bernoulli trial is a process that results in one of two
possible outcomes. A generalization of the Bernoulli
trial is the multinomial trial, which is a process that
can result in any of k outcomes, where k ≥ 2.
Multinomial Distribution
• Now assume that n independent multinomial trials are
conducted each with k possible outcomes and with the
same probabilities p1,…,pk.
• Number the outcomes 1, 2, …, k. For each outcome i,
let Xi denote the number of trials that result in that
outcome.
• Then X1,…,Xk are discrete random variables.
• The collection X1,…,Xk is said to have the multinomial distribution with parameters n, p1,…,pk. We write X1,…,Xk ~ MN(n, p1,…,pk).
Multinomial R.V.
If X1,…,Xk ~ MN(n, p1,…,pk), then the pmf of X1,…,Xk is
p(x1,…,xk) = P(X1 = x1, …, Xk = xk) = [n!/(x1! x2! ⋯ xk!)] p1^x1 p2^x2 ⋯ pk^xk, for xi = 0, 1, …, n with x1 + ⋯ + xk = n
p(x1,…,xk) = 0, otherwise
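A small sketch evaluating this pmf with only the standard library; the counts and probabilities below are illustrative, not from the slides:

```python
from math import factorial

def multinomial_pmf(xs, ps):
    """MN(n, p1..pk) pmf at the vector of counts xs."""
    n = sum(xs)
    coef = factorial(n)
    for x in xs:
        coef //= factorial(x)   # each intermediate quotient is an integer
    prob = float(coef)
    for x, p in zip(xs, ps):
        prob *= p**x
    return prob

# Illustrative: n = 10 trials, three outcomes with probabilities 0.5, 0.3, 0.2
print(multinomial_pmf([5, 3, 2], [0.5, 0.3, 0.2]))  # ≈ 0.0851
```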
Normal R.V.:
pdf, mean, and variance
The probability density function of a normal population with mean μ and variance σ² is given by
f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)), −∞ < x < ∞
If X ~ N(μ, σ²), then the mean and variance of X are given by
μ_X = μ
σ_X² = σ²
68-95-99.7% Rule
For any normal population, about 68% of the values lie within one standard deviation of the mean, about 95% lie within two standard deviations, and about 99.7% lie within three standard deviations.
Standard Normal Distribution
In general, we convert to standard units by subtracting
the mean and dividing by the standard deviation. Thus,
if x is an item sampled from a normal population with mean μ and variance σ², the standard unit equivalent of x is the number z, where
z = (x − μ)/σ.
The number z is sometimes called the “z-score” of x.
The z-score is an item sampled from a normal population with mean 0 and standard deviation 1.
This normal distribution is called the standard normal
distribution.
Example 13
Aluminum sheets used to make beverage cans have thicknesses that are normally distributed with mean 10 and standard deviation 1.3, in units of thousandths of an inch. A particular sheet is 10.8 thousandths of an inch thick. Find the z-score.
Example 13 cont.
The thickness of a certain sheet has a z-score of -1.7.
Find the thickness of the sheet in the original units of
thousandths of inches.
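A minimal sketch of both parts of the example, converting to and from standard units:

```python
mu, sigma = 10.0, 1.3          # thousandths of an inch

# Part 1: z-score of a 10.8-thousandth sheet
z = (10.8 - mu) / sigma        # ≈ 0.62
print(z)

# Part 2: thickness whose z-score is -1.7, back in original units
x = mu + (-1.7) * sigma        # 7.79 thousandths of an inch
print(x)
```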
Finding Areas Under the Normal Curve
The proportion of a normal population that lies within a given
interval is equal to the area under the normal probability density
above that interval. This would suggest integrating the normal
pdf, but this integral does not have a closed form solution.
So, the areas under the curve are approximated numerically and
are available in Table A.2. This table provides area under the
curve for the standard normal density. We can convert any
normal into a standard normal so that we can compute areas
under the curve.
The table gives the area in the left-hand tail of the curve. Other
areas can be calculated by subtraction or by using the fact that the
total area under the curve is 1.
Example 14
Find the area under the normal curve to the left of z = 0.47.
Example 15
Find the area under the normal curve between z = 0.71
and z = 1.28.
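These table lookups can be reproduced with any statistics library; a sketch using scipy (an assumption, since the slides use Table A.2):

```python
from scipy.stats import norm

# Example 14: area to the left of z = 0.47
print(norm.cdf(0.47))                   # ≈ 0.6808

# Example 15: area between z = 0.71 and z = 1.28
print(norm.cdf(1.28) - norm.cdf(0.71))  # ≈ 0.8997 - 0.7611 = 0.1386
```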
Estimating the Parameters
If X1,…,Xn are a random sample from a N(μ, σ²) distribution, μ is estimated with the sample mean and σ² is estimated with the sample variance.
Linear Functions of Normal Random
Variables
Let X ~ N(μ, σ²) and let a ≠ 0 and b be constants.
Then aX + b ~ N(aμ + b, a²σ²).
Let X1, X2, …, Xn be independent, with Xi ~ N(μi, σi²), and let c1, c2, …, cn be constants. Then
c1X1 + c2X2 + … + cnXn ~ N(c1μ1 + c2μ2 + … + cnμn, c1²σ1² + c2²σ2² + … + cn²σn²)
Example 16
A chemist measures the temperature of a solution in °C. The measurement is denoted C, and is normally distributed with mean 40°C and standard deviation 1°C. The measurement is converted to °F by the equation F = 1.8C + 32. What is the distribution of F?
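By the rule above, F = 1.8C + 32 ~ N(1.8·40 + 32, 1.8²·1²) = N(104, 3.24). A brief simulation check, assuming numpy is available:

```python
import numpy as np

mu_F = 1.8 * 40 + 32           # 104 degrees F
var_F = 1.8**2 * 1**2          # 3.24

rng = np.random.default_rng(0)
C = rng.normal(40, 1, 100_000)
F = 1.8 * C + 32

print(mu_F, var_F)
print(F.mean(), F.var())       # ≈ 104 and ≈ 3.24
```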
Distributions of Functions of Normals
Let X1, X2, …, Xn be independent and normally distributed with mean μ and variance σ². Then
X̄ ~ N(μ, σ²/n).
Let X and Y be independent, with X ~ N(μ_X, σ_X²) and Y ~ N(μ_Y, σ_Y²). Then
X + Y ~ N(μ_X + μ_Y, σ_X² + σ_Y²)
X − Y ~ N(μ_X − μ_Y, σ_X² + σ_Y²)
Section 4.6:
The Lognormal Distribution
For data that contain outliers, the normal distribution is generally not appropriate. The lognormal distribution, which is related to the normal distribution, is often a good choice for these data sets. If X > 0 and ln X ~ N(μ, σ²), then X is said to be lognormal with parameters μ and σ², and its pdf is
f(x) = [1/(xσ√(2π))] exp[−(ln x − μ)²/(2σ²)], x > 0
f(x) = 0, otherwise
Section 4.7:
The Exponential Distribution
The exponential distribution is a continuous
distribution that is sometimes used to model the time
that elapses before an event occurs. Such a time is often
called a waiting time.
The probability density of the exponential distribution
involves a parameter, which is a positive constant λ
whose value determines the density function’s location
and shape.
We write X ~ Exp(λ).
Exponential R.V.:
pdf, cdf, mean and variance
The pdf of an exponential r.v. is
f(x) = λe^(−λx), x > 0
f(x) = 0, otherwise.
The cdf of an exponential r.v. is
F(x) = 0, x ≤ 0
F(x) = 1 − e^(−λx), x > 0.
The mean of an exponential r.v. is μ_X = 1/λ.
The variance of an exponential r.v. is σ_X² = 1/λ².
Lack of Memory Property
The exponential distribution has a property known as the lack of
memory property: If T ~ Exp(λ), and t and s are positive
numbers, then P(T > t + s | T > s) = P(T > t).
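A brief simulation sketch of the lack of memory property, assuming numpy is available; the values λ = 0.5, t = 2, and s = 3 are illustrative:

```python
import numpy as np

lam, t, s = 0.5, 2.0, 3.0
rng = np.random.default_rng(1)
T = rng.exponential(scale=1/lam, size=1_000_000)

p_uncond = (T > t).mean()            # P(T > t) = e^(-λt) ≈ 0.368
p_cond = (T[T > s] > t + s).mean()   # P(T > t+s | T > s)

print(p_uncond, p_cond)              # nearly equal, as the property predicts
```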
Section 4.8:
The Uniform, Gamma and Weibull
Distributions
The uniform distribution has two parameters, a and b,
with a < b. If X is a random variable with the
continuous uniform distribution then it is uniformly
distributed on the interval (a, b). We write X ~ U(a,b).
The pdf is
f(x) = 1/(b − a), a < x < b
f(x) = 0, otherwise
Mean and Variance
If X ~ U(a, b), then
μ_X = (a + b)/2
σ_X² = (b − a)²/12
The Gamma Distribution
First, let’s consider the gamma function:
For r > 0, the gamma function is defined by
Γ(r) = ∫₀^∞ t^(r−1) e^(−t) dt.
Gamma R.V.
If X1,…,Xr are independent random variables, each distributed as Exp(λ), then
the sum X1+…+Xr is distributed as a gamma random variable with parameters
r and λ, denoted as Γ(r, λ ).
Example 20
Assume that arrivals at a drive-through window follow a Poisson process with mean rate λ = 0.2 arrivals per minute. Let T be the waiting time until the third arrival.
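The slide gives only the setup. Since T is the sum of three Exp(0.2) waiting times, T ~ Γ(3, 0.2); one natural follow-up, sketched with scipy (an assumption, since the slides use tables):

```python
from scipy.stats import gamma

r, lam = 3, 0.2
T = gamma(a=r, scale=1/lam)    # Γ(3, 0.2): sum of three Exp(0.2) waits

print(T.mean())                # r/λ = 15 minutes
print(T.var())                 # r/λ² = 75
print(T.cdf(10))               # P(T ≤ 10) ≈ 0.323
```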
The Weibull Distribution
The Weibull distribution is a continuous random
variable that is used in a variety of situations. A
common application of the Weibull distribution is to
model the lifetimes of components. The Weibull
probability density function has two parameters, both
positive constants, that determine the location and
shape. We denote these parameters α and β.
Weibull R.V.
The pdf of the Weibull distribution is
f(x) = αβ^α x^(α−1) e^(−(βx)^α), x > 0
f(x) = 0, x ≤ 0.
The mean of the Weibull is
μ_X = (1/β) Γ(1 + 1/α).
The variance of the Weibull is
σ_X² = (1/β²){Γ(1 + 2/α) − [Γ(1 + 1/α)]²}.
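A quick numerical check of the mean and variance formulas using the standard library's gamma function; α = 2 and β = 0.1 are illustrative values:

```python
from math import gamma as G

alpha, beta = 2.0, 0.1         # illustrative shape and scale parameters

mean = (1/beta) * G(1 + 1/alpha)
var = (1/beta**2) * (G(1 + 2/alpha) - G(1 + 1/alpha)**2)

print(mean)                    # ≈ 8.862
print(var)                     # ≈ 21.46
```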
Section 4.9: Some Principles of Point
Estimation
• We collect data for the purpose of estimating some
numerical characteristic of the population from which
they come.
• A quantity calculated from the data is called a statistic, and a statistic that is used to estimate an unknown constant, or parameter, is called a point estimator. Once the data have been collected, the computed value of the estimator is called a point estimate.
Questions of Interest
• Given a point estimator, how do we determine how
good it is?
• What methods can be used to construct good point estimators?
Measuring the Goodness of an Estimator
The accuracy of an estimator is measured by its bias, and its precision is measured by its standard deviation, or uncertainty. Both are combined in the mean squared error, defined next.
Mean Squared Error
Let θ be a parameter, and θ̂ an estimator of θ. The mean squared error (MSE) of θ̂ is
MSE_θ̂ = (μ_θ̂ − θ)² + σ_θ̂².
Example 21
Let X ~ Bin(n, p) where p is unknown. Find the MSE of the estimator p̂ = X/n.
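A sketch of the solution, using the facts established for p̂ = X/n earlier in the chapter:
μ_p̂ = p, so the bias is μ_p̂ − p = 0.
σ_p̂² = p(1 − p)/n.
Therefore MSE_p̂ = 0² + p(1 − p)/n = p(1 − p)/n.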
Method of Maximum Likelihood
• The idea is to estimate a parameter with the value that
makes the observed data most likely.
• When a probability mass function or probability
density function is considered to be a function of the
parameters, it is called a likelihood function.
• The maximum likelihood estimate (MLE) is the value of the parameters that maximizes the likelihood function.
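As a small illustration of the idea (not an example from the slides): for an Exp(λ) sample the log-likelihood is n ln λ − λ Σxi, which is maximized at λ̂ = 1/x̄. A sketch, assuming numpy and scipy are available and using hypothetical data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([1.2, 0.4, 2.7, 0.9, 1.6])   # hypothetical Exp(λ) data

def neg_log_lik(lam):
    # negative log-likelihood of Exp(λ): -(n ln λ − λ Σ x_i)
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10), method="bounded")
print(res.x, 1 / x.mean())     # numeric maximizer agrees with λ̂ = 1/x̄
```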
Desirable Properties
Maximum likelihood is the most commonly used
method of estimation. The main reason for this is that
in most cases that arise in practice, MLEs have two very desirable properties:
1. In most cases, as the sample size n increases, the bias
of the MLE converges to 0.
2. In most cases, as the sample size n increases, the
variance of the MLE converges to a theoretical
minimum.
Section 4.10: Probability Plots
Scientists and engineers often work with data that can
be thought of as a random sample from some
population. In many cases, it is important to determine
the probability distribution that approximately describes
the population.
More often than not, the only way to determine an appropriate distribution is to examine the sample and find a distribution that fits.
Finding a Distribution
Probability plots are a good way to determine an appropriate distribution.
Here is the idea: Suppose we have a random sample X1,…,Xn. We first arrange the data in ascending order. Then we assign evenly spaced values between 0 and 1 to each Xi. There are several acceptable ways to do this; the simplest is to assign the value (i − 0.5)/n to Xi.
The distribution that we are comparing the X's to should have a mean and variance that match the sample mean and variance. We then plot (Xi, F(Xi)); if this plot resembles the cdf of the distribution that we are interested in, we conclude that the data came from that distribution.
Software
Many software packages take the value (i − 0.5)/n assigned to each Xi and calculate the quantile Qi corresponding to that value from the distribution of interest. They then plot each (Xi, Qi). If this plot is a reasonably straight line, you may conclude that the sample came from the distribution that was used to find the quantiles.
Normal Probability Plots
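The figure from this slide is not reproduced here. As a minimal sketch of how such a plot is built, assuming numpy, scipy, and matplotlib are available (the sample below is synthetic):

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.sort(np.random.default_rng(2).normal(10, 1.3, size=50))  # synthetic sample
n = len(x)

q = (np.arange(1, n + 1) - 0.5) / n                  # the values (i - 0.5)/n
Q = norm.ppf(q, loc=x.mean(), scale=x.std(ddof=1))   # matching normal quantiles

plt.plot(x, Q, "o")            # roughly straight line suggests a normal fit
plt.plot(x, x)                 # reference line
plt.xlabel("sample value")
plt.ylabel("normal quantile")
plt.show()
```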
Section 4.11:
The Central Limit Theorem
Let X1,…,Xn be a random sample from a population with mean μ and variance σ². Then, for large n, the sample mean X̄ is approximately normal with mean μ and variance σ²/n, and the sum Sn = X1 + … + Xn is approximately normal with mean nμ and variance nσ².
Rule of Thumb
For most populations, if the sample size is greater than 30, the Central Limit Theorem approximation is good.
Continuity Correction
• The binomial distribution is discrete, while the normal
distribution is continuous.
• The continuity correction is an adjustment, made when
approximating a discrete distribution with a
continuous one, that can improve the accuracy of the
approximation.
• If you want to include the endpoints in your probability calculation, extend each endpoint by 0.5, then proceed with the calculation.
• If you want to exclude the endpoints in your probability calculation, move each endpoint in by 0.5, then proceed with the calculation.
Example 22
The manufacturer of a certain part requires two different
machine operations. The time on machine 1 has mean
0.4 hours and standard deviation 0.1 hours. The time on
machine 2 has mean 0.45 hours and standard deviation
0.15 hours. The times needed on the machines are
independent. Suppose that 65 parts are manufactured.
What is the distribution of the total time on machine 1?
On machine 2? What is the probability that the total
time used by both machines together is between 50 and
55 hours?
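A sketch of the computation using the Central Limit Theorem, with scipy assumed for the normal areas:

```python
from math import sqrt
from scipy.stats import norm

n = 65
# Total time on machine 1: approximately N(65*0.4, 65*0.1^2) = N(26, 0.65)
mu1, var1 = n * 0.4, n * 0.1**2
# Total time on machine 2: approximately N(65*0.45, 65*0.15^2) = N(29.25, 1.4625)
mu2, var2 = n * 0.45, n * 0.15**2

# Combined total (the times are independent): N(55.25, 2.1125)
mu, sd = mu1 + mu2, sqrt(var1 + var2)

p = norm.cdf(55, mu, sd) - norm.cdf(50, mu, sd)
print(p)                        # ≈ 0.43
```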
Example 23
If a fair coin is tossed 100 times, use the normal curve
to approximate the probability that the number of heads
is between 45 and 55 inclusive.
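With the continuity correction, P(45 ≤ X ≤ 55) becomes the normal area between 44.5 and 55.5, where X is approximately N(50, 25). A sketch (scipy assumed):

```python
from scipy.stats import norm

n, p = 100, 0.5
mu, sd = n * p, (n * p * (1 - p))**0.5   # 50 and 5

# Continuity correction: include endpoints 45 and 55 by extending each by 0.5
prob = norm.cdf(55.5, mu, sd) - norm.cdf(44.5, mu, sd)
print(prob)                              # ≈ 0.729
```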
Section 4.12: Simulation
Simulation refers to the process of generating random numbers and treating them as if they were data generated by an actual scientific experiment. The data so generated are called simulated or synthetic data.
Example 24
An engineer has to choose between two types of cooling fans to
install in a computer. The lifetimes, in months, of fans of type A
are exponentially distributed with mean 50 months, and the
lifetime of fans of type B are exponentially distributed with mean
30 months. Since type A fans are more expensive, the engineer
decides that she will choose type A fans if the probability that a
type A fan will last more than twice as long as a type B fan is
greater than 0.5. Estimate this probability.
Simulation
• We perform a simulation experiment, using samples of size 1000.
• Generate a random sample A1*, A2*, …, A1000* from an exponential distribution with mean 50 (λ = 0.02).
• Generate a random sample B1*, B2*, …, B1000* from an exponential distribution with mean 30 (λ = 0.033).
• Count the number of times that Ai* > 2Bi*.
• Divide this count by the total number of trials. This is the estimate of the probability that a type A fan lasts more than twice as long as a type B fan. (A code sketch of this experiment follows below.)
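A minimal implementation of this experiment, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

A = rng.exponential(scale=50, size=n)   # type A lifetimes, mean 50 months
B = rng.exponential(scale=30, size=n)   # type B lifetimes, mean 30 months

p_hat = np.mean(A > 2 * B)              # proportion of trials with A > 2B
print(p_hat)                            # ≈ 0.45 (varies with the seed)
```

Since the estimate is below 0.5, the simulation suggests the engineer would not choose the type A fans.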
Summary
• We considered various discrete distributions: Bernoulli,
Binomial, Poisson, Hypergeometric, Geometric, Negative
Binomial, and Multinomial.
• Then we looked at some continuous distributions: Normal,
Exponential, Uniform, Gamma, and Weibull.
• We learned about the Central Limit Theorem.
• We discussed Normal approximations to the Binomial and
Poisson distributions.
• Finally, we discussed simulation studies.