Negative Exponential Distribution

This document discusses probability distributions, probability density functions, and exponential distributions. It provides definitions and properties of cumulative distribution functions, probability density functions, and independent random variables. It then discusses the negative exponential or exponential distribution, including its probability density function and cumulative distribution function. It provides examples of using exponential distributions to model the time until failure of independent systems and subsystems. It determines the distribution of the minimum and maximum values of independent random variables and the probability that a given random variable is the minimum value.

Uploaded by Emily Scott
© Attribution Non-Commercial (BY-NC)

Probability Distribution Function.

If X is a continuous random variable, then its cumulative distribution function (abbreviated cdf), or probability distribution function, is defined for all real x by:

F_X(x) = P(X \le x)

Probability Density Function.

The probability density function (pdf) of a continuous random variable is:

f_X(x) = \lim_{dx \to 0} \frac{F_X(x + dx) - F_X(x)}{dx} = \lim_{dx \to 0} \frac{P(x < X \le x + dx)}{dx}

Independence.

Let X_1, ..., X_n be independent random variables. When the random variables are independent, then by definition:

f(x_1, x_2, ..., x_n) = f_{X_1}(x_1) f_{X_2}(x_2) \cdots f_{X_n}(x_n)

If X_1 and X_2 are independent random variables, then for any sets A and B:

P(X_1 \in A, X_2 \in B) = P(X_1 \in A) \, P(X_2 \in B)

This holds for more than two independent random variables as well, so for example:

P(X_1 \le x_1, X_2 \le x_2, ..., X_n \le x_n) = P(X_1 \le x_1) P(X_2 \le x_2) \cdots P(X_n \le x_n)
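The factorisation of joint probabilities for independent random variables can be illustrated numerically. This is a minimal sketch (not from the original text) that draws two independent uniform variables and checks that the empirical joint probability P(X_1 \le a, X_2 \le b) is close to the product of the marginal probabilities; the values a and b are arbitrary choices for the illustration.

```python
import random

def check_independence_factorisation(n=200_000, seed=0):
    """For two independent Uniform(0,1) draws X1, X2, compare the joint
    frequency of {X1 <= a and X2 <= b} with the product of the marginal
    frequencies of {X1 <= a} and {X2 <= b}."""
    rng = random.Random(seed)
    a, b = 0.3, 0.6            # arbitrary thresholds for the illustration
    joint = marg1 = marg2 = 0
    for _ in range(n):
        x1, x2 = rng.random(), rng.random()
        marg1 += x1 <= a
        marg2 += x2 <= b
        joint += (x1 <= a) and (x2 <= b)
    return joint / n, (marg1 / n) * (marg2 / n)

joint, product = check_independence_factorisation()
# joint and product should both be close to 0.3 * 0.6 = 0.18
```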

Negative exponential distribution

In probability theory and statistics, the exponential distributions (a.k.a. negative exponential distributions) are a class of continuous probability distributions. They describe the times between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. The probability density function of an exponential distribution is:

f(x; \lambda) = \lambda e^{-\lambda x} for x \ge 0, and f(x; \lambda) = 0 for x < 0

If a random variable X has this distribution, we say that X is exponentially distributed with parameter \lambda. The cumulative distribution function is:

F(x; \lambda) = P(X \le x) = 1 - e^{-\lambda x}

and:

P(X > x) = e^{-\lambda x}

The mean value is given by:

E(X) = 1/\lambda

Let X_1, ..., X_n be independent exponential random variables representing the times to failure of n subsystems. Subsystem i has rate \lambda_i, so its mean time to failure is E(X_i) = 1/\lambda_i, i = 1, ..., n.
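The cdf and mean formulas above can be checked by simulation. The sketch below (an addition, not part of the original note; the rate \lambda = 2 and evaluation point x = 0.5 are arbitrary) draws exponential samples with Python's standard library and compares the empirical mean with 1/\lambda and the empirical cdf at x with 1 - e^{-\lambda x}.

```python
import math
import random

def empirical_exponential_check(lam, x, n=200_000, seed=1):
    """Draw n exponential samples with rate lam; return the empirical
    mean (should approach 1/lam) and the empirical CDF at x (should
    approach 1 - exp(-lam * x))."""
    rng = random.Random(seed)
    samples = [rng.expovariate(lam) for _ in range(n)]
    mean = sum(samples) / n
    cdf_at_x = sum(s <= x for s in samples) / n
    return mean, cdf_at_x

mean, cdf = empirical_exponential_check(lam=2.0, x=0.5)
# mean is close to 1/2; cdf is close to 1 - e^{-1}
```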

a) Assuming the time unit is years, what is the probability that all n subsystems function for at least y years?

For a single subsystem i in isolation, the probability that it lasts more than y years is simply P(X_i > y). By the independence property defined above, the probability that all n subsystems function for at least y years is:

P(X_1 > y, X_2 > y, ..., X_n > y) = \prod_{k=1}^{n} P(X_k > y) = \prod_{k=1}^{n} [1 - F_{X_k}(y)] = \prod_{k=1}^{n} e^{-\lambda_k y} = e^{-(\lambda_1 + \lambda_2 + ... + \lambda_n) y}
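The survival probability in part a) can be verified by simulation. In this sketch (an addition; the rates and horizon y = 0.3 years are made-up example values), each trial draws one failure time per subsystem and counts the trials in which every subsystem outlives y; the frequency is compared with the closed form e^{-(\lambda_1 + ... + \lambda_n) y}.

```python
import math
import random

def p_all_survive(lams, y, n=100_000, seed=2):
    """Estimate P(X_1 > y, ..., X_n > y) for independent exponentials
    with rates lams by Monte Carlo simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if all(rng.expovariate(lam) > y for lam in lams):
            hits += 1
    return hits / n

lams = [0.5, 1.0, 1.5]               # hypothetical failure rates (per year)
est = p_all_survive(lams, y=0.3)
exact = math.exp(-sum(lams) * 0.3)   # closed form from part a)
```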

b) What is the distribution of the time that passes until the first subsystem fails?

The first subsystem to fail is the minimum of all the X_i's. If no failure has occurred by time y, then every subsystem must still be working at time y, so:

P(min(X_1, ..., X_n) > y) = P(X_1 > y, X_2 > y, ..., X_n > y) = \prod_{k=1}^{n} P(X_k > y) = \prod_{k=1}^{n} [1 - F_{X_k}(y)] = \prod_{k=1}^{n} e^{-\lambda_k y} = e^{-(\lambda_1 + ... + \lambda_n) y}

which is exactly the same expression as above. The time until the first failure is thus exponentially distributed with parameter \lambda_1 + \lambda_2 + ... + \lambda_n.

There are numerous applications where random variables are ordered from least to greatest, with a particular value in the ordering being of interest, such as the smallest, the largest, the median, etc. Define Y_1 = min(X_1, ..., X_n) and Y_n = max(X_1, ..., X_n).

c) Determine the distributions of Y_1 and Y_n.
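The part b) result, that the minimum of independent exponentials is again exponential with rate \lambda_1 + ... + \lambda_n, can be checked by simulation. This sketch (an addition, with made-up example rates) simulates the time to first failure and compares its sample mean with 1/(\lambda_1 + ... + \lambda_n).

```python
import random

def min_failure_times(lams, n=100_000, seed=3):
    """Simulate the time to first failure min(X_1, ..., X_n) for
    independent exponentials with rates lams."""
    rng = random.Random(seed)
    return [min(rng.expovariate(lam) for lam in lams) for _ in range(n)]

lams = [0.5, 1.0, 1.5]               # hypothetical rates; sum is 3.0
times = min_failure_times(lams)
mean = sum(times) / len(times)       # should be close to 1/3
```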

F_{Y_1}(y) = P(Y_1 \le y) = P(min\{X_1, X_2, ..., X_n\} \le y) = P(X_1 \le y or X_2 \le y or ... or X_n \le y)

For at least one of them to be less than or equal to y means that not all can be greater than y:

= 1 - P(X_1 > y, X_2 > y, ..., X_n > y) = 1 - \prod_{i=1}^{n} P(X_i > y) = 1 - \prod_{i=1}^{n} [1 - (1 - e^{-\lambda_i y})] = 1 - \prod_{i=1}^{n} e^{-\lambda_i y} = 1 - e^{-(\lambda_1 + ... + \lambda_n) y}

F_{Y_n}(y) = P(Y_n \le y) = P(max\{X_1, X_2, ..., X_n\} \le y)

If the maximum is smaller than or equal to y, then all of them must be smaller than or equal to y, so:

= P(X_1 \le y, X_2 \le y, ..., X_n \le y) = \prod_{i=1}^{n} P(X_i \le y) = \prod_{i=1}^{n} (1 - e^{-\lambda_i y})
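Unlike the minimum, the maximum Y_n is not exponential, but its cdf has the product form just derived. This sketch (an addition, with made-up rates and evaluation point y = 2) estimates P(max(X_1, ..., X_n) \le y) by simulation and compares it with \prod_i (1 - e^{-\lambda_i y}).

```python
import math
import random

def empirical_max_cdf(lams, y, n=100_000, seed=4):
    """Estimate P(max(X_1, ..., X_n) <= y) for independent exponentials
    with rates lams by Monte Carlo simulation."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.expovariate(lam) for lam in lams) <= y
        for _ in range(n)
    )
    return hits / n

lams = [0.5, 1.0, 1.5]            # hypothetical rates
y = 2.0
est = empirical_max_cdf(lams, y)
exact = math.prod(1 - math.exp(-lam * y) for lam in lams)
```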

d) Show that the probability that X_i is the smallest one among X_1, ..., X_n equals \lambda_i / (\lambda_1 + ... + \lambda_n), i = 1, ..., n.

P(X_i = min(X_1, ..., X_n)) is the probability that the next event (failure) comes from process i, i.e. that X_i takes some value x while every other X_j exceeds x. Conditioning on the value of X_i and using the continuous version of the law of total probability:

P(X_i = min(X_1, ..., X_n)) = \int_0^\infty f_{X_i}(x) \prod_{j \ne i} P(X_j > x) \, dx = \int_0^\infty \lambda_i e^{-\lambda_i x} \, e^{-\sum_{j \ne i} \lambda_j x} \, dx = \int_0^\infty \lambda_i e^{-(\lambda_1 + \lambda_2 + ... + \lambda_n) x} \, dx = \frac{\lambda_i}{\lambda_1 + \lambda_2 + ... + \lambda_n}
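The part d) formula can also be confirmed numerically. This sketch (an addition, with made-up rates) repeatedly draws all n failure times, records which subsystem fails first, and compares the observed frequency for subsystem i with \lambda_i / (\lambda_1 + ... + \lambda_n).

```python
import random

def p_index_is_min(lams, i, n=200_000, seed=5):
    """Estimate P(X_i = min(X_1, ..., X_n)) for independent exponentials
    with rates lams by Monte Carlo simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        draws = [rng.expovariate(lam) for lam in lams]
        if min(range(len(lams)), key=lambda k: draws[k]) == i:
            hits += 1
    return hits / n

lams = [0.5, 1.0, 1.5]         # hypothetical rates
est = p_index_is_min(lams, i=2)
exact = lams[2] / sum(lams)    # 1.5 / 3.0 = 0.5
```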
