Optimal Reliability Modeling: Principles and Applications, 1st Edition
Author(s): Way Kuo, Ming J. Zuo
ISBN(s): 9780471397618, 047139761X
Edition: 1
File Details: PDF, 27.15 MB
Year: 2002
Language: English
OPTIMAL RELIABILITY MODELING
Principles and Applications
WAY KUO
Texas A&M University
MING J. ZUO
The University of Alberta
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to
the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978)
750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in
preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher nor
author shall be liable for any loss of profit or any other commercial damages, including but not limited to
special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our
Customer Care Department within the United States at (800) 762-2974, outside the United States at
(317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic books.
CONTENTS

Preface
Acknowledgments
1 Introduction
  1.1 Needs for Reliability Modeling
  1.2 Optimal Design
2 Reliability Mathematics
  2.1 Probability and Distributions
    2.1.1 Events and Boolean Algebra
    2.1.2 Probabilities of Events
    2.1.3 Random Variables and Their Characteristics
    2.1.4 Multivariate Distributions
    2.1.5 Special Discrete Distributions
    2.1.6 Special Continuous Distributions
  2.2 Reliability Concepts
  2.3 Commonly Used Lifetime Distributions
  2.4 Stochastic Processes
    2.4.1 General Definitions
    2.4.2 Homogeneous Poisson Process
    2.4.3 Nonhomogeneous Poisson Process
    2.4.4 Renewal Process
    2.4.5 Discrete-Time Markov Chains
    2.4.6 Continuous-Time Markov Chains
  2.5 Complex System Reliability Assessment Using Fault Tree Analysis
3 Complexity Analysis
  3.1 Orders of Magnitude and Growth
  3.2 Evaluation of Summations
  3.3 Bounding Summations
  3.4 Recurrence Relations
    3.4.1 Expansion Method
    3.4.2 Guess-and-Prove Method
    3.4.3 Master Method
  3.5 Summary
References
Bibliography
Index
PREFACE
Recent progress in science and technology has made today’s engineering systems
more powerful than ever. The increasing level of sophistication in high-tech indus-
trial processes implies that reliability problems will not only continue to exist but
are likely to require ever more complex solutions. Furthermore, system failures are
having more significant effects on society as a whole than ever before. Consider, for
example, the impact of the failure or mismanagement of a power distribution system
in a major city, the malfunction of an air traffic control system at an international
airport, failure of a nanosystem, miscommunication in today’s Internet systems, or
the breakdown of a nuclear power plant. As a consequence, the importance of relia-
bility at all stages of modern engineering processes, including design, manufacture,
distribution, and operation, can hardly be overstated.
Today’s engineering systems are also complicated. For example, a space shuttle
consists of hundreds of thousands of components. These components functioning
together form a system. The reliable performance of the system depends on the reli-
able performance of its constituent components. In recent years, statistical and prob-
abilistic models have been developed for evaluating system reliability based on the
components’ reliability, the system design, and the assembly of the components. At
the same time, we should pay close attention to the usefulness of these models. Some
models and published books are too abstract to understand, and others are too basic
to address solutions for today’s systems.
System reliability models are the focus of this book. We have attempted to include
many of the system reliability models that have been reported in the literature with
emphasis on the more significant ones. The models extensively covered include par-
allel, series, standby, k-out-of-n, consecutive-k-out-of-n, multistate, and general sys-
tem models, including some maintainable systems. For each model, we discuss the
evaluation of exact system reliability, the development of bounds for system reliabil-
ity approximation, extensions to dual failure modes and/or multistates, and optimal
system design in terms of the arrangement of components. Both static and dynamic performance measures are discussed. The new topics and unique features on optimal system reliability modeling in this book include
2. Markov chain imbeddable structures, which is another effective tool for system
reliability analysis;
3. majorization, which is a powerful tool for the development of invariant optimal
designs for some system structures;
4. multistate system reliability theory, which is systematically introduced for the
first time in a text on engineering system reliability analysis; and
5. applications of the k-out-of-n and the consecutive-k-out-of-n system models
in remaining life estimation.
This book provides the reader with a complete picture of reliability evaluation and
optimal system design for many well-studied system structures in both the binary and
the multistate contexts. Based on the comparisons of computational complexities of
the algorithms presented in this book, users can determine which evaluation meth-
ods can be most efficiently applied to their own problems. The book can be used as a
handbook for practicing engineers. It includes the latest results and the most compre-
hensive algorithms for system reliability analysis available in the literature as well as
for the optimal design of the various system reliability models.
This book can serve as an advanced textbook for graduate students wishing to
study reliability for the purpose of engaging in research. We outline various mathe-
matical tools and approaches that have been used successfully in research on system
reliability evaluation and optimal design. In addition, a primer on complexity analy-
sis is included. With the help of complexity metrics, we discuss how to analyze and
determine the right algorithm for optimal system design. The background required
for comprehending this textbook includes only calculus, basic probability theory, and
some knowledge of computer programming. There are 263 cited references and an
additional 244 entries in the bibliography that are related to the material presented in
this book.
Way Kuo
Texas A&M University
Ming J. Zuo
The University of Alberta
1 INTRODUCTION
Reliability is the probability that a system will perform satisfactorily for at least a
given period of time when used under stated conditions. Therefore, the probability
that a system successfully performs as designed is called “system reliability,” or the
“probability of survival.” Often, unreliability refers to the probability of failure. Sys-
tem reliability is a measure of how well a system meets its design objective. A system
can be characterized as a group of stages or subsystems integrated to perform one or
more specified operational functions.
In describing the reliability of a given system, it is necessary to specify (1) the
failure process, (2) the system configuration that describes how the system is con-
nected and the rules of operation, and (3) the state in which the system is defined
to be failed. The failure process describes the probability law governing those fail-
ures. The system configuration, on the other hand, defines the manner in which the
system reliability function will behave. The third consideration in developing the re-
liability function for a nonmaintainable system is to define the conditions of system
failure.
Other measures of performance include failure rate, percentile of system life,
mean time to failure, mean time between failures, availability, mean time between
repairs, and maintainability. Depending on the nature and complexity of the system,
some measures are better used than others. For example, failure rate is widely used
for single-component analysis and reliability is better used for large-system analy-
sis. For a telecommunication system, mean time to failure is widely used, but for a
medical treatment, survivability (reliability) is used. In reliability optimization, the
maximization of percentile life of a system is another useful measure of interest to
the system designers, according to Prasad et al. [196]. For man–machine systems,
Abbas and Kuo [1] and Rupe and Kuo [207] report stochastic modeling measures
that go beyond reliability as it is traditionally defined.
Many of today’s systems, hardware and software, are large and complex and often
have special features and structures. To enhance the reliability of such systems, one
needs to assess their reliability and other related measures. Furthermore, the system
concept extends to service systems and supply chain systems for which reliability
and accuracy are an important goal to achieve. There is a need to present state-of-
the-art optimal modeling techniques for such assessments.
Recent progress in science and technology has made today’s engineering systems
more powerful than ever. The increasing level of sophistication in high-tech indus-
trial processes implies that reliability problems not only will continue to exist but also
are likely to require ever more complex solutions. Furthermore, reliability failures are
having more significant effects on society as a whole than ever before. Consider, for
example, the impact of the failure or mismanagement of a power distribution system
in a major city, the malfunction of an air traffic control system at an international
airport, failure of a nanosystem, miscommunication in today’s Internet systems, or
the breakdown of a nuclear power plant. The importance of reliability at all stages
of modern engineering processes, including design, manufacture, distribution, and
operation, can hardly be overstated.
Today’s engineering systems are also complicated. For example, a space shuttle
consists of hundreds of thousands of components. These components functioning to-
gether form a system. The reliable performance of the system depends on the reliable
performance of its constituent components. In recent years, statistical and probabilis-
tic models have been developed for evaluating system reliability based on component
reliability, the system design, and the assembly of the components. At the same time,
we should pay close attention to the usefulness of these models. Some models and
published books are too abstract to understand and others are too basic to address
solutions for today’s systems.
System reliability models are the focus of this book. We have attempted to in-
clude all of the system reliability models that have been reported in the literature
with emphasis on the significant ones. The models extensively covered include par-
allel, series, standby, k-out-of-n, consecutive-k-out-of-n, multistate, and general sys-
tem models, including some maintainable systems. For each model, we discuss the
evaluation of exact system reliability, development of bounds for system reliability
approximation, extensions to dual failure modes and/or multistates, and optimal sys-
tem design in terms of arrangement of components. Both static and dynamic perfor-
mance measures are discussed. Failure dependency among components within some
systems is also addressed. In addition, we believe that this is the first time that mul-
tistate system reliability models have been systematically introduced and discussed
in a book. The result is a state-of-the-art reference manuscript for students, system
designers, researchers, and teachers of reliability engineering.
Many modern systems do not simply work or fail. Instead, they may experience
degraded levels of performance before a complete failure is observed. Multistate
system models allow both the system and its components to have more than two
possible states. In addition to special multistate system reliability models, methods
for performance evaluation of general multistate systems are discussed.
The new topics and unique features on optimal system reliability modeling in this
book include
2 RELIABILITY MATHEMATICS

In this chapter, we introduce the mathematical concepts and techniques that are relevant to reliability analysis. We first cover the basic concepts of probability, the characteristics of random variables, and commonly used discrete and continuous distributions. The definitions of reliability and of commonly used lifetime distributions are then discussed. Stochastic processes are also introduced here. Finally, we explain how to assess the reliability of complex systems using fault tree analysis.
FIGURE 2.1 Venn diagram showing that events E1 and E2 are disjoint.
An event has occurred if the outcome of the experiment is included in the set of
outcomes of the event.
For a specific experiment, we may be interested in more than one event. For example, we may be interested in the event, denoted by E1, that the measured machine temperature is between 40 and 60°C and the event, denoted by E2, that it is above 100°C. To illustrate the relationship among the sample space S and events E1 and E2, we often use the so-called Venn diagram, as shown in Figure 2.1. We use a rectangle to represent the sample space and circles to represent events. All events must be subsets of the sample space. Based on our definitions of E1 and E2, these two events cannot occur simultaneously. In other words, for a measured temperature value, if it is in E1, then it cannot be in E2, and vice versa. Two events are defined to be mutually exclusive or disjoint if they cannot occur simultaneously or, equivalently, if they do not have any outcome in common. Figure 2.1 shows that events E1 and E2 are disjoint.
The union of two events A and B includes all outcomes that are either in A, or
in B, or in both. We use A ∪ B to indicate the union of events A and B. If we write
C = A ∪ B, then we say that event C occurs if and only if at least one of the two
events A and B occurs. In Figure 2.2, the shaded area represents the union of events
A and B. The intersection of two events A and B includes all outcomes that are in
both A and B. We use A ∩ B, or AB for simplicity, to indicate the intersection of A
and B. If we write C = A ∩ B or C = AB, then event C occurs if and only if both
events A and B occur. The shaded area in Figure 2.3 represents the intersection of
events A and B.
[Figure 2.2: Venn diagram with the union of events A and B shaded.]
[Figure 2.3: Venn diagram with the intersection of events A and B shaded.]
For a given event E, its complement, denoted by $\bar{E}$, indicates that event E does not occur. Here, $\bar{E}$ includes all outcomes that are in the sample space S but not in event E. For example, if E represents the event that the number of visitors to a theme park is greater than 4000, then $\bar{E}$ represents the event that the number of visitors to the theme park is no more than 4000. It is clear that any event and its complement together comprise the whole sample space. We usually use ∅ to indicate an empty set.
General operations on events, including unions, intersections, and complements,
are governed by a set of rules called the laws of Boolean algebra, which are summa-
rized below:
• Commutative law:
A ∪ B = B ∪ A, A ∩ B = B ∩ A.
• Associative law:
(A ∪ B) ∪ C = A ∪ (B ∪ C), (A ∩ B) ∩ C = A ∩ (B ∩ C).
• Distributive law:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
• Identity law:
A ∪ S = S, A ∩ S = A,
A ∪ ∅ = A, A ∩ ∅ = ∅.
• Complementation law:
$A \cup \bar{A} = S$, $A \cap \bar{A} = \emptyset$, $\bar{\bar{A}} = A$.
• Idempotent law:
A ∪ A = A, A ∩ A = A.
• De Morgan's law:
$\overline{A \cup B} = \bar{A} \cap \bar{B}$, $\overline{A \cap B} = \bar{A} \cup \bar{B}$.
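These identities can be checked mechanically on any small finite sample space. The following sketch is our addition, not from the text; it uses Python's built-in set type with an arbitrary sample space and arbitrary events A, B, and C:

```python
# Quick numerical check of the Boolean-algebra laws on a small sample space.
S = set(range(10))                  # sample space (arbitrary choice)
A, B, C = {1, 2, 3}, {3, 4, 5}, {5, 6}

def comp(E):
    """Complement of event E with respect to the sample space S."""
    return S - E

# Distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)
# De Morgan's laws: the complement of a union/intersection flips the operation.
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)
# Complementation law: A ∪ Ā = S and A ∩ Ā = ∅.
assert A | comp(A) == S and A & comp(A) == set()
print("All Boolean-algebra identities hold on this example.")
```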
• Equally Likely Approach. This approach applies when the total number of
possible outcomes of an experiment is finite and each possible outcome has
an equal chance of being observed. If an event of interest includes n possible
outcomes and the sample space has N possible outcomes, then the probabil-
ity for this event to occur is given by the ratio n/N . This approach finds wide
application in games of chance and making selections based on the generation
of random variables. For example, names are selected randomly in a poll and
numbers are selected randomly in a lottery. This approach cannot be used when
the possible outcomes of an experiment are not equally likely or the number
of possible outcomes is infinite. For example, is it going to rain tomorrow?
What will be tomorrow’s highest temperature reading? These questions can-
not be answered with this approach. This approach has limited applications in
engineering reliability analysis.
• Frequency Approach. According to this approach, the probability of an event
is the proportion of occurrences of the event under similar conditions in the
long run. This approach is the most widely used one. If a manufacturer claims
that its product has a 0.90 probability of functioning properly for one year, this
means that of the new units of this product that are sold for use under specified
conditions, 90% of them will work properly for a full year, while the other
10% will experience some sort of problem within a year. If the weather office
predicts that there is a 30% chance of rain tomorrow, this means that historically
under similar weather conditions 30% of the time it has rained. This approach
is very useful in obtaining reliability measures in engineering as multiple units
of the same product may be tested under the same working conditions. The
proportion of surviving units is used as a measure of the probability of survival
for each unit of this product.
• Subjective Approach. According to this approach, the probability of an event
represents the strength of one’s belief with regard to the uncertainties involved
in the event. Such probabilities are simply one’s “educated” guesses based on
his or her personal experience or expertise. It is used when there are no or few
historical records of such events and setting experiments to observe such events
is too expensive or impossible. This approach is gaining in favor due to the
high speed of technology advancement in today’s world. For example, what is
the probability of success in the development of a new medical procedure using
DNA technology?
One or a combination of the above approaches may be used to assign the prob-
abilities of some basic events of a statistical experiment. Probabilities are values of
a set function. This set function assigns real numbers to various subsets of the sam-
ple space S of a statistical experiment. Once such probabilities are obtained, we can
follow some mathematical axioms to derive the probability measures of events that
can be expressed as a function of those basic events. The following axioms are often
used to restrict the ways in which we assign probabilities to events:
1. The probability of any event is a nonnegative real number, that is, Pr(A) ≥ 0
for any subset A of S.
2. The probability of the sample space is 1, that is, Pr(S) = 1.
3. If A1, A2, A3, . . . is a finite or infinite sequence of disjoint events within S, then Pr(A1 ∪ A2 ∪ A3 ∪ · · ·) = Pr(A1) + Pr(A2) + Pr(A3) + · · · .
Based on these axioms, we have the following equations for the probability eval-
uation of events:
Pr(∅) = 0, (2.1)
Pr( A) = 1 − Pr(A), (2.2)
Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B). (2.3)
If A and B are two events and Pr(A) ≠ 0, then the conditional probability of B given A is defined as

$$\Pr(B \mid A) = \frac{\Pr(A \cap B)}{\Pr(A)}. \qquad (2.4)$$
From equation (2.4), the probability of the intersection of two events is the following:

$$\Pr(A \cap B) = \Pr(A) \Pr(B \mid A). \qquad (2.5)$$
Two events are defined to be independent if whether one event has occurred or not does not affect whether the other event will occur or not. If events A and B are independent, we have

$$\Pr(A \cap B) = \Pr(A) \Pr(B). \qquad (2.6)$$

Note that if two events A and B are independent, then the two events A and $\bar{B}$ are also independent.
For a group of n events A1 , A2 , . . . , An to be independent, we require that the
probability of the intersection of any 2, 3, . . . n of these events equal the product
of their respective probabilities. These events may be pairwise independent with-
out being independent. If we have three events A, B, and C, it is possible to have
Pr(A ∩ B ∩ C) = Pr(A) Pr(B) Pr(C) while these three events are not pairwise
independent.
Example 2.1 A manufacturer orders 30, 45, and 25% of the total demand for a
certain part from suppliers A, B, and C, respectively. The defect rates of the units
provided by suppliers A, B, and C are 2, 3, and 4%, respectively. Assume that the
received units of this part are well mixed. What is the probability that a randomly
selected unit is defective and supplied by supplier A? What is the probability that a
randomly selected unit is defective? If a randomly selected unit is defective, what is
the probability that it is provided by supplier A?
In this example, we assume that each unit has an equal chance of being selected.
Define the following events: A, the selected unit is supplied by supplier A; B, the selected unit is supplied by supplier B; C, the selected unit is supplied by supplier C; and D, the selected unit is defective.
Then, we have Pr(A) = 0.30, Pr(B) = 0.45, Pr(C) = 0.25, Pr(D | A) = 0.02, Pr(D | B) = 0.03, and Pr(D | C) = 0.04. We also know that events A, B, and C are mutually exclusive and A ∪ B ∪ C = S. The probability that a selected unit is defective and from supplier A can be calculated as

Pr(A ∩ D) = Pr(A) Pr(D | A) = 0.30 × 0.02 = 0.006.

Similarly, Pr(B ∩ D) = 0.0135 and Pr(C ∩ D) = 0.01. Because A, B, and C together cover the whole sample space, the probability that a randomly selected unit is defective is

Pr(D) = 0.006 + 0.0135 + 0.01 = 0.0295.

Finally, given that a selected unit is defective, the probability that it was provided by supplier A is Pr(A | D) = Pr(A ∩ D)/Pr(D) = 0.006/0.0295 ≈ 0.2034.
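The arithmetic of this example is easy to reproduce programmatically; the following sketch (ours, not from the text) applies the law of total probability and Bayes' rule to the numbers given above:

```python
# Example 2.1 redone numerically: supplier shares and conditional defect
# rates are taken directly from the example statement.
share  = {"A": 0.30, "B": 0.45, "C": 0.25}
defect = {"A": 0.02, "B": 0.03, "C": 0.04}

p_def_and_A = share["A"] * defect["A"]                 # Pr(A ∩ D)
p_def = sum(share[s] * defect[s] for s in share)       # law of total probability
p_A_given_def = p_def_and_A / p_def                    # Bayes' rule

print(p_def_and_A)               # 0.006
print(p_def)                     # 0.0295
print(round(p_A_given_def, 4))   # 0.2034
```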
In Example 2.1, to find the defective rate of all received units, we divided them
into three mutually exclusive groups, namely, A, B, and C. These three groups repre-
sent the exclusive suppliers for the manufacturer, that is, S = A ∪ B ∪ C. The defect
rate of a unit from each of these groups is known. Thus, we can use conditional prob-
ability to find the overall defective rate of the received units. This approach can be
generalized to the case where there are k mutually exclusive groups, as stated in the
following theorem.
Let X = {x_1, x_2, . . . , x_n}; then the following equation can be used to calculate f(x_i) for i = 1, 2, . . . , n from F(x):

$$f(x_i) = \begin{cases} F(x_1) & \text{if } i = 1, \\ F(x_i) - F(x_{i-1}) & \text{if } i = 2, 3, \ldots, n. \end{cases} \qquad (2.11)$$
As an example, consider the function

$$f(k) = p(1 - p)^{k-1}, \qquad k = 1, 2, \ldots,$$

where 0 < p < 1. Verify that it qualifies to be the pmf of a discrete random variable X with sample space S = {1, 2, . . . }. Find the CDF of X. What is the probability for X ≥ 10?

First of all, we note that f(k) ≥ 0 for each possible k ∈ S because 0 < p < 1. We also need to verify that the pmf sums to 1:

$$\sum_{k=1}^{\infty} f(k) = \sum_{k=1}^{\infty} p(1 - p)^{k-1} = p \times \frac{1}{p} = 1.$$

The CDF at the points of the sample space is

$$F(k) = \sum_{i=1}^{k} f(i) = \sum_{i=1}^{k} p(1 - p)^{i-1} = 1 - (1 - p)^k, \qquad k = 1, 2, \ldots;$$

to find the CDF defined over (−∞, ∞), we use the following function when x is not necessarily a positive integer:

$$F(x) = \begin{cases} 0 & \text{if } x < 1, \\ F(k) & \text{if } k \le x < k + 1, \text{ where } k \text{ is a positive integer.} \end{cases}$$

Finally, Pr(X ≥ 10) = 1 − Pr(X ≤ 9) = 1 − F(9) = (1 − p)^9.
The pmf used in this example is that of the geometric distribution, which will be further discussed in Section 2.1.5.
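As a numerical sanity check (our addition; p = 0.2 is an arbitrary choice), the pmf can be summed directly and compared against the closed-form CDF:

```python
p = 0.2   # arbitrary success probability for illustration

f = lambda k: p * (1 - p) ** (k - 1)    # geometric pmf, k = 1, 2, ...
F = lambda k: 1 - (1 - p) ** k          # closed-form CDF derived above

# The pmf sums to (essentially) 1 over a long truncation of the support.
assert abs(sum(f(k) for k in range(1, 2000)) - 1) < 1e-12

# Pr(X >= 10) = 1 - F(9) = (1 - p)^9, by direct summation and in closed form.
tail = sum(f(k) for k in range(10, 2000))
print(tail, 1 - F(9), (1 - p) ** 9)     # all ≈ 0.1342
```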
A function f(x) defined over (−∞, ∞) is called the probability density function (pdf) of a continuous random variable X if

$$\Pr(a < X \le b) = \int_a^b f(x)\,dx \qquad (2.12)$$

for any real constants a and b such that a ≤ b. In words, the probability for the continuous random variable to be in interval (a, b] is measured by the area under the curve of f(x) within this interval. Based on this definition, the probability for a continuous random variable to take any fixed value is equal to zero. As a result,

Pr(a ≤ X ≤ b) = Pr(a < X < b) = Pr(a ≤ X < b) = Pr(a < X ≤ b). (2.13)

A function f(x) can serve as a pdf of a continuous random variable if and only if it satisfies the following conditions:

1. f(x) ≥ 0 for all x and
2. $\int_{-\infty}^{\infty} f(x)\,dx = 1$.

Given the CDF of a random variable, we can use the following equations to find the probability that the continuous random variable takes values in interval [a, b] with a ≤ b and the pdf of the random variable:

Pr(a ≤ X ≤ b) = F(b) − F(a), f(x) = dF(x)/dx.
As an example, consider the pdf

f(x) = λe^(−λx), x ≥ 0,

where λ > 0. Its CDF is F(x) = 1 − e^(−λx) for x ≥ 0. The pdf used in this example describes the exponential distribution, which is the most commonly used distribution in reliability.
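A crude numerical integration (our sketch, with an arbitrarily chosen λ) confirms that this pdf integrates to 1 and that the area under f over [0, b] reproduces F(b):

```python
import math

lam = 1.5   # arbitrary rate parameter for illustration

f = lambda x: lam * math.exp(-lam * x)   # exponential pdf
F = lambda x: 1 - math.exp(-lam * x)     # its CDF

def integral(b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [0, b]."""
    h = b / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

print(integral(2.0), F(2.0))   # both ≈ 0.9502
print(integral(50.0))          # ≈ 1, the total probability
```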
Whether a random variable is discrete or continuous, its CDF satisfies the follow-
ing conditions:
• F(−∞) = 0,
• F(∞) = 1, and
• F(a) ≤ F(b) for any real numbers a and b such that a ≤ b.
Consider two independent continuous random variables X with CDF G(x) and Y
with CDF H (y). Let Z be the sum of these two random variables, that is, Z = X +Y .
The CDF of Z, denoted by U(z), can be expressed as

$$U(z) = \Pr(Z \le z) = \Pr(X + Y \le z) = \int_{-\infty}^{\infty} \Pr(X + Y \le z \mid Y = y)\,dH(y) = \int_{-\infty}^{\infty} \Pr(X \le z - y)\,dH(y) = \int_{-\infty}^{\infty} G(z - y)\,dH(y) = (G * H)(z),$$

where

$$(G * H)(z) \equiv \int_{-\infty}^{\infty} G(z - y)\,dH(y) \qquad (2.17)$$
is called the convolution of functions G and H . In words, the CDF of the sum of
two independent random variables is equal to the convolution of the CDFs of these
two individual random variables. This result can be extended to the sum of n ≥ 2
independent random variables; namely, the CDF of the sum of n independent ran-
dom variables is equal to the convolution of the CDFs of these n individual random
variables. If these individual random variables are independent and identically dis-
tributed (i.i.d) with CDF F(x), we use Fn to indicate the n-fold convolution of F
with itself. The CDF of the sum of n i.i.d. random variables with CDF F(x) is the
n-fold convolution of F with itself. Generally, the following recursive formula can
be used for evaluation of convolutions of a function with itself:
Fn = F ∗ Fn−1 , n ≥ 2, (2.18)
where F1 = F.
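Convolutions rarely have closed forms, but equation (2.17) is easy to approximate on a grid. In the sketch below (ours, not from the text), X and Y are both exponential with rate λ, so dH(y) = h(y) dy with h the exponential pdf, and the exact answer, the Erlang(2, λ) CDF, is available for comparison:

```python
import math

lam = 1.0   # common rate of both exponentials (arbitrary illustration)

G = lambda x: 1 - math.exp(-lam * x) if x > 0 else 0.0   # CDF of X
h = lambda y: lam * math.exp(-lam * y)                   # pdf of Y, dH(y) = h(y) dy

def convolve(z, n=20_000, upper=60.0):
    """Midpoint approximation of (G * H)(z) = ∫ G(z - y) h(y) dy over [0, upper]."""
    dy = upper / n
    return sum(G(z - (i + 0.5) * dy) * h((i + 0.5) * dy) for i in range(n)) * dy

# Closed form: the sum of two i.i.d. Exp(λ) variables is Erlang(2, λ).
erlang2 = lambda z: 1 - math.exp(-lam * z) * (1 + lam * z)

print(convolve(3.0), erlang2(3.0))   # both ≈ 0.8009
```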
Median The median of a random variable X is defined to be the value of x such that
F(x) = 0.5. The probability for X to take a value less than or equal to its median and
the probability for X to take a value greater than or equal to its median are both equal
to 50%. The 100pth percentile, denoted by x_p, of a random variable X is defined to be the value of x such that F(x_p) = p. For example, the 10th percentile of X is denoted by x_0.1 and the 90th percentile of X is denoted by x_0.9. Thus, x_0.5 represents the median of the random variable X.
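When the CDF is invertible, x_p is simply F^(−1)(p). For instance, for the exponential CDF F(x) = 1 − e^(−λx) this gives x_p = −ln(1 − p)/λ; a small sketch of ours, with an arbitrary λ:

```python
import math

lam = 2.0                                 # arbitrary rate parameter
x_p = lambda p: -math.log(1 - p) / lam    # inverse of F(x) = 1 - exp(-λx)

median = x_p(0.5)
print(median, math.log(2) / lam)          # median = ln(2)/λ ≈ 0.3466
print(x_p(0.1), x_p(0.9))                 # 10th and 90th percentiles
```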
Expected Value The expected value E(X), or µ, of a random variable X with pdf f(x) is

$$\mu \equiv E(X) = \begin{cases} \sum_x x f(x) & \text{if } X \text{ is discrete,} \\ \int_{-\infty}^{\infty} x f(x)\,dx & \text{if } X \text{ is continuous.} \end{cases} \qquad (2.19)$$
The expected value E(X ) is also referred to as the mean, the average value, or the
first moment about the origin of the random variable X .
When g(X) in equation (2.20) takes the form of X^r, where r is a nonnegative integer, E(X^r) is called the rth moment about the origin or the ordinary moment of random variable X, often denoted by µ′_r. When r = 1, we have the first moment about the origin, E(X), which is exactly the expected value of X. Thus, we have µ′_1 ≡ µ. Note that µ′_0 = 1.

When g(X) in equation (2.20) takes the form of (X − µ)^r, where r is a nonnegative integer and µ is the expected value of X, E((X − µ)^r) is called the rth moment about the mean or the central moment of random variable X, often denoted by µ_r. Note that µ_0 = 1 and µ_1 = 0. The second moment about the mean, µ_2, is of special importance in statistics because it indicates the spread of the distribution of the random variable. As a result, it is called the variance of the random variable and denoted by σ² or Var(X). The following equation indicates the definition and the calculation of Var(X):

$$\mathrm{Var}(X) = E((X - \mu)^2) = E(X^2) - \mu^2. \qquad (2.21)$$
Theorem 2.2 (Chebyshev Theorem) For any given positive value k, the probability for a random variable to take on a value within k standard deviations of its mean is at least 1 − 1/k². In other words, if µ and σ are the mean and the standard deviation of the random variable X, the following inequality is satisfied:

$$\Pr(|X - \mu| < k\sigma) \ge 1 - \frac{1}{k^2}.$$
This theorem gives the lower bound on the probability that a random variable will take on a value within a certain number of standard deviations of its mean. This lower bound does not depend on the actual distribution of the random variable. By choosing k to be 2 and 3, respectively, we can see that the probabilities are at least 3/4 and 8/9 that a random variable X will take on a value within two and three standard deviations of its mean, respectively. To find the exact value of such probabilities, we need to know the exact distribution of the random variable.
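To see how conservative the bound can be, compare it with the exact probabilities for one specific distribution, say the standard normal; the sketch below is our illustration, not from the text:

```python
import math

# Standard normal CDF via the error function.
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

for k in (2, 3):
    exact = Phi(k) - Phi(-k)   # Pr(|X - µ| < kσ) for a normal variable
    bound = 1 - 1 / k**2       # Chebyshev lower bound, valid for any distribution
    print(k, round(exact, 4), ">=", bound)
# k = 2: 0.9545 >= 0.75;  k = 3: 0.9973 >= 0.8889
```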
2.1.4 Multivariate Distributions

A bivariate function f(x, y) can serve as the joint pmf of two discrete random variables X and Y if and only if it satisfies the following conditions:

1. f(x, y) ≥ 0 for each pair (x, y) within the range of the random variables and
2. $\sum_x \sum_y f(x, y) = 1$, where the summations cover all possible values of x and y within the range of the random variables.
The joint CDF of discrete random variables X and Y, denoted by F(x, y), over all possible pairs of real values is defined as

$$F(x, y) = \Pr(X \le x, Y \le y) = \sum_{s \le x} \sum_{t \le y} f(s, t), \qquad -\infty < x < \infty,\ -\infty < y < \infty, \qquad (2.22)$$
where f (s, t) is the value of the joint pdf of X and Y at point (s, t).
If X and Y are continuous random variables, f(x, y) defined over the two-dimensional real space is the joint pdf of random variables X and Y if and only if

$$\Pr((X, Y) \in A) = \iint_A f(x, y)\,dx\,dy \qquad (2.23)$$

for any region A in the two-dimensional real space. A bivariate function f(x, y) can serve as a joint pdf of two continuous random variables X and Y if and only if it satisfies

1. f(x, y) ≥ 0 for all (x, y) and
2. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,dx\,dy = 1$.
The joint CDF of continuous random variables X and Y, denoted by F(x, y), over all possible pairs of real values is defined as

$$F(x, y) = \Pr(X \le x, Y \le y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f(s, t)\,dt\,ds, \qquad -\infty < x < \infty,\ -\infty < y < \infty, \qquad (2.24)$$
where f (s, t) is the value of the joint pdf of X and Y at point (s, t).
For continuous random variables, we have

$$f(x, y) = \frac{\partial^2}{\partial x\,\partial y} F(x, y). \qquad (2.25)$$
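Equations (2.24) and (2.25) can be verified symbolically for a concrete joint density, for example the product of two exponential pdfs; the sketch below is ours and uses sympy:

```python
import sympy as sp

x, y, s, t = sp.symbols("x y s t", positive=True)
lam1, lam2 = sp.symbols("lambda1 lambda2", positive=True)

# Joint pdf of two independent exponential variables (our example density).
f = lam1 * sp.exp(-lam1 * s) * lam2 * sp.exp(-lam2 * t)

# Joint CDF by equation (2.24): integrate the pdf over (0, x] x (0, y].
F = sp.integrate(f, (t, 0, y), (s, 0, x))

# Differentiating the CDF recovers the pdf, as in equation (2.25).
recovered = sp.diff(F, x, y)
print(sp.simplify(recovered - f.subs({s: x, t: y})))   # prints 0
```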
The bivariate CDF of both discrete and continuous random variables satisfies the following conditions:
1. F(−∞, −∞) = 0,
2. F(∞, ∞) = 1, and
3. if a < b and c < d, then F(a, c) ≤ F(b, d).
Even when there is more than one random variable of interest in an experiment, we may want to know the distribution of one of the random variables irrespective of what values the other random variables may take. In this case, we are interested in the marginal distribution of a single random variable. If f(x, y) is the joint pdf of X and Y, the marginal pdf of X, denoted by g(x), is obtained by summing (in the discrete case) or integrating (in the continuous case) the joint pdf over all values of y:

$$g(x) = \sum_y f(x, y) \quad \text{or} \quad g(x) = \int_{-\infty}^{\infty} f(x, y)\,dy,$$

for each x in the range of X. The marginal pdf of Y, denoted by h(y), is defined similarly:

$$h(y) = \sum_x f(x, y) \quad \text{or} \quad h(y) = \int_{-\infty}^{\infty} f(x, y)\,dx,$$

for each y in the range of Y. Once the marginal pdf of a random variable is obtained, we can use it to find the CDF of the random variable ignoring all other variables.
Since the random variables of an experiment may depend on each other, we are
sometimes interested in the conditional distribution of one random variable given
that the other random variables have taken certain values or certain ranges of values.
If f (x, y) is the joint pdf of (discrete or continuous) random variables X and Y , g(x)
is the marginal pdf of X , and h(y) is the marginal pdf of Y , the function given by
$$g(x \mid y) = \frac{f(x, y)}{h(y)}, \qquad h(y) \ne 0, \qquad (2.28)$$
for each x within the range of X is called the conditional pdf of X given Y = y.
Correspondingly, the function given by
$$h(y \mid x) = \frac{f(x, y)}{g(x)}, \qquad g(x) \ne 0, \qquad (2.29)$$
for each y within the range of Y is called the conditional pdf of Y given X = x.
For two random variables X and Y with joint pdf f (x, y), the expected value of
a function of these two random variables, g(X, Y ), is given by
$$E(g(X, Y)) = \begin{cases} \sum_x \sum_y g(x, y) f(x, y) & \text{if } X \text{ and } Y \text{ are discrete,} \\ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y)\,dx\,dy & \text{if } X \text{ and } Y \text{ are continuous.} \end{cases} \qquad (2.30)$$
Let µ_X and µ_Y indicate the expected values of random variables X and Y, respectively. The covariance of X and Y, denoted by Cov(X, Y) or σ_XY, is given by

$$\mathrm{Cov}(X, Y) = E((X - \mu_X)(Y - \mu_Y)) = E(XY) - \mu_X \mu_Y. \qquad (2.31)$$

The correlation coefficient of X and Y is defined as

$$\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}. \qquad (2.32)$$
The correlation coefficient takes values between −1 and 1. A positive value
indicates that X and Y are positively correlated, and a negative value indicates that X
and Y are negatively correlated. A positive correlation between two random variables
indicates that there is a high probability that large values of one variable will go
with large values of the other. A negative correlation indicates that there is a high
probability that high values of one variable will go with low values of the other.
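For a small discrete joint pmf, the covariance and correlation coefficient can be computed by direct enumeration. The sketch below (ours) uses a made-up joint table purely for illustration:

```python
import math

# A small made-up joint pmf over x in {0, 1, 2} and y in {0, 1}; entries sum to 1.
pmf = {(0, 0): 0.10, (0, 1): 0.05,
       (1, 0): 0.20, (1, 1): 0.25,
       (2, 0): 0.05, (2, 1): 0.35}

# Expectation of an arbitrary function g(X, Y), as in equation (2.30).
E = lambda g: sum(g(x, y) * p for (x, y), p in pmf.items())

mx, my = E(lambda x, y: x), E(lambda x, y: y)
cov = E(lambda x, y: (x - mx) * (y - my))     # covariance, equation (2.31)
var_x = E(lambda x, y: (x - mx) ** 2)
var_y = E(lambda x, y: (y - my) ** 2)
rho = cov / math.sqrt(var_x * var_y)          # correlation, equation (2.32)
print(round(cov, 4), round(rho, 4))           # both positive for this table
```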
Two random variables X and Y are said to be independent if and only if their joint
pdf is equal to the product of the marginal pdf’s of the two random variables. We can
also say that two random variables are independent if and only if the conditional
pdf of each random variable is equal to its own marginal pdf irrespective of what
value the other random variable takes. If X and Y are independent, we also have

E(XY) = E(X)E(Y) and Cov(X, Y) = 0.
Multivariate Distribution The definitions provided above with two random vari-
ables can be generalized to the multivariate case. The joint pdf and the joint CDF
of n discrete random variables X 1 , X 2 , . . . , X n defined over their sample spaces are
given, respectively, by

$$f(x_1, x_2, \ldots, x_n) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n),$$
$$F(x_1, x_2, \ldots, x_n) = \sum_{s_1 \le x_1} \sum_{s_2 \le x_2} \cdots \sum_{s_n \le x_n} f(s_1, s_2, \ldots, s_n).$$
When dealing with more than two random variables, we may also be interested
in the joint marginal distribution of several of the random variables. For ex-
ample, suppose that f (x 1 , x2 , . . . , xn ) is the joint pdf of discrete random variables
X 1 , X 2 , . . . , X n (n > 3). The joint marginal pdf of random variables X 1 , X 2 , X 3 is
given by
$$m(x_1, x_2, x_3) = \sum_{x_4} \sum_{x_5} \cdots \sum_{x_n} f(x_1, x_2, \ldots, x_n)$$
for all values of x1 , x2 , and x 3 within the ranges of X 1 , X 2 , and X 3 , respectively. The
joint marginal CDF of several random variables can be defined in a similar manner.
The joint conditional distribution of several random variables can also be defined.
For example, suppose that f (x1 , x2 , x3 , x4 ) is the joint pdf of discrete random vari-
ables X 1 , X 2 , X 3 , X 4 and m(x1 , x2 , x3 ) is the joint marginal pdf of random variables
X 1 , X 2 , X 3 . Then, the joint conditional pdf of X 4 , given that X 1 = x1 , X 2 = x 2 ,
and X 3 = x3 , is given by
$$q(x_4 \mid x_1, x_2, x_3) = \frac{f(x_1, x_2, x_3, x_4)}{m(x_1, x_2, x_3)}, \qquad m(x_1, x_2, x_3) \ne 0.$$
Note that while equations (2.35) and (2.36) are necessary conditions for the random
variables to be independent, random variables satisfying these conditions are not
necessarily independent.
2.1.5 Special Discrete Distributions

Discrete Uniform Distribution Consider a random variable X that can take k distinct
possible values. If each value has an equal chance of being taken by X , we say that
X has a discrete uniform distribution. The pmf of X can be written as
$$\Pr(X = x) = \frac{1}{k} \quad \text{for } x = x_1, x_2, \ldots, x_k, \qquad (2.37)$$

where x_i ≠ x_j when i ≠ j. The mean and variance of such a random variable can be expressed as

$$\mu = \frac{1}{k} \sum_{i=1}^{k} x_i, \qquad (2.38)$$

$$\sigma^2 = \frac{1}{k} \sum_{i=1}^{k} (x_i - \mu)^2. \qquad (2.39)$$
Bernoulli Distribution Consider a statistical experiment that has only two possible
outcomes, which we will call “success” and “failure.” The probability of observing
success and failure in the experiment is denoted by p and 1 − p, respectively. The
random variable X is used to count the number of successes in the experiment. Apparently, X can take only one of two possible values, 0 or 1. The pmf of X is given by

$$f(x) = p^x (1 - p)^{1-x}, \qquad x = 0, 1. \qquad (2.43)$$

A random variable that has such a pmf is said to follow the Bernoulli distribution. The corresponding statistical experiment just described is referred to as a Bernoulli trial. The mean and variance of a Bernoulli random variable X are

$$\mu = p, \qquad (2.44)$$
$$\sigma^2 = p(1 - p). \qquad (2.45)$$
Now consider a sequence of n independent Bernoulli trials, each with success probability p, and let X_i count the success in the ith trial. The total number of successes is

X = X_1 + X_2 + · · · + X_n.
Based on our assumptions, the X_i's are i.i.d. with the pmf given in equation (2.43). If we complete n trials, we would get a specific sequence of n numbers consisting of 0's and 1's. The probability of getting a specific sequence with exactly x 1's is equal to p^x (1 − p)^(n−x). The total number of sequences of 0's and 1's such that there are exactly x 1's is equal to $\binom{n}{x}$. Thus, the probability of observing x 1's in whatever sequence from n Bernoulli trials is equal to $\binom{n}{x} p^x (1-p)^{n-x}$. As a result, we can write the pmf of X, which defines the binomial distribution, as

$$f(x) = \binom{n}{x} p^x (1 - p)^{n-x}, \qquad x = 0, 1, \ldots, n. \qquad (2.46)$$

The mean and variance of the binomial random variable X are

$$\mu = np, \qquad (2.47)$$
$$\sigma^2 = np(1 - p). \qquad (2.48)$$
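The sum-of-Bernoullis construction also suggests a quick simulation check of equations (2.46) through (2.48); the following sketch is ours, with arbitrary n and p:

```python
import random
from math import comb

n, p, trials = 10, 0.3, 200_000
random.seed(1)   # fixed seed so the run is reproducible

# X as a sum of n i.i.d. Bernoulli(p) indicators.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, n * p)              # ≈ 3.0   (µ = np)
print(var, n * p * (1 - p))     # ≈ 2.1   (σ² = np(1 - p))

# Empirical frequency of X = 3 vs the pmf value C(n, 3) p^3 (1 - p)^(n - 3).
freq3 = samples.count(3) / trials
print(freq3, comb(n, 3) * p**3 * (1 - p) ** (n - 3))   # both ≈ 0.2668
```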
Example 2.4 One hundred fluorescent light tubes are used for lighting in a build-
ing. They are inspected every 30 days and failed tubes are replaced at inspection
times. Thus, after each inspection, all 100 tubes are working. The failures of the
light tubes are statistically independent. The probability for a working tube to last 30
days is constant at 0.80. What is the probability that at least 10 tubes will have failed at the
next inspection time? What is the average number of failed tubes at each inspection
time? What is the interval such that the probability that the number of failed tubes at
each inspection time is within this interval is at least 0.75?
In this example, we use X to indicate the number of failed tubes at the time of
the next inspection given that all 100 tubes are working properly at the end of the
previous inspection. Then, X follows the binomial distribution with n = 100 and
p = 1 − 0.8 = 0.2:
$$\Pr(X \ge 10) = 1 - \Pr(X \le 9) = 1 - \sum_{i=0}^{9} \binom{100}{i} (0.2)^i (0.8)^{100-i} \approx 1 - 0.0023 = 0.9977.$$

The average number of failed tubes at each inspection is µ = np = 100 × 0.2 = 20, and the standard deviation is σ = √(np(1 − p)) = √16 = 4. By the Chebyshev theorem with k = 2, Pr(|X − 20| < 2 × 4) ≥ 1 − 1/2² = 0.75, so the required interval is (12, 28).
This means that there is at least a 75% chance that the number of failed tubes observed at each inspection will be somewhere between 12 and 28. This range can help the inspector bring enough light tubes to replace the failed ones.
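The numbers in Example 2.4 can be reproduced exactly with a few lines; this check is our addition:

```python
from math import comb, sqrt

n, p = 100, 0.2   # 100 tubes, each fails within 30 days with probability 0.2

# Pr(X >= 10) = 1 - sum of the binomial pmf for i = 0..9.
p_le_9 = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(10))
print(1 - p_le_9)                 # ≈ 0.9977

mu = n * p                        # average number of failed tubes = 20
sigma = sqrt(n * p * (1 - p))     # standard deviation = 4
# Chebyshev with k = 2: Pr(|X - 20| < 8) >= 1 - 1/4 = 0.75 -> interval (12, 28).
print(mu, sigma, (mu - 2 * sigma, mu + 2 * sigma))
```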