Optimal Reliability Modeling: Principles and Applications, 1st Edition, by Way Kuo

The document provides information about the book 'Optimal Reliability Modeling: Principles and Applications' by Way Kuo, detailing its content and structure, with a brief overview of the topics covered, such as reliability mathematics and system design. The book is published by John Wiley & Sons, Inc. and is available in PDF format.


Optimal Reliability Modeling: Principles and Applications, 1st Edition
Author(s): Way Kuo, Ming J. Zuo
ISBN(s): 9780471397618, 047139761X
Edition: 1
File Details: PDF, 27.15 MB
Year: 2002
Language: English
OPTIMAL RELIABILITY MODELING
Principles and Applications

WAY KUO
Texas A&M University

MING J. ZUO
The University of Alberta

JOHN WILEY & SONS, INC.


This book is printed on acid-free paper. ∞
Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey


Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to
the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978)
750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in
preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher nor
author shall be liable for any loss of profit or any other commercial damages, including but not limited to
special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our
Customer Care Department within the United States at (800) 762-2974, outside the United States at
(317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic books.

Library of Congress Cataloging-in-Publication Data:


Kuo, Way, 1951–
Optimal reliability modeling : principles and applications / Way Kuo,
Ming J. Zuo.
p. cm.
ISBN 0-471-39761-X (acid-free paper)
1. Reliability (Engineering)—Mathematical models. I. Zuo, Ming J.
II. Title.
TA169 .K86 2002
620′.00452—dc21    2002005287
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
CONTENTS

Preface xi

Acknowledgments xv

1 Introduction 1
1.1 Needs for Reliability Modeling, 2
1.2 Optimal Design, 3

2 Reliability Mathematics 5
2.1 Probability and Distributions, 5
2.1.1 Events and Boolean Algebra, 5
2.1.2 Probabilities of Events, 8
2.1.3 Random Variables and Their Characteristics, 11
2.1.4 Multivariate Distributions, 16
2.1.5 Special Discrete Distributions, 20
2.1.6 Special Continuous Distributions, 27
2.2 Reliability Concepts, 32
2.3 Commonly Used Lifetime Distributions, 35
2.4 Stochastic Processes, 40
2.4.1 General Definitions, 40
2.4.2 Homogeneous Poisson Process, 41
2.4.3 Nonhomogeneous Poisson Process, 43
2.4.4 Renewal Process, 44
2.4.5 Discrete-Time Markov Chains, 46
2.4.6 Continuous-Time Markov Chains, 50
2.5 Complex System Reliability Assessment Using Fault
Tree Analysis, 58


3 Complexity Analysis 62
3.1 Orders of Magnitude and Growth, 63
3.2 Evaluation of Summations, 69
3.3 Bounding Summations, 73
3.4 Recurrence Relations, 75
3.4.1 Expansion Method, 77
3.4.2 Guess-and-Prove Method, 80
3.4.3 Master Method, 82
3.5 Summary, 83

4 Fundamental System Reliability Models 85


4.1 Reliability Block Diagram, 86
4.2 Structure Functions, 87
4.3 Coherent Systems, 90
4.4 Minimal Paths and Minimal Cuts, 93
4.5 Logic Functions, 96
4.6 Modules within a Coherent System, 97
4.7 Measures of Performance, 100
4.8 One-Component System, 105
4.9 Series System Model, 107
4.9.1 System Reliability Function and MTTF, 107
4.9.2 System Availability, 110
4.10 Parallel System Model, 112
4.10.1 System Reliability Function and MTTF, 112
4.10.2 System Availability of Parallel System with Two
i.i.d. Components, 114
4.10.3 System Availability of Parallel System with Two
Different Components, 118
4.10.4 Parallel Systems with n i.i.d. Components, 122
4.11 Parallel–Series System Model, 124
4.12 Series–Parallel System Model, 127
4.13 Standby System Model, 129
4.13.1 Cold Standby Systems, 130
4.13.2 Warm Standby Systems, 137

5 General Methods for System Reliability Evaluation 140


5.1 Parallel and Series Reductions, 141
5.2 Pivotal Decomposition, 145
5.3 Generation of Minimal Paths and Minimal Cuts, 148
5.3.1 Connection Matrix, 148
5.3.2 Node Removal Method for Generation of Minimal Paths, 149
5.3.3 Generation of Minimal Cuts from Minimal Paths, 152
5.4 Inclusion–Exclusion Method, 153
5.5 Sum-of-Disjoint-Products Method, 157

5.6 Markov Chain Imbeddable Structures, 164


5.6.1 MIS Technique in Terms of System Failures, 165
5.6.2 MIS Technique in Terms of System Success, 170
5.7 Delta–Star and Star–Delta Transformations, 171
5.7.1 Star or Delta Structure with One Input Node and
Two Output Nodes, 173
5.7.2 Delta Structure in Which Each Node May Be either
an Input Node or an Output Node, 178
5.8 Bounds on System Reliability, 180
5.8.1 IE Method, 181
5.8.2 SDP Method, 182
5.8.3 Esary–Proschan (EP) Method, 183
5.8.4 Min–Max Bounds, 185
5.8.5 Modular Decompositions, 186
5.8.6 Notes, 187

6 General Methodology for System Design 188


6.1 Redundancy in System Design, 189
6.2 Measures of Component Importance, 192
6.2.1 Structural Importance, 192
6.2.2 Reliability Importance, 193
6.2.3 Criticality Importance, 195
6.2.4 Relative Criticality, 197
6.3 Majorization and Its Application in Reliability, 199
6.3.1 Definition of Majorization, 199
6.3.2 Schur Functions, 200
6.3.3 L-Additive Functions, 203
6.4 Reliability Importance in Optimal Design, 206
6.5 Pairwise Rearrangement in Optimal Design, 207
6.6 Optimal Arrangement for Series and Parallel Systems, 209
6.7 Optimal Arrangement for Series–Parallel Systems, 210
6.8 Optimal Arrangement for Parallel–Series Systems, 222
6.9 Two-Stage Systems, 227
6.10 Summary, 230

7 The k-out-of-n System Model 231


7.1 System Reliability Evaluation, 232
7.1.1 The k-out-of-n:G System with i.i.d. Components, 233
7.1.2 The k-out-of-n:G System with Independent Components, 234
7.1.3 Bounds on System Reliability, 250
7.2 Relationship between k-out-of-n G and F Systems, 251
7.2.1 Equivalence between k-out-of-n:G and
(n − k + 1)-out-of-n:F Systems, 251
7.2.2 Dual Relationship between k-out-of-n G and F Systems, 252

7.3 Nonrepairable k-out-of-n Systems, 255


7.3.1 Systems with i.i.d. Components, 256
7.3.2 Systems with Nonidentical Components, 258
7.3.3 Systems with Load-Sharing Components Following
Exponential Lifetime Distributions, 258
7.3.4 Systems with Load-Sharing Components Following
Arbitrary Lifetime Distributions, 262
7.3.5 Systems with Standby Components, 264
7.4 Repairable k-out-of-n Systems, 266
7.4.1 General Repairable System Model, 267
7.4.2 Systems with Active Redundant Components, 270
7.4.3 Systems with Load-Sharing Components, 276
7.4.4 Systems with Both Active Redundant and Cold
Standby Components, 277
7.5 Weighted k-out-of-n:G Systems, 279

8 Design of k-out-of-n Systems 281


8.1 Properties of k-out-of-n Systems, 281
8.1.1 Component Reliability Importance, 281
8.1.2 Effects of Redundancy in k-out-of-n Systems, 282
8.2 Optimal Design of k-out-of-n Systems, 285
8.2.1 Optimal System Size n, 286
8.2.2 Simultaneous Determination of n and k, 290
8.2.3 Optimal Replacement Time, 293
8.3 Fault Coverage, 294
8.3.1 Deterministic Analysis, 295
8.3.2 Stochastic Analysis, 299
8.4 Common-Cause Failures, 302
8.4.1 Repairable System with Lethal Common-Cause Failures, 303
8.4.2 System Design Considering Lethal Common-Cause
Failures, 305
8.4.3 Optimal Replacement Policy with Lethal
Common-Cause Failures, 308
8.4.4 Nonlethal Common-Cause Failures, 310
8.5 Dual Failure Modes, 311
8.5.1 Optimal k or n Value to Maximize System Reliability, 313
8.5.2 Optimal k or n Value to Maximize System Profit, 317
8.5.3 Optimal k and n Values to Minimize System Cost, 319
8.6 Other Issues, 321
8.6.1 Selective Replacement Optimization, 321
8.6.2 TMR and NMR Structures, 322
8.6.3 Installation Time of Repaired Components, 323
8.6.4 Combinations of Factors, 323
8.6.5 Partial Ordering, 324

9 Consecutive-k-out-of-n Systems 325


9.1 System Reliability Evaluation, 328
9.1.1 Systems with i.i.d. Components, 328
9.1.2 Systems with Independent Components, 339
9.2 Optimal System Design, 350
9.2.1 B-Importances of Components, 350
9.2.2 Invariant Optimal Design, 356
9.2.3 Variant Optimal Design, 361
9.3 Consecutive-k-out-of-n:G Systems, 363
9.3.1 System Reliability Evaluation, 363
9.3.2 Component Reliability Importance, 365
9.3.3 Invariant Optimal Design, 366
9.3.4 Variant Optimal Design, 369
9.4 System Lifetime Distribution, 369
9.4.1 Systems with i.i.d. Components, 370
9.4.2 System with Exchangeable Dependent Components, 372
9.4.3 System with (k − 1)-Step Markov-Dependent
Components, 375
9.4.4 Repairable Consecutive-k-out-of-n Systems, 378
9.5 Summary, 383

10 Multidimensional Consecutive-k-out-of-n Systems 384


10.1 System Reliability Evaluation, 386
10.1.1 Special Multidimensional Systems, 386
10.1.2 General Two-Dimensional Systems, 387
10.1.3 Bounds and Approximations, 391
10.2 System Logic Functions, 395
10.3 Optimal System Design, 396
10.4 Summary, 400

11 Other k-out-of-n and Consecutive-k-out-of-n Models 401


11.1 The s-Stage k-out-of-n Systems, 401
11.2 Redundant Consecutive-k-out-of-n Systems, 405
11.3 Linear and Circular m-Consecutive-k-out-of-n Model, 405
11.4 The k-within-Consecutive-m-out-of-n Systems, 407
11.4.1 Systems with i.i.d. Components, 408
11.4.2 Systems with Independent Components, 411
11.4.3 The k-within-(r, s)/(m, n):F Systems, 416
11.5 Series Consecutive-k-out-of-n Systems, 424
11.6 Combined k-out-of-n:F and Consecutive-kc-out-of-n:F
System, 429
11.7 Combined k-out-of-mn:F and Linear (r, s)/(m, n):F System, 432
11.8 Combined k-out-of-mn:F, One-Dimensional Con/kc/n:F, and
Two-Dimensional Linear (r, s)/(m, n):F Model, 435

11.9 Application of Combined k-out-of-n and Consecutive-k-out-of-n


Systems, 436
11.10 Consecutively Connected Systems, 438
11.11 Weighted Consecutive-k-out-of-n Systems, 447
11.11.1 Weighted Linear Consecutive-k-out-of-n:F Systems, 447
11.11.2 Weighted Circular Consecutive-k-out-of-n:F Systems, 450

12 Multistate System Models 452


12.1 Consecutively Connected Systems with Binary System State and
Multistate Components, 453
12.1.1 Linear Multistate Consecutively Connected Systems, 453
12.1.2 Circular Multistate Consecutively Connected Systems, 458
12.1.3 Tree-Structured Consecutively Connected Systems, 464
12.2 Two-Way Consecutively Connected Systems, 470
12.3 Key Concepts in Multistate Reliability Theory, 474
12.4 Special Multistate Systems and Their
Performance Evaluation, 480
12.4.1 Simple Multistate k-out-of-n:G Model, 480
12.4.2 Generalized Multistate k-out-of-n:G Model, 482
12.4.3 Generalized Multistate Consecutive-k-out-of-n:F System, 490
12.5 General Multistate Systems and Their Performance Evaluation, 494
12.6 Summary, 502

Appendix: Laplace Transform 504

References 513

Bibliography 527

Index 539
PREFACE

Recent progress in science and technology has made today’s engineering systems
more powerful than ever. The increasing level of sophistication in high-tech indus-
trial processes implies that reliability problems will not only continue to exist but
are likely to require ever more complex solutions. Furthermore, system failures are
having more significant effects on society as a whole than ever before. Consider, for
example, the impact of the failure or mismanagement of a power distribution system
in a major city, the malfunction of an air traffic control system at an international
airport, failure of a nanosystem, miscommunication in today’s Internet systems, or
the breakdown of a nuclear power plant. As a consequence, the importance of relia-
bility at all stages of modern engineering processes, including design, manufacture,
distribution, and operation, can hardly be overstated.
Today’s engineering systems are also complicated. For example, a space shuttle
consists of hundreds of thousands of components. These components functioning
together form a system. The reliable performance of the system depends on the reli-
able performance of its constituent components. In recent years, statistical and prob-
abilistic models have been developed for evaluating system reliability based on the
components’ reliability, the system design, and the assembly of the components. At
the same time, we should pay close attention to the usefulness of these models. Some
models and published books are too abstract to understand, and others are too basic
to address solutions for today’s systems.
System reliability models are the focus of this book. We have attempted to include
many of the system reliability models that have been reported in the literature with
emphasis on the more significant ones. The models extensively covered include par-
allel, series, standby, k-out-of-n, consecutive-k-out-of-n, multistate, and general sys-
tem models, including some maintainable systems. For each model, we discuss the
evaluation of exact system reliability, the development of bounds for system reliabil-
ity approximation, extensions to dual failure modes and/or multistates, and optimal
system design in terms of the arrangement of components. Both static and dynamic

performance measures are discussed. Failure dependency among components within


some systems is also addressed. In addition, we believe that this is the first time that
multistate system reliability models are systematically introduced and discussed in
a book. The result is a state-of-the-art manuscript for students, system designers,
researchers, and teachers in reliability engineering.
We provide unique interpretations of the existing reliability evaluation methods.
In addition to presenting physical explanations for k-out-of-n and consecutive-k-out-
of-n models that have recently been developed, we also show how these evaluation
methods for assessing system reliability can be applied to several other areas. These
include (1) general network systems that have multistage failure modes (i.e., degra-
dation), (2) tree and reverse tree structures that are widely found in computer soft-
ware development, and (3) applications of stochastic processes where the concern
is location instead of time (Markov chain imbeddable structures). Design issues are
also extensively addressed in this book. Given the same quantity of resources, an op-
timal system design can lead to much higher system reliability. Furthermore, with a
thorough understanding of its design, we can seek better ways to diagnose, maintain,
and improve an existing system. Optimal design by analytical and heuristic method-
ologies is thoroughly discussed here as well.
The book is organized as follows. An introduction is given in Chapter 1. Chapter
2 provides the reliability mathematics: not only the traditional notions of probability
and distributions but also the fundamental stochastic processes related to reliability
evaluation. Chapter 3 briefly discusses complexity
analysis because it is useful in the analysis of algorithm efficiency. In later chapters,
complexity analysis will serve as a base for comparisons of different algorithms for
system reliability evaluation. Chapter 4 starts by introducing coherent reliability sys-
tems along with structure functions, minimal cuts and paths,
logic functions, and performance measures. It then introduces the fundamental sys-
tem reliability models, including parallel, series, combinations of parallel and series,
and standby systems.
General methodologies for system reliability evaluation are covered in Chapter 5.
Commonly used system reliability evaluation techniques such as parallel–series re-
duction, pivotal decomposition, the inclusion–exclusion method, and the sum-of-
disjoint-products method are introduced. Techniques for generation of minimal paths
and/or minimal cuts are also discussed. The delta–star and star–delta transformations
are analyzed as tools for system reliability evaluation. A new technique that has re-
cently been reported in the literature for system reliability evaluation utilizes the
so-called Markov chain imbeddable structures that exist in many system structures.
This new technique is introduced here. Methods for system reliability approximation
are also included in Chapter 5.
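As a back-of-the-envelope illustration of the parallel and series reductions mentioned above, the following sketch (in Python; the function names and component values are ours, not the book's) reduces a small structure of two parallel subsystems connected in series, assuming independent components:

```python
from math import prod

def series_reliability(ps):
    # A series system works only if every component works.
    return prod(ps)

def parallel_reliability(ps):
    # A parallel system fails only if every component fails.
    return 1 - prod(1 - p for p in ps)

# Two parallel subsystems, each with two independent components,
# connected in series; reduce each subsystem first, then the series.
subsystem1 = parallel_reliability([0.9, 0.8])    # 1 - 0.1 * 0.2 = 0.98
subsystem2 = parallel_reliability([0.95, 0.7])   # 1 - 0.05 * 0.3 = 0.985
system = series_reliability([subsystem1, subsystem2])  # 0.98 * 0.985 = 0.9653
```

Such reductions apply only where a block is purely parallel or purely series; structures that cannot be reduced this way call for the other methods of Chapter 5, such as pivotal decomposition.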
In Chapter 6, we introduce general methodologies for optimal system design.
Various measures of component reliability importance are introduced. The concept
of majorization that is useful for optimal system design is discussed. Applications
of importance measures and majorization along with pairwise rearrangements of
components are illustrated for the optimal design of series, parallel, and mixed
series–parallel systems. One may be able to design an optimal reliability system by
examining a carefully selected importance measure. These general design method-
ologies will also be useful for the design of more complicated system structures that
will be covered in later chapters.
Chapters 7 and 8 focus on reliability evaluation and optimal design of the k-out-
of-n system model, respectively. Although the k-out-of-n system is a special one, it
has unique properties that allow it to demonstrate the efficiency of various reliability
evaluation techniques. In Chapter 7, we introduce four different techniques that have
been used in the development of reliability evaluation algorithms for k-out-of-n sys-
tems with independent components. The relationship between an F and a G system is
thoroughly examined. Also extensively analyzed is the performance of nonrepairable
and repairable systems. In Chapter 8, under the context of optimal design, we cover
topics such as component reliability importance, imperfect fault coverage, common-
cause failure, and dual failure modes.
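For the simplest case treated in Chapter 7, the k-out-of-n:G system with i.i.d. components, the reliability reduces to a binomial tail sum. A minimal sketch (our own illustration, not code from the book):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    # A k-out-of-n:G system works iff at least k of its n i.i.d.
    # components (each with reliability p) work: a binomial tail sum.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 2-out-of-3:G system (triple modular redundancy) with p = 0.9:
# 3 * (0.81)(0.1) + 0.729 = 0.972
r = k_out_of_n_reliability(2, 3, 0.9)
```

Note that k = n recovers the series system and k = 1 the parallel system, so both fundamental models of Chapter 4 are special cases.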
In recent years, the consecutive-k-out-of-n systems have been extensively stud-
ied. Chapter 9 covers the consecutive-k-out-of-n models and interpretations of these
models when applied to a number of existing problems that would be difficult to
handle otherwise. Specifically, we introduce both linear and circular systems and
those with nonidentical components as well as approximations and bounds with var-
ious lifetime distributions. In this chapter, we present the notion of optimal configu-
ration and invariant optimal configuration for both the F and G systems. We believe
that readers should gain a thorough understanding of the newly developed paradigms
being applied to consecutive-k-out-of-n systems and their optimal design. Chapter
10 gives results on multidimensional consecutive-k-out-of-n models and optimal de-
sign for such systems, including some time-dependent situations. Chapter 11 focuses
on the combined k-out-of-n and consecutive-k-out-of-n models, including issues of
both system reliability evaluation and optimal design. A case study on applying these
combined k-out-of-n and consecutive-k-out-of-n models in remaining life estimation
of a hydrogen furnace in a petrochemical company is included in this chapter. Other
extended and related system models are briefly outlined in the general discussions
presented in the previous chapters.
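To give a flavor of the algorithms covered in Chapter 9, a widely used recursion for the linear consecutive-k-out-of-n:F system with i.i.d. components can be sketched as follows (the implementation is ours, for illustration only):

```python
def consecutive_k_out_of_n_f(k, n, p):
    # Linear consecutive-k-out-of-n:F system with i.i.d. components:
    # the system fails iff at least k consecutive components fail.
    q = 1 - p
    # R[j] = reliability of the subsystem formed by the first j components;
    # with fewer than k components the system cannot fail.
    R = [1.0] * (n + 1)
    for j in range(k, n + 1):
        if j == k:
            R[j] = 1 - q**k
        else:
            # A new system failure appears only when component j - k works
            # and components j - k + 1, ..., j all fail.
            R[j] = R[j - 1] - p * q**k * R[j - k - 1]
    return R[n]

# Consecutive-2-out-of-3:F with p = 0.9: the system fails iff two
# adjacent components fail, giving R = 0.981.
r = consecutive_k_out_of_n_f(2, 3, 0.9)
```

The recursion runs in O(n) time, in contrast to enumerating all 2^n component states, which is the kind of efficiency comparison the complexity analysis of Chapter 3 makes precise.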
Many modern systems do not simply work or fail. Instead, they may experience
degraded levels of performance before a complete failure is observed. Multistate sys-
tem models allow both the system and its components to have more than two possible
states. Chapter 12 provides coverage of multistate system reliability models. In this
chapter, we first discuss consecutively connected systems and two-way communi-
cation systems wherein the system is binary while the components are multistate.
Then we extend some of the concepts used in binary system reliability theory, such
as relevancy, coherency, minimal path vector, minimal cut vector, and duality into
the multistate context. Some special multistate system reliability models are then in-
troduced. Finally, methods for performance evaluation of general multistate systems
are discussed.
The new topics and unique features of this book on optimal system reliability
modeling include
1. complexity analysis, which provides background knowledge on efficiency
comparison of system reliability evaluation algorithms;
2. Markov chain imbeddable structures, which is another effective tool for system
reliability analysis;
3. majorization, which is a powerful tool for the development of invariant optimal
designs for some system structures;
4. multistate system reliability theory, which is systematically introduced for the
first time in a text on engineering system reliability analysis; and
5. applications of the k-out-of-n and the consecutive-k-out-of-n system models
in remaining life estimation.

This book provides the reader with a complete picture of reliability evaluation and
optimal system design for many well-studied system structures in both the binary and
the multistate contexts. Based on the comparisons of computational complexities of
the algorithms presented in this book, users can determine which evaluation meth-
ods can be most efficiently applied to their own problems. The book can be used as a
handbook for practicing engineers. It includes the latest results and the most compre-
hensive algorithms for system reliability analysis available in the literature as well as
for the optimal design of the various system reliability models.
This book can serve as an advanced textbook for graduate students wishing to
study reliability for the purpose of engaging in research. We outline various mathe-
matical tools and approaches that have been used successfully in research on system
reliability evaluation and optimal design. In addition, a primer on complexity analy-
sis is included. With the help of complexity metrics, we discuss how to analyze and
determine the right algorithm for optimal system design. The background required
for comprehending this textbook includes only calculus, basic probability theory, and
some knowledge of computer programming. There are 263 cited references and an
additional 244 entries in the bibliography that are related to the material presented in
this book.

Way Kuo
Texas A&M University
Ming J. Zuo
The University of Alberta
ACKNOWLEDGMENTS

We acknowledge the National Science Foundation, Army Research Office, Office of


Naval Research, Air Force Office for Scientific Research, National Research Coun-
cil, Fulbright Foundation, Texas Advanced Technology Program, Bell Labs, Hewlett
Packard, and IBM for their funding of W. Kuo’s research activities over the past 25
years. We also acknowledge the Natural Sciences and Engineering Research Coun-
cil of Canada (NSERC), University Grants Council of Hong Kong, and Syncrude
Canada Ltd. for their support of M. J. Zuo’s research activities over the past 15 years.
This manuscript grew out of the authors' collaborative and individual research and
development projects, supported in part by the above agencies.
The first draft of this book has been examined by Chunghun Ha, Wen Luo, and
Jung Yoon Hwang of Texas A&M University. We are very grateful for their valuable
suggestions and criticisms regarding reorganization and presentation of the materials.
We acknowledge input to this manuscript from Jinsheng Huang of the University of
Alberta, Kyungmee O. Kim of Texas A&M University, and Chang Woo Kang of
Corning.
Mary Ann Dickson and the Wiley editorial staff edited the manuscript. Dini S.
Sunardi made a significant effort in formatting the original LaTeX files. Shiang Lee,
Linda Malie, Fan Jiang, Mobin Akhtar, Jing Lin, Xinhao Tian, and Martin Agelin-
chaab have helped with checking the references. Lona Houston handled the corre-
spondence.
In the book, we try hard to give due credit to those who have contributed to the
topics addressed. We apologize if we have inadvertently overlooked specific top-
ics and other contributors. We have obtained permission to use material from the
following IEEE Transactions on Reliability: M. J. Phillips, “k-out-of-n:G systems
are preferable,” IEEE Transactions on Reliability, R-29(2): 166–169, 1980, © 1980
IEEE; T. K. Boehme, A. Kossow, and W. Preuss, “A generalization of consecutive-
k-out-of-n:F system,” IEEE Transactions on Reliability, R-41(3): 451–457, 1992,
© 1992 IEEE; M. Zuo, “Reliability of linear and circular consecutively-connected
systems,” IEEE Transactions on Reliability, R-42(3): 484–487, 1993, © 1993 IEEE;
M. Zuo, “Reliability and design of 2-dimensional consecutive-k-out-of-n systems,”
IEEE Transactions on Reliability, R-42(3): 488–490, 1993, © 1993 IEEE; J. S. Wu
and R. J. Chen, “Efficient algorithms for k-out-of-n & consecutive-weighted-k-out-
of-n:F system,” IEEE Transactions on Reliability, R-43(4): 650–655, 1994, © 1994
IEEE; M. Zuo, D. Lin, and Y. Wu, “Reliability evaluation of combined k-out-of-n:F,
consecutive-kc-out-of-n:F and linear connected-(r, s)-out-of-(m, n):F system struc-
tures,” IEEE Transactions on Reliability, R-49(1): 99–104, 2000, © 2000 IEEE;
J. Huang, M. J. Zuo, and Y. H. Wu, “Generalized multi-state k-out-of-n:G systems,”
IEEE Transactions on Reliability, R-49(1): 105–111, 2000, © 2000 IEEE.
Permissions are granted for use of materials from V. R. Prasad, K. P. K. Nair,
and Y. P. Aneja, “Optimal assignment of components to parallel-series and series-
parallel systems,” Operations Research, 39(3): 407–414, 1991, © 1991, INFORMS;
M. V. Koutras, “On a Markov chain approach for the study of reliability structures,”
Journal of Applied Probability, 33: 357–367, 1996, © 1996, The Applied Probabil-
ity Trust; J. Malinowski and W. Preuss, “Reliability evaluation for tree-structured
systems with multi-state components,” Microelectronics and Reliability, 36(1): 9–
17, 1996, © 1996, Elsevier Science; J. Malinowski and W. Preuss, “Reliability of
reverse-tree-structured systems with multi-state components,” Microelectronics and
Reliability, 36(1): 1–7, 1996, © 1996, Elsevier Science; J. Shen and M. J. Zuo,
“Optimal design of series consecutive-k-out-of-n:G systems,” Reliability Engineer-
ing and System Safety, 45: 277–283, 1994, © 1994, Elsevier Science; M. J. Zuo and
M. Liang, “Reliability of multistate consecutively-connected systems,” Reliability
Engineering and System Safety, 44: 173–176, 1994, © 1994, Elsevier Science; Y. L.
Zhang, M. J. Zuo, and R. C. M. Yam, “Reliability analysis for a circular consecutive-
2-out-of-n:F repairable system with priority in repair,” Reliability Engineering and
System Safety, 68: 113–120, 2000, © 2000, Elsevier Science; and M. J. Zuo, “Re-
liability and component importance of a consecutive-k-out-of-n system,” Microelec-
tronics and Reliability, 33(2): 243–258, 1993, © 1993, Elsevier Science.
1
INTRODUCTION

Reliability is the probability that a system will perform satisfactorily for at least a
given period of time when used under stated conditions. Therefore, the probability
that a system successfully performs as designed is called “system reliability,” or the
“probability of survival.” Often, unreliability refers to the probability of failure. Sys-
tem reliability is a measure of how well a system meets its design objective. A system
can be characterized as a group of stages or subsystems integrated to perform one or
more specified operational functions.
In describing the reliability of a given system, it is necessary to specify (1) the
failure process, (2) the system configuration that describes how the system is con-
nected and the rules of operation, and (3) the state in which the system is defined
to be failed. The failure process describes the probability law governing those fail-
ures. The system configuration, on the other hand, defines the manner in which the
system reliability function will behave. The third consideration in developing the re-
liability function for a nonmaintainable system is to define the conditions of system
failure.
Other measures of performance include failure rate, percentile of system life,
mean time to failure, mean time between failures, availability, mean time between
repairs, and maintainability. Depending on the nature and complexity of the system,
some measures are better used than others. For example, failure rate is widely used
for single-component analysis and reliability is better used for large-system analy-
sis. For a telecommunication system, mean time to failure is widely used, but for a
medical treatment, survivability (reliability) is used. In reliability optimization, the
maximization of percentile life of a system is another useful measure of interest to
the system designers, according to Prasad et al. [196]. For man–machine systems, Abbas and Kuo [1] and Rupe and Kuo [207] report stochastic modeling measures that go beyond reliability as it is traditionally defined.

1.1 NEEDS FOR RELIABILITY MODELING

Many of today’s systems, hardware and software, are large and complex and often
have special features and structures. To enhance the reliability of such systems, one
needs to assess their reliability and other related measures. Furthermore, the system concept extends to service systems and supply chain systems, for which reliability and accuracy are important goals to achieve. There is a need to present state-of-the-art optimal modeling techniques for such assessments.
Recent progress in science and technology has made today’s engineering systems
more powerful than ever. The increasing level of sophistication in high-tech indus-
trial processes implies that reliability problems not only will continue to exist but also
are likely to require ever more complex solutions. Furthermore, reliability failures are
having more significant effects on society as a whole than ever before. Consider, for
example, the impact of the failure or mismanagement of a power distribution system
in a major city, the malfunction of an air traffic control system at an international
airport, failure of a nanosystem, miscommunication in today’s Internet systems, or
the breakdown of a nuclear power plant. The importance of reliability at all stages
of modern engineering processes, including design, manufacture, distribution, and
operation, can hardly be overstated.
Today’s engineering systems are also complicated. For example, a space shuttle
consists of hundreds of thousands of components. These components functioning to-
gether form a system. The reliable performance of the system depends on the reliable
performance of its constituent components. In recent years, statistical and probabilis-
tic models have been developed for evaluating system reliability based on component
reliability, the system design, and the assembly of the components. At the same time,
we should pay close attention to the usefulness of these models. Some models and
published books are too abstract to understand and others are too basic to address
solutions for today’s systems.
System reliability models are the focus of this book. We have attempted to in-
clude all of the system reliability models that have been reported in the literature
with emphasis on the significant ones. The models extensively covered include par-
allel, series, standby, k-out-of-n, consecutive-k-out-of-n, multistate, and general sys-
tem models, including some maintainable systems. For each model, we discuss the
evaluation of exact system reliability, development of bounds for system reliability
approximation, extensions to dual failure modes and/or multistates, and optimal sys-
tem design in terms of arrangement of components. Both static and dynamic perfor-
mance measures are discussed. Failure dependency among components within some
systems is also addressed. In addition, we believe that this is the first time that mul-
tistate system reliability models have been systematically introduced and discussed
in a book. The result is a state-of-the-art reference manuscript for students, system
designers, researchers, and teachers of reliability engineering.

1.2 OPTIMAL DESIGN

Many modern systems do not simply work or fail. Instead, they may experience
degraded levels of performance before a complete failure is observed. Multistate
system models allow both the system and its components to have more than two
possible states. In addition to special multistate system reliability models, methods
for performance evaluation of general multistate systems are discussed.
The new topics and unique features on optimal system reliability modeling in this
book include

1. complexity analysis, which provides background knowledge on efficiency comparison of system reliability evaluation algorithms;
2. Markov chain imbeddable structures, which is another effective tool for system
reliability analysis;
3. majorization, which is another powerful tool for the development of invariant
optimal designs for some systems;
4. multistate system reliability theory, which is systematically introduced for the
first time in a text on engineering system reliability analysis; and
5. applications of the k-out-of-n and the consecutive-k-out-of-n system models
in remaining life estimation.

In the past half century, numerous well-written books on reliability have become available. Among the system-oriented reliability texts, refer to Barlow and
Proschan [22] for a theoretical foundation and to Schneeweiss [220] and Kapur and
Lamberson [114] for a practical engineering approach. The primary goal of the relia-
bility engineer has always been to find the best way to increase system reliability. Ac-
cording to Kuo et al. [132], accepted principles for doing this include (1) keeping the
system as simple as is compatible with the performance requirements; (2) increas-
ing the reliability of the components in the system; (3) using parallel redundancy
for the less reliable components; (4) using standby redundancy, which is switched to
active components when failure occurs; (5) using repair maintenance where failed
components are replaced but not automatically switched in, as in 4; (6) using pre-
ventive maintenance such that components are replaced by new ones whenever they
fail or at some fixed interval, whichever comes first; (7) using better arrangements
for exchangeable components; (8) using large safety factors or a product improve-
ment management program; and (9) using burn-in for components that have high
infant mortality. Implementation of the above steps to improve system reliability
will normally consume resources. A balance between system reliability and resource
consumption is essential. All of these nine methods to enhance system reliability are
based on a solid understanding of the system and system reliability modeling.
This book provides the reader with a complete picture of reliability evaluation and
optimal system design for many well-studied system structures under both the binary
and the multistate contexts. Based on the comparisons of computational complexities of the algorithms presented in this book, users can determine which evaluation
methods can be more efficiently applied to their own problems.
The book includes the latest results and the most comprehensive algorithms for
system reliability analysis available in the literature as well as optimal designs of the
various system reliability models.
2
RELIABILITY MATHEMATICS
In this chapter, we introduce the mathematical concepts and techniques that are rele-
vant to reliability analysis. We first cover the basic concepts of probability, the char-
acteristics of random variables and commonly used discrete and continuous distribu-
tions. The definitions of reliability and of commonly used lifetime distributions are
discussed. Stochastic processes are also introduced here. Finally, we explain how to
assess the reliability of complex systems using fault tree analysis.

2.1 PROBABILITY AND DISTRIBUTIONS

2.1.1 Events and Boolean Algebra


A process of observation or measurement is often referred to as a statistical experi-
ment in statistics terminology. Examples of an experiment include counting the num-
ber of visitors to a theme park during a day, measuring the temperature at a specific
point of a piece of machinery, and flipping a coin. Each experiment has a set of all
possible outcomes, which is called the sample space and denoted by S. The following
lists three experiments and their sample spaces:

1. Counting the number of visitors to a theme park: S = {0, 1, 2, . . . }.


2. Measuring the temperature of machinery: S = {any real number}.
3. Flipping a coin: S = {head, tail}.

When conducting an experiment, we are often interested in knowing whether the outcome is in a subset of the sample space. This subset may represent desired outcomes or undesired outcomes. We will use event to represent the set of outcomes that
is of interest. For example, we may be interested in the following events:
FIGURE 2.1 Venn diagram showing that events E1 and E2 are disjoint.

1. A = The number of visitors to a theme park is greater than 4000.


2. B = The machine temperature is between 40°C and 60°C.
3. C = The coin is head.

An event has occurred if the outcome of the experiment is included in the set of
outcomes of the event.
For a specific experiment, we may be interested in more than one event. For example, we may be interested in the event, denoted by E1, that the measured machine temperature is between 40 and 60°C and the event, denoted by E2, that it is above 100°C. To illustrate the relationship among the sample space S and events E1 and E2, we often use the so-called Venn diagram, as shown in Figure 2.1. We use a rectangle to represent the sample space and circles to represent events. All events must be subsets of the sample space. Based on our definitions of E1 and E2, these two events cannot occur simultaneously. In other words, for a measured temperature value, if it is in E1, then it cannot be in E2, and vice versa. Two events are defined to be mutually exclusive or disjoint if they cannot occur simultaneously or if they do not have any outcome in common. Figure 2.1 shows that events E1 and E2 are disjoint.
The union of two events A and B includes all outcomes that are either in A, or
in B, or in both. We use A ∪ B to indicate the union of events A and B. If we write
C = A ∪ B, then we say that event C occurs if and only if at least one of the two
events A and B occurs. In Figure 2.2, the shaded area represents the union of events
A and B. The intersection of two events A and B includes all outcomes that are in
both A and B. We use A ∩ B, or AB for simplicity, to indicate the intersection of A
and B. If we write C = A ∩ B or C = AB, then event C occurs if and only if both
events A and B occur. The shaded area in Figure 2.3 represents the intersection of
events A and B.

FIGURE 2.2 Venn diagram showing union of events A and B.



FIGURE 2.3 Venn diagram showing intersection of events A and B.

For a given event E, its complement, denoted by Ē, indicates that event E does not occur. Here, Ē includes all outcomes that are in the sample space S but not in event E. For example, if E represents the event that the number of visitors to a theme park is greater than 4000, then Ē represents the event that the number of visitors to the theme park is no more than 4000. It is clear that any event and its complement together comprise the whole sample space. We usually use ∅ to indicate an empty set.
General operations on events, including unions, intersections, and complements, are governed by a set of rules called the laws of Boolean algebra, which are summarized below:

• Commutative law: A ∪ B = B ∪ A, A ∩ B = B ∩ A.
• Associative law: (A ∪ B) ∪ C = A ∪ (B ∪ C), (A ∩ B) ∩ C = A ∩ (B ∩ C).
• Distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
• Identity law: A ∪ S = S, A ∩ S = A, A ∪ ∅ = A, A ∩ ∅ = ∅.
• Complementation law: A ∪ Ā = S, A ∩ Ā = ∅, (Ā)‾ = A.
• Idempotent law: A ∪ A = A, A ∩ A = A.
• De Morgan's law: (A ∪ B)‾ = Ā ∩ B̄, (A ∩ B)‾ = Ā ∪ B̄.
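These laws can be sanity-checked mechanically. The sketch below uses Python's built-in set type, writing the complement of an event E as S − E; the sample space and events are arbitrary illustrative choices, not taken from the text:

```python
# Illustrative sanity check of the Boolean-algebra laws using Python sets.
# S, A, B, C are arbitrary example events, not from the text.
S = set(range(10))          # sample space
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {0, 4, 7}

def complement(E):
    """Complement of event E with respect to the sample space S."""
    return S - E

# De Morgan's law: (A ∪ B)' = A' ∩ B'  and  (A ∩ B)' = A' ∪ B'
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)

# Complementation law: A ∪ A' = S and A ∩ A' = ∅
assert A | complement(A) == S and A & complement(A) == set()
print("all Boolean-algebra identities hold")
```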

2.1.2 Probabilities of Events


There are three approaches to measuring the probability of an event. Each
has its advantages and areas of applications. In the following, we briefly describe
each of them.

• Equally Likely Approach. This approach applies when the total number of
possible outcomes of an experiment is finite and each possible outcome has
an equal chance of being observed. If an event of interest includes n possible
outcomes and the sample space has N possible outcomes, then the probabil-
ity for this event to occur is given by the ratio n/N . This approach finds wide
application in games of chance and making selections based on the generation
of random variables. For example, names are selected randomly in a poll and
numbers are selected randomly in a lottery. This approach cannot be used when
the possible outcomes of an experiment are not equally likely or the number
of possible outcomes is infinite. For example, is it going to rain tomorrow?
What will be tomorrow’s highest temperature reading? These questions can-
not be answered with this approach. This approach has limited applications in
engineering reliability analysis.
• Frequency Approach. According to this approach, the probability of an event
is the proportion of occurrences of the event under similar conditions in the
long run. This approach is the most widely used one. If a manufacturer claims
that its product has a 0.90 probability of functioning properly for one year, this
means that of the new units of this product that are sold for use under specified
conditions, 90% of them will work properly for a full year, while the other
10% will experience some sort of problem within a year. If the weather office
predicts that there is a 30% chance of rain tomorrow, this means that historically
under similar weather conditions 30% of the time it has rained. This approach
is very useful in obtaining reliability measures in engineering as multiple units
of the same product may be tested under the same working conditions. The
proportion of surviving units is used as a measure of the probability of survival
for each unit of this product.
• Subjective Approach. According to this approach, the probability of an event
represents the strength of one’s belief with regard to the uncertainties involved
in the event. Such probabilities are simply one’s “educated” guesses based on
his or her personal experience or expertise. It is used when there are no or few
historical records of such events and setting experiments to observe such events
is too expensive or impossible. This approach is gaining in favor due to the
high speed of technology advancement in today’s world. For example, what is
the probability of success in the development of a new medical procedure using
DNA technology?

One or a combination of the above approaches may be used to assign the prob-
abilities of some basic events of a statistical experiment. Probabilities are values of
a set function. This set function assigns real numbers to various subsets of the sample space S of a statistical experiment. Once such probabilities are obtained, we can
follow some mathematical axioms to derive the probability measures of events that
can be expressed as a function of those basic events. The following axioms are often
used to restrict the ways in which we assign probabilities to events:

1. The probability of any event is a nonnegative real number, that is, Pr(A) ≥ 0
for any subset A of S.
2. The probability of the sample space is 1, that is, Pr(S) = 1.
3. If A1 , A2 , A3 , . . . , are a finite or infinite sequence of disjoint events within S,
then Pr(A1 ∪ A2 ∪ A3 ∪ · · · ) = Pr(A1 ) + Pr(A2 ) + Pr(A3 ) + · · · .

Based on these axioms, we have the following equations for the probability eval-
uation of events:

Pr(∅) = 0, (2.1)
Pr(Ā) = 1 − Pr(A), (2.2)
Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B). (2.3)
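Equation (2.3), the inclusion–exclusion rule, can be illustrated with the equally likely outcomes of a fair die; the events below are our own example, not from the text:

```python
from fractions import Fraction

# Fair-die illustration of inclusion-exclusion, equation (2.3).
# Outcomes 1..6 are equally likely, so Pr(E) = |E| / |S|.
S = {1, 2, 3, 4, 5, 6}

def pr(event):
    return Fraction(len(event & S), len(S))

A = {2, 4, 6}        # even number
B = {4, 5, 6}        # number at least 4

# Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B)
lhs = pr(A | B)
rhs = pr(A) + pr(B) - pr(A & B)
assert lhs == rhs == Fraction(2, 3)
print(lhs)  # 2/3
```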

If A and B are two events and Pr(A) ≠ 0, then the conditional probability of B given A is defined as

Pr(B | A) = Pr(A ∩ B) / Pr(A). (2.4)
From equation (2.4), the probability of the intersection of two events is the following:

Pr(A ∩ B) = Pr(A) Pr(B | A) = Pr(B) Pr(A | B). (2.5)

Two events are defined to be independent if whether one event has occurred or
not does not affect whether the other event will occur or not. If events A and B are
independent, we have

Pr(B | A) = Pr(B), (2.6)
Pr(A | B) = Pr(A), (2.7)
Pr(A ∩ B) = Pr(A) Pr(B). (2.8)

Note that if two events A and B are independent, then the two events Ā and B̄ are also independent.
For a group of n events A1 , A2 , . . . , An to be independent, we require that the
probability of the intersection of any 2, 3, . . . n of these events equal the product
of their respective probabilities. These events may be pairwise independent with-
out being independent. If we have three events A, B, and C, it is possible to have
Pr(A ∩ B ∩ C) = Pr(A) Pr(B) Pr(C) while these three events are not pairwise
independent.

Example 2.1 A manufacturer orders 30, 45, and 25% of the total demand for a
certain part from suppliers A, B, and C, respectively. The defect rates of the units
provided by suppliers A, B, and C are 2, 3, and 4%, respectively. Assume that the
received units of this part are well mixed. What is the probability that a randomly
selected unit is defective and supplied by supplier A? What is the probability that a
randomly selected unit is defective? If a randomly selected unit is defective, what is
the probability that it is provided by supplier A?
In this example, we assume that each unit has an equal chance of being selected.
Define:

• A: a selected unit is from supplier A
• B: a selected unit is from supplier B
• C: a selected unit is from supplier C
• D: a selected unit is defective

Then, we have Pr(A) = 0.30, Pr(B) = 0.45, Pr(C) = 0.25, Pr(D | A) = 0.02,
Pr(D | B) = 0.03, and Pr(D | C) = 0.04. We also know that events A, B, and C
are mutually exclusive and A ∪ B ∪ C = S. The probability that a selected unit is
defective and from supplier A can be calculated as

Pr(A ∩ D) = Pr(A) Pr(D | A) = 0.30 × 0.02 = 0.0060.

The probability that a selected unit is defective can be calculated as

Pr(D) = Pr(D ∩ (A ∪ B ∪ C))
= Pr(D ∩ A) + Pr(D ∩ B) + Pr(D ∩ C)
= Pr(A) Pr(D | A) + Pr(B) Pr(D | B) + Pr(C) Pr(D | C)
= 0.0295.

The probability that a defective unit is from supplier A can be calculated as

Pr(A | D) = Pr(A ∩ D) / Pr(D) = 0.0060 / 0.0295 ≈ 0.2034.

From these calculations, we can say that the defect rate of all received units is 2.95%. If a unit is found to be defective, there is a 20.34% chance that it is from supplier A. Similar calculations and conclusions can be made for other suppliers.
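The arithmetic of Example 2.1 (total probability over the supplier partition, followed by Bayes' rule) can be sketched in a few lines; the variable names below are our own:

```python
# Example 2.1 data: supplier shares and per-supplier defect rates.
share = {"A": 0.30, "B": 0.45, "C": 0.25}
defect_rate = {"A": 0.02, "B": 0.03, "C": 0.04}

# Joint probability Pr(supplier ∩ D) = Pr(supplier) * Pr(D | supplier).
joint = {s: share[s] * defect_rate[s] for s in share}
assert abs(joint["A"] - 0.0060) < 1e-12

# Total probability (decomposition over the partition A, B, C): Pr(D).
p_defective = sum(joint.values())
assert abs(p_defective - 0.0295) < 1e-12

# Bayes' rule: Pr(A | D) = Pr(A ∩ D) / Pr(D).
p_A_given_D = joint["A"] / p_defective
print(round(p_A_given_D, 4))  # 0.2034
```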

In Example 2.1, to find the defective rate of all received units, we divided them
into three mutually exclusive groups, namely, A, B, and C. These three groups repre-
sent the exclusive suppliers for the manufacturer, that is, S = A ∪ B ∪ C. The defect
rate of a unit from each of these groups is known. Thus, we can use conditional prob-
ability to find the overall defective rate of the received units. This approach can be
generalized to the case where there are k mutually exclusive groups, as stated in the
following theorem.

Theorem 2.1 (Decomposition Theorem) If the events B1, B2, . . . , Bk constitute a partition of the sample space S, that is, Bi ∩ Bj = ∅ for all i ≠ j, ∪_{i=1}^k Bi = S, and Pr(Bi) ≠ 0 for i = 1, 2, . . . , k, then for any event A, we have

Pr(A) = Σ_{i=1}^k Pr(Bi) Pr(A | Bi). (2.9)
Theorem 2.1 is called the decomposition theorem, the factoring theorem, or the
rule of elimination. It is used when it is easier to obtain conditional probabilities. It
has wide applications in system reliability evaluation.

2.1.3 Random Variables and Their Characteristics


Random Variables In many applications involving uncertain outcomes, we are often interested only in a certain aspect of the outcomes. For example, what will be the highest temperature tomorrow? What number will appear when a die is rolled? What is the total number when a pair of dice is rolled? How many light tubes have failed when the light fixtures in an office are inspected? A random variable X is a function that assigns a real value to each outcome in the sample space S. We will use a capital letter (e.g., X, Y) to indicate a random variable and a lowercase letter (e.g., x, y) to indicate the specific value that a random variable may take. If X is used to represent the number of a rolled die, then X ≥ 3 represents the event that the number of a rolled die is at least 3 and X ∈ {2, 4, 6} represents the event that the number of the rolled die is even.
Random variables can be divided into two classes, discrete random variables and
continuous random variables. A discrete random variable may take a finite or count-
ably infinite number of values. For example, the number of a rolled die can take only
six possible values and the number of visitors to a theme park during a day can only
be a nonnegative integer. A continuous random variable can take values on a con-
tinuous scale. For example, the highest daily temperature may be any value in the
interval (−∞, ∞), and the life of a light bulb may be any nonnegative real value.

Probability Distribution Function and Cumulative Distribution Function Consider a discrete random variable X. The function given by f(x) = Pr(X = x) for all x is called the probability mass function (pmf) of X. A function f(x) can serve as the pmf of a discrete random variable if and only if it satisfies the following requirements:

1. f(x) ≥ 0 for all x, and
2. Σ_x f(x) = 1.

The cumulative distribution function (CDF) of a discrete random variable X is defined to be

F(x) = Pr(X ≤ x) = Σ_{t≤x} f(t). (2.10)

Let the sample space of X be {x1, x2, . . . , xn}; then the following equation can be used to calculate f(xi) for i = 1, 2, . . . , n from F(x):

f(xi) = F(x1) if i = 1, and f(xi) = F(xi) − F(xi−1) if i = 2, 3, . . . , n. (2.11)

Example 2.2 Consider the following discrete function:

f(k) = p(1 − p)^(k−1), k = 1, 2, . . . ,

where 0 < p < 1. Verify that it qualifies to be the pmf of a discrete random variable X with sample space S = {1, 2, . . . }. Find the CDF of X. What is the probability that X ≥ 10?
First of all, we note that f(k) ≥ 0 for each possible k ∈ S because 0 < p < 1. We need to verify that Σ_{k=1}^∞ f(k) = 1:

Σ_{k=1}^∞ f(k) = Σ_{k=1}^∞ p(1 − p)^(k−1) = p × (1/p) = 1.

Our conclusion is that f(k) does qualify to be a pmf. Noting that

F(k) = Σ_{i=1}^k f(i) = Σ_{i=1}^k p(1 − p)^(i−1) = 1 − (1 − p)^k, k = 1, 2, . . . ,

to find the CDF defined over (−∞, ∞), we use the following function when x is not necessarily a positive integer:

F(x) = 0 if x < 1, and F(x) = F(k) if k ≤ x < k + 1, where k is a positive integer.

Then

Pr(X ≥ 10) = 1 − F(9) = 1 − [1 − (1 − p)^9] = (1 − p)^9.

The pmf used in this example actually describes the geometric distribution, which will be further discussed in Section 2.1.5.
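The algebra in Example 2.2 can be cross-checked numerically by truncating the infinite sum; the value of p below is an arbitrary illustrative choice:

```python
from math import isclose

p = 0.3  # arbitrary illustrative value, 0 < p < 1

def f(k):
    """Geometric pmf: f(k) = p(1 - p)^(k - 1), k = 1, 2, ..."""
    return p * (1 - p) ** (k - 1)

# The pmf sums to 1; the tail beyond k = 500 is negligible here.
total = sum(f(k) for k in range(1, 501))
assert isclose(total, 1.0, abs_tol=1e-9)

# CDF: F(k) = 1 − (1 − p)^k, so Pr(X ≥ 10) = 1 − F(9) = (1 − p)^9.
F = lambda k: 1 - (1 - p) ** k
assert isclose(1 - F(9), (1 - p) ** 9)
assert isclose(sum(f(k) for k in range(10, 501)), (1 - p) ** 9, abs_tol=1e-9)
print((1 - p) ** 9)  # ≈ 0.040353607 for p = 0.3
```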

For a continuous random variable X we define a probability density function (pdf) associated with it. A function f(x) with −∞ < x < ∞ is called a pdf of the continuous random variable X if and only if

Pr(a < X ≤ b) = ∫_a^b f(x) dx (2.12)

for any real constants a and b such that a ≤ b. In words, the probability for the continuous random variable to be in interval (a, b] is measured by the area under the curve of f(x) within this interval. Based on this definition, the probability for a continuous random variable to take any fixed value is equal to zero. As a result,
when one is calculating the probability for a continuous random variable to be in a certain interval, it does not make any difference whether the endpoints of the interval are included or not, as shown below:

Pr(a ≤ X ≤ b) = Pr(a < X < b) = Pr(a ≤ X < b) = Pr(a < X ≤ b). (2.13)

A function f(x) can serve as a pdf of a continuous random variable if and only if it satisfies the following conditions:

1. f(x) ≥ 0 for −∞ < x < ∞, and
2. ∫_{−∞}^{∞} f(x) dx = 1.

The CDF of a continuous random variable is defined as follows:

F(x) = Pr(X ≤ x) = ∫_{−∞}^x f(t) dt, −∞ < x < ∞. (2.14)

Given the CDF of a random variable, we can use the following equations to find the probability that the continuous random variable takes values in interval [a, b] with a ≤ b and the pdf of the random variable:

Pr(a ≤ X ≤ b) = F(b) − F(a), (2.15)

f(x) = dF(x)/dx, (2.16)

where the derivative exists.

Example 2.3 Consider the function f(x):

f(x) = λe^(−λx), x ≥ 0,

where λ > 0 is a constant. Verify that it qualifies to be a pdf of a continuous random variable X, which may take nonnegative values. Find the CDF of X. What is the probability that X takes values in the interval [10, 20]?
It is apparent that f(x) ≥ 0 for all x ≥ 0. Since ∫_0^∞ f(x) dx = 1, as shown below, f(x) qualifies to be a probability density function (pdf):

∫_0^∞ f(x) dx = ∫_0^∞ λe^(−λx) dx = −e^(−λx)|_0^∞ = 1.

The CDF of X is given by

F(x) = ∫_0^x f(t) dt = ∫_0^x λe^(−λt) dt = −e^(−λt)|_0^x = 1 − e^(−λx), x ≥ 0.

When x < 0, F(x) = 0, which is often omitted. Then

Pr(10 ≤ X ≤ 20) = F(20) − F(10) = e^(−10λ) − e^(−20λ).


The pdf used in this example actually describes the exponential distribution, which is the most commonly used distribution in reliability.
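The integrals in Example 2.3 can be cross-checked with simple numerical integration; the rate λ and the truncation bound below are arbitrary illustrative choices:

```python
from math import exp, isclose

lam = 0.1  # arbitrary rate parameter λ > 0

pdf = lambda x: lam * exp(-lam * x)
cdf = lambda x: 1 - exp(-lam * x)      # closed form derived in the example

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The pdf integrates (essentially) to 1 over [0, ∞); truncate at a large bound.
assert isclose(integrate(pdf, 0, 200), 1.0, abs_tol=1e-6)

# Pr(10 ≤ X ≤ 20) = F(20) − F(10) = e^(−10λ) − e^(−20λ).
prob = integrate(pdf, 10, 20)
assert isclose(prob, cdf(20) - cdf(10), abs_tol=1e-6)
assert isclose(prob, exp(-10 * lam) - exp(-20 * lam), abs_tol=1e-6)
print(prob)  # ≈ 0.2325 for λ = 0.1
```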

Whether a random variable is discrete or continuous, its CDF satisfies the follow-
ing conditions:
• F(−∞) = 0,
• F(∞) = 1, and
• F(a) ≤ F(b) for any real numbers a and b such that a ≤ b.

Consider two independent continuous random variables X with CDF G(x) and Y with CDF H(y). Let Z be the sum of these two random variables, that is, Z = X + Y. The CDF of Z, denoted by U(z), can be expressed as

U(z) = Pr(Z ≤ z) = Pr(X + Y ≤ z) = ∫_{−∞}^{∞} Pr(X + Y ≤ z | Y = y) dH(y)
     = ∫_{−∞}^{∞} Pr(X ≤ z − y) dH(y) = ∫_{−∞}^{∞} G(z − y) dH(y)
     = (G ∗ H)(z),

where

(G ∗ H)(z) ≡ ∫_{−∞}^{∞} G(z − y) dH(y) (2.17)

is called the convolution of functions G and H. In words, the CDF of the sum of two independent random variables is equal to the convolution of the CDFs of these two individual random variables. This result can be extended to the sum of n ≥ 2 independent random variables; namely, the CDF of the sum of n independent random variables is equal to the convolution of the CDFs of these n individual random variables. If these individual random variables are independent and identically distributed (i.i.d.) with CDF F(x), we use Fn to indicate the n-fold convolution of F with itself. The CDF of the sum of n i.i.d. random variables with CDF F(x) is the n-fold convolution of F with itself. Generally, the following recursive formula can be used for evaluation of convolutions of a function with itself:

Fn = F ∗ F_{n−1}, n ≥ 2, (2.18)

where F1 = F.
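Equation (2.17) is stated for CDFs of continuous random variables; a discrete analogue is easy to experiment with. The sketch below (our own illustration, not from the text) convolves the pmf of a fair die with itself to obtain the distribution of the sum of two dice, then accumulates it into a CDF:

```python
from fractions import Fraction
from itertools import product

# Discrete analogue of convolution: the pmf of Z = X + Y for independent
# X and Y is the convolution of their pmfs (our own illustration).
die = {k: Fraction(1, 6) for k in range(1, 7)}   # fair-die pmf

def convolve(f, g):
    """Convolution of two discrete pmfs given as {value: probability} dicts."""
    h = {}
    for x, y in product(f, g):
        h[x + y] = h.get(x + y, 0) + f[x] * g[y]
    return h

two_dice = convolve(die, die)
assert two_dice[7] == Fraction(1, 6)             # most likely total: 6/36
assert sum(two_dice.values()) == 1

# The CDF of the sum follows by accumulating the convolved pmf.
cdf_7 = sum(p for z, p in two_dice.items() if z <= 7)
assert cdf_7 == Fraction(7, 12)                  # Pr(Z ≤ 7) = 21/36
print(two_dice[7], cdf_7)  # 1/6 7/12
```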

Median The median of a random variable X is defined to be the value of x such that F(x) = 0.5. The probability for X to take a value less than or equal to its median and the probability for X to take a value greater than or equal to its median are both equal to 50%. The 100pth percentile, denoted by xp, of a random variable X is defined to be the value of x such that F(xp) = p. For example, the 10th percentile of X is denoted by x0.1 and the 90th percentile of X is denoted by x0.9. Thus, x0.5 represents the median of the random variable X.

Expected Value The expected value E(X), or µ, of a random variable X with pdf f(x) is

µ ≡ E(X) = Σ_x x f(x) if X is discrete, or ∫_{−∞}^{∞} x f(x) dx if X is continuous. (2.19)

The expected value E(X) is also referred to as the mean, the average value, or the first moment about the origin of the random variable X.

Moment The expected value of a deterministic function g(X) of a random variable X with sample space S and pdf f(x) is given by

E(g(X)) = Σ_x g(x) f(x) if X is discrete, or ∫_{−∞}^{∞} g(x) f(x) dx if X is continuous. (2.20)

When g(X) in equation (2.20) takes the form of X^r, where r is a nonnegative integer, E(X^r) is called the r th moment about the origin or the ordinary moment of random variable X, often denoted by µ′r. When r = 1, we have the first moment about the origin E(X), which is exactly the expected value of X. Thus, we have µ′1 ≡ µ. Note that µ′0 = 1.
When g(X) in equation (2.20) takes the form of (X − µ)^r, where r is a nonnegative integer and µ is the expected value of X, E((X − µ)^r) is called the r th moment about the mean or the central moment of random variable X, often denoted by µr. Note that µ0 = 1 and µ1 = 0. The second moment about the mean, µ2, is of special importance in statistics because it indicates the spread of the distribution of the random variable. As a result, it is called the variance of the random variable and denoted by σ² or Var(X). The positive square root of the variance is called the standard deviation of the random variable and denoted by σ. The following equation indicates the definition and the calculation of Var(X):

σ 2 ≡ Var(X ) = E(X 2 ) − (E(X ))2




 (x − µ)2 f (x) if X is discrete,
 x
σ 2 = µ2 − µ2 =  ∞ (2.21)


 (x − µ)2 f (x) d x if X is continuous.
−∞
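Equations (2.19) and (2.21) can be illustrated with a small discrete example (a fair die, our own choice), computing the variance both from the central moment E((X − µ)²) and from the shortcut E(X²) − (E(X))²:

```python
from fractions import Fraction

# Fair-die pmf (our own example) to illustrate equations (2.19) and (2.21).
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

mu = sum(x * p for x, p in pmf.items())               # E(X), equation (2.19)
ex2 = sum(x**2 * p for x, p in pmf.items())           # E(X²)

var_central = sum((x - mu) ** 2 * p for x, p in pmf.items())  # E((X − µ)²)
var_shortcut = ex2 - mu**2                                    # E(X²) − (E(X))²

assert mu == Fraction(7, 2)
assert var_central == var_shortcut == Fraction(35, 12)
print(mu, var_central)  # 7/2 35/12
```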

The following summarizes the equations for the evaluation of expectations:

1. E(a) = a, where a is a constant;
2. E(aX) = aE(X), where a is a constant;
3. E(aX + b) = aE(X) + b, where a and b are constants;
4. E(X + Y) = E(X) + E(Y);
5. E(g(X) + h(Y)) = E(g(X)) + E(h(Y)), where g and h are deterministic function forms; and
6. Var(aX + b) = a² Var(X), where a and b are constants.

The variance represents the spread of a distribution. The Chebyshev theorem given below illustrates this point and provides a means of estimating the probability
for a random variable to take values within the neighborhood of its mean.

Theorem 2.2 (Chebyshev Theorem) For any given positive value k, the probability for a random variable to take on a value within k standard deviations of its mean is at least 1 − 1/k². In other words, if µ and σ are the mean and the standard deviation of the random variable X, the following inequality is satisfied:

Pr(|X − µ| < kσ) ≥ 1 − 1/k².

This theorem gives a lower bound on the probability that a random variable will take on a value within a certain number of standard deviations of its mean. This lower bound does not depend on the actual distribution of the random variable. By choosing k to be 2 and 3, respectively, we can see that the probabilities are at least 3/4 and 8/9 that a random variable X will take on a value within two and three standard deviations of its mean, respectively. To find the exact value of such probabilities, we need to know the exact distribution of the random variable.

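To see how conservative the Chebyshev bound can be, the following sketch (our own illustration in plain Python, using a fair six-sided die as the distribution) compares the bound 1 − 1/k² with the exact probability Pr(|X − µ| < kσ) computed directly from the pmf:

```python
from fractions import Fraction

# A fair six-sided die: discrete uniform on 1..6 (illustrative choice only).
values = range(1, 7)
mu = Fraction(sum(values), 6)                            # mean = 7/2
var = sum((Fraction(x) - mu) ** 2 for x in values) / 6   # variance = 35/12
sigma = float(var) ** 0.5

def exact_prob_within(k):
    """Pr(|X - mu| < k*sigma), computed exactly from the pmf."""
    return sum(1 for x in values if abs(x - float(mu)) < k * sigma) / 6

for k in (1.2, 1.5, 2.0):
    bound = 1 - 1 / k**2
    print(f"k={k}: Chebyshev bound {bound:.3f}, exact {exact_prob_within(k):.3f}")
```

For every k the exact probability is at least as large as the bound, often by a wide margin, which is exactly what the theorem guarantees for an arbitrary distribution.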
2.1.4 Multivariate Distributions
Bivariate Distribution In some situations, we are interested in the outcomes of two
aspects of a statistical experiment. We use different random variables to indicate
these different aspects. For example, we may use X and Y to represent the level of
education and the annual salary, respectively, of a randomly selected individual who
lives in a certain area. In this case, we are interested in the distribution of these two
random variables simultaneously. We refer to their joint distribution as a bivariate
distribution.
If X and Y are discrete random variables, f(x, y) = Pr(X = x, Y = y) for each pair (x, y) within the sample space of (X, Y) is called the joint probability distribution function, or simply the joint pdf, of X and Y. A bivariate function f(x, y) can serve as the joint pdf of discrete random variables X and Y if and only if it satisfies the following conditions:

1. f(x, y) ≥ 0 for each pair (x, y) within the range of the random variables, and
2. $\sum_x \sum_y f(x, y) = 1$, where the summations cover all possible values of x and y within the range of the random variables.
PROBABILITY AND DISTRIBUTIONS 17
The joint CDF of discrete random variables X and Y, denoted by F(x, y), over all possible pairs of real values is defined as

$$F(x, y) = \Pr(X \le x, Y \le y) = \sum_{s \le x} \sum_{t \le y} f(s, t), \qquad -\infty < x < \infty,\ -\infty < y < \infty, \tag{2.22}$$

where f(s, t) is the value of the joint pdf of X and Y at point (s, t).
If X and Y are continuous random variables, f(x, y) defined over the two-dimensional real space is the joint pdf of random variables X and Y if and only if

$$\Pr((X, Y) \in A) = \iint_A f(x, y)\, dx\, dy \tag{2.23}$$

for any region A in the two-dimensional real space. A bivariate function f(x, y) can serve as a joint pdf of two continuous random variables X and Y if and only if it satisfies

1. f(x, y) ≥ 0 for −∞ < x < ∞ and −∞ < y < ∞, and
2. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, dx\, dy = 1$.
The joint CDF of continuous random variables X and Y, denoted by F(x, y), over all possible pairs of real values is defined as

$$F(x, y) = \Pr(X \le x, Y \le y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f(s, t)\, dt\, ds, \qquad -\infty < x < \infty,\ -\infty < y < \infty, \tag{2.24}$$

where f(s, t) is the value of the joint pdf of X and Y at point (s, t). For continuous random variables, we have

$$f(x, y) = \frac{\partial^2}{\partial x\, \partial y} F(x, y). \tag{2.25}$$
The bivariate CDF of both discrete and continuous random variables satisfies the following conditions:

1. F(−∞, −∞) = 0,
2. F(∞, ∞) = 1, and
3. if a < b and c < d, then F(a, c) ≤ F(b, d).
Even when there is more than one random variable of interest in an experiment, we may want to know the distribution of one of the random variables irrespective of what values the other random variables may take. In this case, we are interested in the marginal distribution of a single random variable. If X and Y are random variables with joint pdf f(x, y), then the marginal pdf of X is given by
$$g(x) = \begin{cases} \displaystyle\sum_y f(x, y) & \text{if } X \text{ and } Y \text{ are discrete}, \\[4pt] \displaystyle\int_{-\infty}^{\infty} f(x, y)\, dy & \text{if } X \text{ and } Y \text{ are continuous}, \end{cases} \tag{2.26}$$

for each x in the range of X, and the marginal pdf of Y is given by

$$h(y) = \begin{cases} \displaystyle\sum_x f(x, y) & \text{if } X \text{ and } Y \text{ are discrete}, \\[4pt] \displaystyle\int_{-\infty}^{\infty} f(x, y)\, dx & \text{if } X \text{ and } Y \text{ are continuous}, \end{cases} \tag{2.27}$$

for each y in the range of Y. Once the marginal pdf of a random variable is obtained, we can use it to find the CDF of that random variable while ignoring all other variables.
Since the random variables of an experiment may depend on each other, we are
sometimes interested in the conditional distribution of one random variable given
that the other random variables have taken certain values or certain ranges of values.
If f(x, y) is the joint pdf of (discrete or continuous) random variables X and Y, g(x) is the marginal pdf of X, and h(y) is the marginal pdf of Y, the function given by

$$g(x \mid y) = \frac{f(x, y)}{h(y)}, \qquad h(y) \neq 0, \tag{2.28}$$

for each x within the range of X is called the conditional pdf of X given Y = y. Correspondingly, the function given by

$$h(y \mid x) = \frac{f(x, y)}{g(x)}, \qquad g(x) \neq 0, \tag{2.29}$$

for each y within the range of Y is called the conditional pdf of Y given X = x.
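To make equations (2.26)–(2.29) concrete, here is a small sketch in plain Python (the joint pmf values are invented purely for illustration) that computes marginal and conditional pmfs from a discrete joint pmf:

```python
# A made-up joint pmf f(x, y) for discrete X in {0, 1} and Y in {0, 1, 2};
# the probabilities are illustrative only and sum to 1.
f = {
    (0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
    (1, 0): 0.15, (1, 1): 0.25, (1, 2): 0.20,
}

xs = sorted({x for (x, _) in f})
ys = sorted({y for (_, y) in f})

# Marginal pmfs, equations (2.26) and (2.27): sum the joint pmf over the
# other variable.
g = {x: sum(f[(x, y)] for y in ys) for x in xs}   # marginal pmf of X
h = {y: sum(f[(x, y)] for x in xs) for y in ys}   # marginal pmf of Y

# Conditional pmf of X given Y = y, equation (2.28): f(x, y) / h(y).
def g_given(x, y):
    return f[(x, y)] / h[y]

print(g)               # marginal of X
print(g_given(0, 1))   # Pr(X = 0 | Y = 1)
```

Note that for each fixed y, the conditional pmf g(x | y) again sums to 1 over x, as any pmf must.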
For two random variables X and Y with joint pdf f(x, y), the expected value of a function of these two random variables, g(X, Y), is given by

$$E(g(X, Y)) = \begin{cases} \displaystyle\sum_x \sum_y g(x, y) f(x, y) & \text{if } X \text{ and } Y \text{ are discrete}, \\[4pt] \displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y)\, dx\, dy & \text{if } X \text{ and } Y \text{ are continuous}. \end{cases} \tag{2.30}$$
Let µ_X and µ_Y denote the expected values of random variables X and Y, respectively. The covariance of X and Y, denoted by Cov(X, Y) or σ_XY, is given by

$$\sigma_{XY} \equiv \operatorname{Cov}(X, Y) = E\big((X - \mu_X)(Y - \mu_Y)\big) = E(XY) - \mu_X \mu_Y. \tag{2.31}$$

The correlation coefficient of two random variables, denoted by ρ_XY, is given by

$$\rho_{XY} = \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}. \tag{2.32}$$
The correlation coefficient takes values between −1 and 1. A positive value indicates that X and Y are positively correlated, and a negative value indicates that X and Y are negatively correlated. A positive correlation between two random variables indicates a high probability that large values of one variable will occur together with large values of the other; a negative correlation indicates a high probability that large values of one variable will occur together with small values of the other.
Two random variables X and Y are said to be independent if and only if their joint pdf is equal to the product of the marginal pdfs of the two random variables. Equivalently, two random variables are independent if and only if the conditional pdf of each random variable is equal to its own marginal pdf irrespective of what value the other random variable takes. If X and Y are independent, we also have

$$E(XY) = E(X)E(Y), \tag{2.33}$$

$$\operatorname{Cov}(X, Y) = 0. \tag{2.34}$$
Multivariate Distribution The definitions provided above for two random variables can be generalized to the multivariate case. The joint pdf and the joint CDF of n discrete random variables X_1, X_2, ..., X_n defined over their sample spaces are given, respectively, by

$$f(x_1, x_2, \ldots, x_n) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n),$$

$$F(x_1, x_2, \ldots, x_n) = \sum_{s \le x_1} \sum_{t \le x_2} \cdots \sum_{u \le x_n} f(s, t, \ldots, u).$$

The joint CDF of n continuous random variables X_1, X_2, ..., X_n defined over their sample spaces is given by

$$F(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1} \int_{-\infty}^{x_2} \cdots \int_{-\infty}^{x_n} f(t_1, t_2, \ldots, t_n)\, dt_n\, dt_{n-1} \cdots dt_2\, dt_1.$$
When dealing with more than two random variables, we may also be interested in the joint marginal distribution of several of the random variables. For example, suppose that f(x_1, x_2, ..., x_n) is the joint pdf of discrete random variables X_1, X_2, ..., X_n (n > 3). The joint marginal pdf of random variables X_1, X_2, X_3 is given by

$$m(x_1, x_2, x_3) = \sum_{x_4} \sum_{x_5} \cdots \sum_{x_n} f(x_1, x_2, \ldots, x_n)$$

for all values of x_1, x_2, and x_3 within the ranges of X_1, X_2, and X_3, respectively. The joint marginal CDF of several random variables can be defined in a similar manner.
The joint conditional distribution of several random variables can also be defined. For example, suppose that f(x_1, x_2, x_3, x_4) is the joint pdf of discrete random variables X_1, X_2, X_3, X_4 and m(x_1, x_2, x_3) is the joint marginal pdf of random variables X_1, X_2, X_3. Then, the joint conditional pdf of X_4, given that X_1 = x_1, X_2 = x_2, and X_3 = x_3, is given by

$$q(x_4 \mid x_1, x_2, x_3) = \frac{f(x_1, x_2, x_3, x_4)}{m(x_1, x_2, x_3)}, \qquad m(x_1, x_2, x_3) \neq 0.$$
If X_1, X_2, ..., X_n are independent, then

$$E(X_1 X_2 \cdots X_n) = E(X_1) E(X_2) \cdots E(X_n), \tag{2.35}$$

$$\operatorname{Var}\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2 \operatorname{Var}(X_i), \tag{2.36}$$

where the a_i are constants. Note that while equations (2.35) and (2.36) are necessary conditions for the random variables to be independent, random variables satisfying these conditions are not necessarily independent.
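The caution above can be seen with a tiny example of our own (not from the text): let X be uniform on {−1, 0, 1} and Y = X². Then E(XY) = E(X)E(Y) and Cov(X, Y) = 0, yet X and Y are clearly dependent, since the joint pmf does not factor into the product of the marginals:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X^2: an illustrative pair that is
# uncorrelated but not independent.
support = [(-1, 1), (0, 0), (1, 1)]       # (x, y) pairs, each with pmf 1/3
p = Fraction(1, 3)

E_X  = sum(p * x for x, _ in support)      # 0
E_Y  = sum(p * y for _, y in support)      # 2/3
E_XY = sum(p * x * y for x, y in support)  # 0
cov = E_XY - E_X * E_Y                     # 0, so X and Y are uncorrelated

# Independence would require Pr(X = x, Y = y) = Pr(X = x) Pr(Y = y) for
# every pair; it fails, e.g., at (x, y) = (0, 1):
pr_joint_0_1 = Fraction(0)                 # the pair (0, 1) is impossible
pr_x0 = Fraction(1, 3)                     # Pr(X = 0)
pr_y1 = Fraction(2, 3)                     # Pr(Y = 1)
print(cov, pr_joint_0_1 == pr_x0 * pr_y1)
```

Here the covariance is exactly zero while the factorization test fails, so the converse of (2.34) does not hold.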
2.1.5 Special Discrete Distributions
In this section, we review some commonly used discrete distributions. The statisti-
cal experiments under which such distributions are derived will be discussed. The
characteristics of these distributions will be derived or simply given. For a discrete
random variable, it is often easier to use its pmf to characterize its distribution.
Discrete Uniform Distribution Consider a random variable X that can take k distinct possible values. If each value has an equal chance of being taken by X, we say that X has a discrete uniform distribution. The pmf of X can be written as

$$\Pr(X = x) = \frac{1}{k} \qquad \text{for } x = x_1, x_2, \ldots, x_k, \tag{2.37}$$

where x_i ≠ x_j when i ≠ j. The mean and variance of such a random variable can be expressed as

$$\mu = \frac{1}{k} \sum_{i=1}^{k} x_i, \tag{2.38}$$

$$\sigma^2 = \frac{1}{k} \sum_{i=1}^{k} (x_i - \mu)^2. \tag{2.39}$$
In the special case of x_i = i for i = 1, 2, ..., k, we have

$$\Pr(X = x) = \frac{1}{k} \qquad \text{for } x = 1, 2, \ldots, k, \tag{2.40}$$

$$\mu = \frac{k + 1}{2}, \tag{2.41}$$

$$\sigma^2 = \frac{k^2 - 1}{12}. \tag{2.42}$$
The uniform distribution truly reflects the equally-likely interpretation of probability. In practice, however, the assumption of equal likelihood is often simply adopted in statistical experiments.
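As a quick sanity check (our own, in plain Python), the closed forms (2.41) and (2.42) for x_i = i can be compared with the defining sums (2.38) and (2.39):

```python
from fractions import Fraction

def uniform_mean_var(k):
    """Mean and variance of the discrete uniform distribution on 1..k,
    computed directly from equations (2.38) and (2.39)."""
    xs = range(1, k + 1)
    mu = Fraction(sum(xs), k)
    var = sum((Fraction(x) - mu) ** 2 for x in xs) / k
    return mu, var

for k in (2, 6, 10):
    mu, var = uniform_mean_var(k)
    # Closed forms (2.41) and (2.42):
    assert mu == Fraction(k + 1, 2)
    assert var == Fraction(k * k - 1, 12)
print("closed forms (2.41) and (2.42) verified for k = 2, 6, 10")
```

Exact rational arithmetic via `Fraction` avoids any floating-point tolerance in the comparison.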
Bernoulli Distribution Consider a statistical experiment that has only two possible outcomes, which we will call "success" and "failure." The probabilities of observing success and failure in the experiment are denoted by p and 1 − p, respectively. The random variable X is used to count the number of successes in the experiment. Clearly, X can take only one of two possible values, 0 or 1. The pmf of X is given by

$$\Pr(X = x) = p^x (1 - p)^{1 - x}, \qquad x = 0, 1. \tag{2.43}$$

A random variable that has such a pmf is said to follow the Bernoulli distribution. The corresponding statistical experiment just described is referred to as a Bernoulli trial.

The mean and variance of a Bernoulli random variable X are

$$\mu = p, \tag{2.44}$$

$$\sigma^2 = p(1 - p). \tag{2.45}$$
Binomial Distribution Suppose that we are to conduct n Bernoulli trials, where n is fixed. All of these trials are independent, and the probability of observing a success in each trial is a constant denoted by p. We are interested in the total number of successes that will be observed in these n trials. Let X_i be the Bernoulli random variable representing the number of successes observed in the ith trial for i = 1, 2, ..., n. Then, the total number of successes in n trials, denoted by X, can be expressed as

$$X = X_1 + X_2 + \cdots + X_n.$$

Based on our assumptions, the X_i's are i.i.d. with the pmf given in equation (2.43). If we complete n trials, we get a specific sequence of n numbers consisting of 0's and 1's. The probability of getting a specific sequence with exactly x 1's is equal to $p^x (1 - p)^{n - x}$. The total number of sequences of 0's and 1's with exactly x 1's is equal to $\binom{n}{x}$. Thus, the probability of observing x 1's in any order in n Bernoulli trials is equal to $\binom{n}{x} p^x (1 - p)^{n - x}$. As a result, we can
express the pmf of the random variable X as

$$\Pr(X = x) = \binom{n}{x} p^x (1 - p)^{n - x}, \qquad x = 0, 1, 2, \ldots, n. \tag{2.46}$$

A random variable with such a pmf is said to follow the binomial distribution with parameters n and p. The name "binomial distribution" comes from the fact that the values of the pmf of a binomial random variable for x = 0, 1, 2, ..., n are the successive terms of the binomial expansion of $[(1 - p) + p]^n$, as shown below:

$$[(1 - p) + p]^n = \binom{n}{0} p^0 (1 - p)^n + \binom{n}{1} p^1 (1 - p)^{n - 1} + \binom{n}{2} p^2 (1 - p)^{n - 2} + \cdots + \binom{n}{n} p^n (1 - p)^0.$$

Since the left-hand side of the above equation is equal to 1, this verifies that the sum of the binomial pmf over all possible x values is equal to 1.

The mean and variance of a binomial random variable are given below; their derivations are left as exercises:

$$\mu = np, \tag{2.47}$$

$$\sigma^2 = np(1 - p). \tag{2.48}$$
In deriving the binomial distribution, we defined X to be the number of successes in n Bernoulli trials. We could instead have used Y to denote the number of failures in n Bernoulli trials, in which case X + Y = n. Since X has the binomial distribution with parameters n and p, Y has the binomial distribution with parameters n and 1 − p, and the probability that X takes the value k is equal to the probability that Y takes the value n − k for k = 0, 1, 2, ..., n. The binomial distribution has applications in sampling with replacement.

If X_1 and X_2 are independent random variables following the binomial distributions with parameters (n_1, p) and (n_2, p), respectively, the sum of these two random variables follows the binomial distribution with parameters n_1 + n_2 and p.
Example 2.4 One hundred fluorescent light tubes are used for lighting in a building. They are inspected every 30 days, and failed tubes are replaced at inspection times; thus, after each inspection, all 100 tubes are working. The failures of the light tubes are statistically independent, and the probability for a working tube to last 30 days is constant at 0.80. What is the probability that at least 10 tubes will have failed by the next inspection time? What is the average number of failed tubes at each inspection time? What is the interval such that the probability that the number of failed tubes at each inspection time falls within this interval is at least 0.75?

In this example, we use X to indicate the number of failed tubes at the time of the next inspection given that all 100 tubes are working properly at the end of the previous inspection. Then X follows the binomial distribution with n = 100 and p = 1 − 0.8 = 0.2:

$$\Pr(X \ge 10) = 1 - \Pr(X \le 9) = 1 - \sum_{i=0}^{9} \Pr(X = i) = 1 - \sum_{i=0}^{9} \binom{100}{i} (0.2)^i (0.8)^{100 - i} \approx 1 - 0.0023 = 0.9977.$$

The average number of failed tubes at each inspection time is µ = n × p = 20. The standard deviation of X is $\sigma = \sqrt{np(1 - p)} = 4$. Using Chebyshev's theorem, we know that the number of failed tubes at each inspection has at least a 0.75 probability of being within two standard deviations of its mean. Thus, the interval for X is

$$|X - 20| < 8 \qquad \text{or} \qquad 12 < X < 28.$$

This means that there is at least a 75% chance that the number of failed tubes observed at each inspection will be somewhere between 12 and 28. This range can help the inspector bring enough light tubes to replace the failed ones.
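The numbers in Example 2.4 can be checked with a few lines of plain Python (standard library only):

```python
from math import comb, sqrt

n, p = 100, 0.2   # 100 tubes, each fails within 30 days with probability 0.2

# Pr(X >= 10) = 1 - Pr(X <= 9), with X ~ Binomial(100, 0.2)
pr_at_most_9 = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(10))
pr_at_least_10 = 1 - pr_at_most_9

mu = n * p                       # mean number of failed tubes: 20
sigma = sqrt(n * p * (1 - p))    # standard deviation: 4

print(f"Pr(X >= 10) = {pr_at_least_10:.4f}")   # matches the 0.9977 in the text
print(f"mu = {mu}, sigma = {sigma}")
print(f"Chebyshev (k = 2) interval: ({mu - 2*sigma:.0f}, {mu + 2*sigma:.0f})")
```

The exact tail sum confirms the approximation 1 − 0.0023 used above, and the Chebyshev interval (12, 28) falls out of µ ± 2σ directly.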
Negative Binomial Distribution and Geometric Distribution Consider a statistical experiment wherein repeated Bernoulli trials are performed. The probability of success in each trial is a constant p. We are interested in the number of trials needed to observe the kth success, denoted by X. For X to take a value of x, x = k, k + 1, k + 2, ..., we must observe k − 1 successes in the first x − 1 trials, and a success must be realized in the xth trial. Thus, the pmf of X is

$$\Pr(X = x) = \binom{x - 1}{k - 1} p^k (1 - p)^{x - k}, \qquad x = k, k + 1, k + 2, \ldots. \tag{2.49}$$

A random variable that has such a pmf is said to have a negative binomial distribution with parameters k and p. The name "negative binomial distribution" comes from the fact that the values of the pmf given in equation (2.49) are the successive terms of the binomial expansion of $\left(\frac{1}{p} - \frac{1 - p}{p}\right)^{-k}$. The negative binomial distribution is also referred to as the binomial waiting-time distribution or as the Pascal distribution.

If the pmf of a negative binomial random variable X with parameters k and p is denoted by Pr(X = x | k, p) and that of a binomial random variable Y with parameters x and p is denoted by Pr(Y = k | x, p), the following equation describes the relationship between them:

$$\Pr(X = x \mid k, p) = \frac{k}{x} \Pr(Y = k \mid x, p). \tag{2.50}$$
The mean and variance of the negative binomial distribution are as follows:

$$\mu = \frac{k}{p}, \tag{2.51}$$
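A short sketch (our own illustration, in plain Python) evaluates the pmf (2.49) and checks the identity (2.50) linking it to the binomial pmf (2.46):

```python
from math import comb

def neg_binomial_pmf(x, k, p):
    """Pr(X = x) from equation (2.49): probability that the k-th success
    occurs on trial x."""
    return comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

def binomial_pmf(y, n, p):
    """Pr(Y = y) from equation (2.46) with n trials."""
    return comb(n, y) * p**y * (1 - p)**(n - y)

k, p = 3, 0.4   # illustrative parameter choice
for x in range(k, 10):
    lhs = neg_binomial_pmf(x, k, p)
    rhs = (k / x) * binomial_pmf(k, x, p)   # relationship (2.50)
    assert abs(lhs - rhs) < 1e-12

# The pmf sums to 1 over x = k, k+1, ...; the truncated tail is negligible:
total = sum(neg_binomial_pmf(x, k, p) for x in range(k, 200))
print(round(total, 6))
```

The identity (2.50) holds exactly because $\frac{k}{x}\binom{x}{k} = \binom{x-1}{k-1}$, which the loop verifies numerically for several values of x.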
Exploring the Variety of Random
Documents with Different Content
The Project Gutenberg eBook of
Savolaisjuttuja
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.

Title: Savolaisjuttuja
Seitsemän murrehumoreskia

Author: Santeri Rissanen

Release date: May 24, 2024 [eBook #73688]

Language: Finnish

Original publication: Hämeenlinna: Arvi A. Karisto, 1910

Credits: Tapio Riikonen

*** START OF THE PROJECT GUTENBERG EBOOK


SAVOLAISJUTTUJA ***
SAVOLAISJUTTUJA

Seitsemän murrehumoreskia

Kirj.

SAV'OJAN SAMPPA [Santeri Rissanen]

Hämeenlinnassa, Arvi A. Karisto Osakeyhtiö, 1910.


SISÄLLYS:

Herra rokuristi
Erreys
Herra pormestar ja muut mussiikkihenkilöt
Halierakesänä
Juljaanan Junnupetter
Kello
Kilipailu sehhii
HERRA ROKURISTI

Tässä kolomannen kesän kekkeimmilläsä männä hörötettiin myö


kaikki Iirannan immeiset sälleeväläisten lottierinkiin Kölomsopen
Kolehmallaan. Ol hyyrätty se yhtijön Serupaapel-massiina ja sillä
aijottiin oikein olettain ajjoo kuin ainai asjan ymmärtävät perille.

Ol jo kiännetty Ihalanrannassa vöörpuel seleville vesille ja


pakattiin, kun äkkiästäsä pyräht, niinkun pelemahtaa ihan itestäsä
piiskuttava pyy oksalle ojentelemmaan, Ihalan kommeelle
koivukuistikolle uus pasassier, joka nästyykilläsä viuhto, että ei pie
jättee, kun hännii ehättäs yhteen meinaan. Jo näk kapteen Valakos-
vainoo koko hötäkän ja komens komennusruuttiin, että:

— Taavetti! Elähän ennee pakkoo! Näkkyy tuolla kuistilla vielä


muuvvan ojentelevan. Näkkyy olevan Kasreenin uus rokuristi, se
Drokopee, että jos paat pikkuruikkusen rämiä, niin minä ruoroon
vöörin ryville. Sinä, Pekka, reestooppas suaha ne rossit rämille,
toppoot sinne tuplapollariin, niin suat lankongin maihin. En viihi
ennee alakoo ahteria ruveta kiäntelemmään.

— Toppoo? Helekutillako minä toppoon, kun ei oo touvia!

— Ka, ota sitte sesta ja niin suahaan pasassier paattiin.


Hikisenä essiinty sitte tulija Serupaapelin sylliin. Meijän mualaisten
silimä oikein otti ja sirkes häntä silmätessä. Köyhä ei niät usko
kahtomata. Hään oi semmonen sutjakkaruumiinen, sattiinvuer
palttoo piälläsä, koppakuerhattu piässäsä, raaku ja rettiinskravatti
kaulassa ja riihin alla liivin aukeemassa kiils kommee köyhäntoistus
(= simpsetti). Kun alako teetätä ihteesä, niin aina käsvarsmeiningit,
niät mukit (= tärkkiniekka mansneerit) hihasta valu. Teetattuasa
ihtesä alako hään kävellä hyvin tärkeenä takilla.

— Kukkaan soon tämä herra? hymäht semmosella itikan surinalla


mun korvaen Koeraharjun Penu.

— Niin ettäkö tuo, joka tuossa tempoo? Se on se herra rokurist,


Brokopee, Kasreenin puukhollar. Kuuluu olevan yks ruukin
vöörvalttarin poika etelästä.

— Hyh, vai vöörvalttarin poika. Hiijen hömpsäkkä mies. Kato, kato


kun uljaasti pokkaileiksen, kato ryökästä.

— Oo osottelematal Se on herra ja herralla on jalannousu


juoksevamp kun meillä.

— Hittooko se nyt näin kesäkuumalla tuolla varaplyymillä tekköö,


jota vasemmuksessasa viehkuroittaa, ja kalloossit on jalassa…

— Hittooko sinä siitä vatupassoot? Ne on niitä herrain hökötyksiä.

Siihen suapu meijän joukkoon Valakonennii ja hilijoo mulle kähis


kämmenesä takkoo:

— Soon käyny Ihalassa suluhasissa! Kuuluu kiertävän sitä


Amantoo.
— Niin häh? Amantoo? Tuoko Grokopee?

Vuan Amantan rokurist oi painannna salonkiin herrasväen


resunierinkiin…

*****

— Että jos vieraat viihtii ruveta tämän ohjelman peräks pyörimään,


niin ison tuvan lattia on levvsellään!

Tuvassa jo temmas viuluvasa Pekka, se Kyyrylän isäntä, ja myö


muut kokkoonnuttiin sinne tuvan kaihteeseen. Vastapuolella hämötti
hameväk, kaikessa kommeuvessasa, niin että ol Toholanmäen Tiina
uuvessa juussihiussissasa, jonka eessä rehoitti rimsulaita vyöliina,
Taanielin Annakaisa kukkaniekka pumasiinissa ja valakeessa
vörkkelissä, Ihotunlahen Iitu ihan punasissasa ja kirkherran Riitta
mansnierissa ja solokniekka peltissä, kun tuas Altalon Miinalla ol
vuan musta, sissään ottamalla vatvattu leninki. Miesten puolella ol
meitä tuas Kolehmalan poika kumkauluksessa ja topatussa ravatissa
ja suapasvarret housunkauluksen piällä, Kokkolan Matti rettinki
suussa, Koeraharjun Penu ja monj muu.

— Nyt ottaa ja lähtöö Toonan valssi, että jos yritättä! ilimotti Pekka
ja alako vaivertoo viuluvasa. Ensmäinen, joka astu palakkiin poikki
taamiin puolelle, ol herra rokurist. Ol heittännä palttoosa ja
porskuttel ehassa verassa. Pännätaskuusa ol pistännä puelsilikki
nästyykin. Hään astu Annakaisan etteen ja kumartoo kaikerreltuasa
ja sipastuasa vasemmuksellasa puelkuaren lattiaa, että tomu
ikkunaan hiekkana räsäht, hään vuati tyttöö taamiksesa. Vuan
Annakaisoo iletti, ettei osanna kun niijata nöksäyttöö ja hupeltoo:

— Minä en ossoo valsneerata…


Vuan osashan se, ja osas muuttii. Alakuun kun piästiin, niin män
yhtenä meininkinä koko tupa. Rokurist tuntu kysyvän, mistee hänen
taamisa ol muka kotosi.

— Iliman aikojan vuan Iimäeltä.

Pantiin sitte polokkoo, sottiisia ja reutspolokkoo ja vingerkkoo. Sen


jäläkeen alettiin astella rinkiä ja laalettiin:

»Syvän sillä on kun pehmitetty taula, jolla hään mulle


rakkavuutta laulaa! Ja käjet sillä on niin valakeet ja viinit,
joilla, hään kieto mun ympäri kiini.»

Rokurist ol sillä aijalla männy pihalle jiähyttelemmään ja pyys


Penua mukkaasa. Ol sitte halakopinon kuppeella ottanna
povtaskustasa oikeeta eekumia, semmosta siiniä viiniä ja ryypättyäsä
käskennä Pennunnii painaltoo.

— Mittee se herra Trokopee nyt mulle… No herrapa teitä


heilauttakkoon…!

*****

Sol poikoo se Rokopee. Ennen min en oo ottanna muihin


herrastanssiin ossoo, kun jalakan niät ei oo oikein ottavoo luatua,
vuan kun rokurist ehoiti, että jos ois nyt ransneerata, niin en oikein
osanna olla alottamata. Hään sito ensin itellesä nästyykin
käsvarteesa, että niät jokkainen äksentierois hänet ransnieringin
vikulieriks ja sitte, kun jokkaisella ol taami ja kavaljeer, myö ruvettiin
kahtapuolta pirttiä ja otettiin tuuria, se on ens hymmäys koko
hökötyksessä. Senjäläkeen vikuleitihi ja sitte pujoteltiin ja tuurata
ryöhkyytettihi. Rokurist aina riäkäs, että nyt tulloo se ja se remjääsi,
ruasiesi ja rantemua. Oikein ol imheen lyst ja kaikki män kun siist
livvaus, vaikka Annakaisa ja Kalle aina tahtovattii sapartoo tuuria
vasta toisten perästä. Rokurist viskel venskoo ja riäkkäs vieraita
heprejoita ja viuhto ja kaloppas kuin ois ottanna ja männy paha
ihteesä.

— Tämä se on oikeeta linoleumia, poijat, hään huus ja kahtoo


läimäytti nuin ylmalakasesti koko konkkaronkkia.

— Niin onnii, että oikein! Heerskatuvara kun on hempeetä!

— Ja nyt tulloo ylleinen rominaati. Pekka soittaa marssia ja myö


jokkainen jalotellaan taaminemmo tuvan ympär.

— Mittään marssia sit ois puottook?

— Vaikka pyörneporia! No, yhtä-aikoo, yks, kaks, kolome…!

— Yks …! Ai suta sumpvati rallaa, sulavalla rallaa rellullalee.

*****

Käyp semmonen kylymän kelemee tuulen humu. Yö on pimmee,


ettei sikkaar suussakaan taho tuntee. Serupaapel puhaltelloo ja
puuskuttaa vuahtosia laineita vasten, ja alakaa sattoo, kun ois
taivaan tarhat auk. Myö istuva kyyrötettään tuulen ruokana kaulus
pystyssä ja ootetaan kotrannan rykiä.

Salongissa vuan on elämee. Siellä on rokurist, herrasväk ja


hameväk. Aina alavarriin kuuluu sieltä seinän läp rokuristin puhe ja
tyttöparven nauru.
— Eiköön oo tuo Krokopee ottanna ja rietautunna? kysyä kujers
minulta
Niiralanniemen Joope.

— Nii, että mittee on?

— Että eiköön oo humalassa?

— Mittees soon, melekeen kun kello. Kuules, mittee se


pärmenttieroo tuas nuin rammeesti…

Rokurist tuntu tytöille selittävän siellä sisällä:

— Kylymä on yö! Kerrassaan kylymä. Mon nyt maksas lämpimästä


yösijasta satoja. Mutta minun povessan pallaa se lemmen lämmin
liekki, ettei luonnon kolleus sitä sua puhutuks.

— Sitä pitäs ottoo ja jiähyttee! sano muuvvan tytöistä.

— Hjaa! Heerskatuvara. Hers kapteen, yks kor limulaatia, tytöillä


on jano ja minulla on palo!

— Vie sinä Pekka sille limunierkor tuonne täkin alle.

Pekka tuntu hokevan, että sen sitä vielä, mokoma elätys, mutta
rettuutti risun kalisevia pulloja salonkiin.

— Täss on Korkopiälle olutta…!

*****

Oltiin jo puelmatkassa, Hermannin ryvissä, kun Annakaisaa pit


erota männäksesä hevoskyyvillä Ruotaanmäkkeen. Kun pyssäytettiin,
tulla tuoksaht hääri salongista muvassasa herra rokurist varaplyyn
alla.

— Hyväst nyt, herra rokurist!

— Atjöö, atjöö, lömmä port, sitta hiit. Hyväst ja eläköön — öyk!

— Hiljoo etteenpäin!

Annakaisa hypätä kiepsaht ryville, ja kun Serupaapel otti rämiä,


näk mitenkä vöörpuolessa joku huiskautti käelläsä ja kuulu
semmonen veen lossaus, kun ois joku ottanna ja puonna hottuun.

— Uuik-uik — a-a-autta-kee.

— Herra siunoo kun rokurist putos —

— Häh?

— Rokurist!…

Salongista syöksäht väen tuluva ja kapteen Valakonen. Hään kysäs


nuin kun ei mittään tietäin:

— Pekka hoi, näyttääkö se ottavan ja ossoovan uija?

— Hyväst näkkyy rynnistävän.

— No, ota sitte sesta ja tarjoo sen piätä, mutt anna olla pikkusen
aikoo kylymän veen ollessa…

Hilijoo ja hittaan hellävarroo nujjuutettiin hänet sitte paattiin.


Piästyäsä kannelle siristel hään kun koira märkee hänteesä ja nuin
ujonlaisesti uikutti:
— Kun jäi varaplyy ja toinen kalloossi…

— Vieläpä tässä. Männöö nyt rokurist ja sylleilöö tuota


onkapannua siks kun suavutaan kotrantaan. Vaikka taitas nyt mon
maksoo satoja suahessasa lämpimän yösijan…!

Salongissa tuntuvat tuumivan ahvieringistä, joka otti ja tapahtu.

— Kukkaan se ol, joka pilas nyt piimäkorvon?

— Kirkonkylän komesrootin humalainen puetpoika näky olovan. Se


Krokopee.

— Oiskoon tuo ilennä ennee männä kottiissa, jos ois ottanna ja


hukkunna?

Vuan surkeena kuilakkeena kuuntel herra rokurist onkapannua


sylleillen pilapuhheita.
ERREYS

Tämä juttu on oikeestasa Vastaniemen Iivo-vaeroon kertoma ja


näkemä, ite minä en oo sitä nähny, kuultu vuan, se kun Iivo miessä
ollessasa ja elälssäsä ol leikkiin mänevä ja koiruillessa virnistel
muillen vahingoille. Sentähe en viihi ruveta kommeilemmaan, että
juttua toiseen malliin viänteleimään ruppeisin, vuan kerron sen
sinäsä, justiisa samoilla sanoilla kun Iivo sen mulle tarus.

»Tunnethaan sinä Samppa sen massiinaherran, vai miks hyö sitä


karahtieroo, sen tohtuer Vorsteen-vainoon entisen renk-Kallen, sen
ruuvvittaren? Tikkala vai Tikkako sen nim lie, mää tiijä, voip olla
kirjoissa Tikkanennii. Se on nyt ruuvvitar, komesrooti, kun piäs
Sitorovin komprommiin. Oikee herra onnii ähähtännä, ettet tuntis
entiseks Kalleks. Ajjaa turunkiessillä, sahvierkengät jalassa ja
rikkoohousussa ja myöp siparattia, ompelumassiinoita ja vilusiipiä.
Mää tiijä, jos myöp urkuharmonikkojahhii ja vorttupienuja. Et oo
tainna nähhäkkää, kuleksii se näilläi maillahii. Meijännii Iitulle möi
Sikneerin massiinan. Mikäs siinä on myyvvessä, kun rosentti juoksoo
ja konerasvalla ellää, ja rammeesti se elläähii, rouvasa antaa tiällä
asuva ja ite vuan rustailoo värkkinesä. Juomaan kuulu kans
panneunneen, sehän soon suuriin työhii, vuatiihan se ruoka osasa.
Mutta tarkka soon, oikeen ilikeen tarkka, että oikein hävettää —
pyysin kerran, mikähän lie mullai ollu rahaton aeka, viitosta uamuun
asti, niin ei ryökäs ois antanna iliman kymmenitä luppauksita, muka
semmosta kun viitosta. — Mää tiijä, tarkkuuttako lie ollu, koetti se
vuan kerran siästeehii, mutt huonost se piätty. Sittehän se kissahii
naukuu kun käpäläsä kasteloo! Sol viime kesänä, siinä Juhannuksen
korvalla, se tappaus.

Kalle ol niät myönynnä Samettilaiselle tai Lyyran Juhaanalle


massiinan ja luppaunna siparaatinnii kauppoin ja sitte eksynnä
Monosen, sen lesken, rahtyöriin harjakkaille. Tyyvinkiä olivat ensin
isketelleet, niin ol Kalle jo ottanna liknööriä ja koljanttia, että rupes
miehillä linnunpoika syömmen ympär kävelemmään. Viimen jo ol
ruvenneet reuhoomaannii, ett ol pitännä hakkee Aku-polliissi
asettammaan. Kottiisa ol Kalle aikonna akkasa luo lähtee soutamaan.
Aku-polliissi ol hommanna soutajaks Porolahen Sonnis-Heikin.

Niin ol lähteneettii ja Kiäninniemessä ottaneet ryyppyjä, sillä ol


Kallella niät samppalkaljoo ja liknööriä ja koljanttia mukanasa. Sitte
humaltuvat, ensin ite, ja sitte Heikki, niin että nukku kapteen ja
soutaja soutaissasa torkku, Katastipas herättyvvään Heikki siinä
Näversuaren kuppeella ryypyn vaillingissa perrään, niin ei
näkynuäkkään insnyöriä. Kaukana selällä, missä lie ollu Patseepan-
luo'on ja Niiralanniemen välillä, huiski vuan kas ja oikein ilikee
huuvvon ökämä kuulu, että joutuva auttamaan. Kiireesti ollii Heikki
venneesä takakätteen pyöräyttännä ja selevä ol ollu kun puhaltain.
Sano käyneen valon piässäsä, että hukkuu se hiis, jos on
hukkuaksesa, ennenkun appuun ehtii.

Nuin arviolta viis syltä ol lommoo, kun Kalle uupu, ramppiko lie vai
laaki siärvartee tavanna. Ol huutanna:

— Heikki hoi!
— Nii häh?

— Nyt minä taijan männä! Jäsenet herpoo, elähän souva, p-—ako


sinä siitä venneestä kiskot! Et sinä ehi!

— Tuota, elähän nyt vielä, Kalle, nii että huku! Ootahan sen
verran, että minä ehin ja autan!

— Oo ökäästämätä, et kerkii kuitenkaan! Minä viskoon sulle


vörssin, että viet akallen. Siin on sata markkoo. Viskoon sinne
venneeseen. He, ota kiiti!

— Ka, viskoo hänet!

Viskanna Kalle ol, mutta viskanna vitalikkoon, monta syltä yl.


Sinne lupsahti laineen selekään ja painu. Niin Kalleko? Ei se
painunna, kun Heikki ennätti ja nujjuutti venneeseen.

— Taisit heittee vähä lujanlaiseen? Heikki sano.

— Tul erreys siinä matkassa. Luulin olevan montahii syltä lommoo.

— Erreys siinä tul. Kun oisit malttanna heittee hilijempoo.

— Ei se kiire kato! Sinne ne män ne massiinarahat.

— Vuan onhan sulla sitä liknööriä?

— Vieläpä lööriä tässä! Se män siinä kiireen kuppeessa sehhii!

— No voi hälläkkä tässä! Laskeppas semmoset tavarat nyt


taskustas, hyväkäs.
Ja ves ol herahtanna miehille silimään. Heikille aineista ja Kallelle
rahoista. Nolona olivat suaneet soutoo ja siunuustella. Vuan mittee
se semmonen, yks vaivanen satanen, kun mies on komesrooti ja
ruuvvitar ja rosentti juoksoo kun Oulunpuolessa metripiimä,
loppumata. Sattuu niitä erreyksiä hyväkkäitä, vuan ei ne tunnu, ei
semmoseen kun tohtuervainoon Kalleen — rasvainsnyöriin,
hantlangariin …»

Siihen se lopetti Iivo-vainoo ja nauroo turrauttettuvasa sano:

— On niitä erreyksiä sattunna mullehhii ja sulle, hyväkkäitä…


HERRA PORMESTAR JA MUUT
MUSSIIKKIHENKILÖT

Siit on kierähtännä kommeehii vuosviitinen, kun minä kuleksin


kerran sen piäkaupuntilaisporttupienupelurin kansa kylästä kylään ja
kaapunnista kaapuntiin. Miten lie ensistäsä, muistooksen
Muaningalla, yhen kesän kaihteessa kiännytty tutuks, ja hään
Himotti olevasa yks komeljantti ja sano nuin kun ahveeranneesa,
että jos ois hänen antoo rimmauttoo yks konsnertti meijännii
kyläkulumalla. Sen hään tehhii, ja tultuvammo oikein tutuntappaisiks
hään kysyä hymmäytti, että enköön ottas ja lähtis hänen
kumppalinnaan yhteen höläkkään, niät konsneeroomaan pitkin
mualimanpieltä. No, suattaa kait sitä männä, märkänöö kait sitä
yksilläi olilla ollessa ja suattaa nuamoosa näyttee muuvallai. Niine
puhheinemmo myö sitte lyöttäyttiin lähtemään syksyn käyvvessä
ihan kukkeimmilleen.

Paljo sitä semmosen pelmannin parissa mualiman mänöstä


näkköö. Niit on immeisiä muuvallai, ja omat on olosa kullai kylällä.
Vuan en oo vielä niin kummoo meininkiä muuvvalla nähny, kun ol
yhessä syrjäkylässä kaukana Pohjanmualla, jonnekka myö samalla
matkalla osuttiin.
Ol semmonen syksysen illan kajaastus, kun myö rautatiellä tulla
tömistettiin kylän tatsuunalle. Paljo ol immeisiä aseman latvormulla
kun helluntailauvantaina ennen muinon Apteekin mäellä.
Lienöövätkö lähteneet meitä vastaan, vai muutenko
mukavuuveksesa jaloittelivat, en mää karanttieroomaan, mutta kyllä
niitä ol, että oikein kihis kun kusjaispesässä ikkään. En malttanna
ennee olla iäneti, vuan kysyä kajjautin konehtyöriltä, että mittee se
tämä mänö nyt oikeestasa olettaa, kun on immeistä niin ilikeen
kosolta ja ryöhkäsöövät kun hukka naurishalametta. Konehtyör, yks
lihavanlevvee miehenkiäppänä, vastata viert:

— Soon nyt pormestar pallaunna kaapuntiin talavitiloille. Taitaavat olla tappoomassa häntä. Ainahii minä tässä työntäyn niin
tuumoomaan, kun näky olevan avviississai, että se nyt tuas villaltasa
viäntäytyy lämpimän lähettyvä kaapuntiin ja näkkyy olevan
matkassai.

— Vai niin, elekee panna pahaksenne, että minä nuin kysyä kilivotin. Aattelin niät, että jos ois ottaneet ja toisesta syystä
syöntyneet asemalle — myö niät tullaan tänne pienua
pimputtammaan, tämä herrannäkönen tässä on niät semmonen
koputsoitin konsneerooja — ootta kait suanu silimiinnä ilimotuksen
avviisista työhii — että jos oisivat meitä tulleet vastuuseen, vuan ei
taija olla niin päin pyrintö?

— Oisko nuo siitä niin ollakseen. Pormestaria noon tulleet tatsuunalta tietämään. Vuan heitetäänpäs tämä puhheenpito tähän,
minun pittää piipahtoo tuonne pakkaassinpuolelle, pittää niät nuivata
pormestarin pakkaassi rilloin. Hyväst.

Hään män. Myöhii lähettiin. Outoja kun oltiin ja ossoomattomia vieraassa kylässä, ei osattu oijustee ensistäsä mihinkään, ennenkun
yheltä poijankellukalta kyseltiin, että missään muanpaikassa sois
tässä kylässä kievarintapainen.

— Ka ottoo hevonen, niin ossootta. Ajurin kansa kun kuletta, niin koht on sushuussi suunna eessä. Soon muuten tuolla vastapiätä
vuatekauppoo. Ensistäsä kun määttä Mähösen nurkkaan ja siitä
vehnäspuen sivute, että niättä kaukoo kiiluvan syltin, jok on riätäl
Rytkösen, niin sitte siitä kiännyttä nuin kuaressa Ruuttahuoneelle ja
siin on semmonen mäen kyöhkyrä, jolla on sustiettihii. On sillä
kattosa kiänteessä muuten kepin nenässä lakuhii lekottamassa, että
ossootta kait, jos oikein kuletta.

Myö ajuriin luo. Mäntiin ensin yhen ylypeimmän luo ja lakinlippua leuhkautettua riäkästiin, että onkoon se tämä massiina meitä varten?

— Tämm ei oo leeti rilla. Mänkee muihin luo. Tämm on jo pistellattu herra pormestarille. Kuuluu ajavan sustietiin.

— Vai niin. Vuan onkos tämä toinen leeti?

— Ei oo tämmäi. Tässä männöö pormestarin rouva.

Myö kolomannen luo. Hönkästiin hällehhii samat sanat silimille:

— Oisijakkoon niin hööli, että ohjoisija meijät sushuussiin, vai onko se tämmäi vossikka jo vuokrattu?

— On, tässä männöö pormestarin pakkaassi.

— No entäs tuo tuossa?

— Kuuluu kyyvvihtevän pormestarin piikoja. Kaikki ne taitaavat olla jo poisotetut; että taija suaha hevosta tähän tarpeeseen. Vuan
ossoo kaet sinne semmoseen männä kyökkästä jalakapeliliäi.
Tuossahaan on yks poijan toljake, jos ottaisija hänet ohjoomaan.
Pekka hoi, mittee sinä siinä kilimuilet, tule tänne, niin suat kyyvvitä
nämä herrat kievariin. Eivät ehtineet suaha hevosia, eivätkä ossoo,
että ota ja kuleta sinä. Tienoothan siinä tupakakseks yhen härmän,
jos et enemp.

Piästiinhän sitä jalannii. Pekka kuluk eeltä ja myö tarsittiin jälestä. Hään sai sitte viiskolomattaisen vaivastasa.

— Mänkee tuohon ylypeimpään pykkäykseen, niin suatta het kysyä kammaria, jos tahotta. Soon sillä puolen tämän kievarin
meriteeroovin komento, sano Pekka painautuin portille.

Sissään sorruttua myö pyyvvettiin yks lentee lehottava puhvetska kiinni.

— Oiskoon sillä ryökkynällä huonetta tämännäkösille? Ois oltu yötä, jos ois suatu katonalasta seiniinsisusta siksi aijaks.

— Onko herrat reissujoukkuetta?

— Ollaan kait sitä siihen sorttiin, vaikka kaikki kait myö ollaan
tässä mualimassa matkantekijöitä. Vuan onko niitä nurkkia löysinä?

— Eikö noita ehkä lie. Herrat läskeiksi tuonne alarakennusliättänään, siel on sööterska justiisa teetoomassa. Kyssyy
siltä, minä en tiijä, kun oon vuan puhvetin piällä.

Myö männä reuhkastiin alarakennukseen. Tultiin uppeeseen salliin ja tavattiin sööterska.

— Oiskoon sillä sööterskalla huonetta. Ois tarvittu yheks yöks, jos oisija niin ovela, että osviittosia.

— Herrat kait on reesanttia? Se on nyt niin surkeeseen viisiin, että meillä ei oo tähän tuskaan yhtään leetiä rumia. Kaikki on tilattu.

— Nyt lie hitto. Vuan pittää kait sitä nyt kievarissa olla ies yks
huoneen kyhhäys. Myö ollaan kaukoo ja ei muuvanne osata. Eihän
tässäkään salissa oo huokuvoo henkee, että jos —

— Ei, herrajee! Ei tähän sovi. Tämä on herra pormestarin huone.

— Entäs tuo vieruskammar?

— Soon kansa herra pormestarin.

— Entäs muut huoneet?

— Noon pormestarinnalle ja palvelijoille.

— No, kaikki huoneetko se yhtaikoo meinoo muata? Nyt on kumma kynsissä, kavullakkoon sitä meijän pittää sitte yökaus yl
männä?

— Niin, en mää rohventieroomaan, vuan jos työ että ois hyvin suuria herroja ja ylypeitä, niin suattasinhan minä antoo teille oman
huoneen tuolla vinnillä. Pittää kait sitä teijännii suaha pehkuja
piännä alle. Jos viihtisiä, niin suattasin minä sen yheks yöks hyyrätä
ja muata ite kahverissa kuukkerskan kansa.

— No, se sattuu sijoilleen! Oisijakkoon niin hööli sitte, että meijät sinne suattasia? Pitäs näyttee vähä nuamallesa vettä ja kantata piäkarvojasa, että ilikiis näyttäytä muillehhii.

Sieltä myö sitte ihtemmö löyvvettiin, korkeella vinnikolossa, pienessä, kylymässä nurkankiänteessä.

— On tämä vähä uuteetti koko kyhhäys, selitti sööterska, vuan millonkapa sitä joutanoo tiältä teetoomaan. Sitte jos herrat haluaa
ruokoo, niin tuolla puhvetin takana on maatsalj, että ossootta sinne.

Muutettuvammo muntierinkiä myö painuttiin puhvettiin.

— Oiskoon yks portsu rapuja?

— Juu, juu, vuan meill ei oo kun yks tusina ja ne on herra pormestar pyytännä.

— Vai niin. Hemputin herra sehhii! No jos servieroisia pihviä.

— Juu, juu, herrat oottaa vuan pikkuruikkusen.

Myö ootettiin pueltuntia.

— Tuota ikkääsä, missään ne viipyy ne pihvit?

— Juu, juu, noon siunoomahetkessä käsillä. Ei oo vielä suatu paistetuks, kun on kärryytetty herra pormestarille. Sille pittää ollahhii
omat ruokasa: rapuportsut, ryynpuurot ja reemit.

— Suattaa pittee. Millonkaan sitä sitte myö piästäsiin pöyvän viereen?

— Het huokauksessa, kun pormestar on purassu.

— No, sitte ei se ihan ens'mänössä taija tapahtua. Myö männään sillaikoo saunaan ja huuhotaan pölyt pois. Ei tiijä, jos tuo ois siihen ennättännä nielasta.

Kyselyin kautta tultiin myö saunalle. Ei sattunna kun yks kammar asujameta. Myö painauttiin sinne. Het hatunheitolla tälläyty kuitennii ovelle yks levvee mataminlässäke ja riäkäs:

— Herra Juumala! Tännekö työ — herra pormestarin paatihuoneeseen! Ja kalloossit jalassa!

— Tännehän myö, kun ei ollut toista huonetta. Kait sitä suap tännehhii ryysysä heittee?

— Ei mitenkään. Tämm on herra pormestarin.

— Ei hättee. Herra on sustietillä ravihtelemassa ruumistaan. Kyllä myö siihen ennätettään kylypee, ennenkun se tänne tarkenoo.

— Ei sitä tiijä. Mitä liettäi rentaleita. Tämä huone on varattu vuan oikeille immeisille ja etupäässä hänelle. Että alakoo vuan oikoja
koipijanno — tiällä ei sua löylyä tuommoset pitkätukkaset turjakkeet.

Tuas oltiin kavulla. Kello alako kiäntäytä konsnerttitunnille.

— Pistäytään vielä silipasemassa sushuussilla kuppi kuumoo kittaammo. Ihan kait tässä laihtuu litteeks tämmösessä mänössä.

— Oiskoon juotavoo kahvia?

— Kylläkään sitä, vuan ei oo kun yks pannu, sehhii on…

— Herra pormestarin?

— Niin. Vuan kun ootta suanna oottoo näin kauvan, niin — tulukee nyt tänne pijjaanopöksään, niin suatta.

Pijjaanohuoneessa ol se kuappi, se koputsoitin. Kun myö oltiin just semmosta pelkalua tultu pelloomaan, arvel tover, että jos ois rietautua harjottammaan kahvia oottaissa.

— Ka, rimmauta yks rymmäys! Soita vaikka se Merkannon »Kesävelli» (= Sommarqväll) niihin varjatsuuniis kansa. Kuuloo sitte
herra pormestarrii, kettee on kulussa.

Hään puulaatu pelloomaan. Oikeen ollii ovela soittaja. Soittoo sivautti niin, että kintut tink panemaan polokkaan, ja tuassiisa,
oikeessa hilijasessa nuotissa, kiehto veenkietaleita silimäkulumaan.
Joka paikassa ol paukutettu tälle kappaleelle käsiä, huuvettu ja
hihkuttu; nyttii — aukes parraimmassa pijjaanossa ovet, ja sissään
lentee lupsaht ite ravintoloihtijatar:

— Kuka sennii korvennettu teijät on tänne völjännä? Heretkee het ja hävetkee! Vai tänne työ tuletta kaiken mualiman retkutuksia
renkuttamaan! Ettettä häppee, kun viereiseen huoneeseen kuuloo
koko komennuksen herra pormestar ja muut mussiikkihenkilöt.

Soittaja huokas ja pyyhk hikkee — minä kans!

— Saakelin herra pormestar…! Mutta männään myö tästä siunatusta sushuussista, männään Jumalan nimessä! — Ja
männessä: Ota sinä nyt oikein hyvä lippu, että oikein kuulet, sillä
kyllä minä taijan nyt soittoo pienut palasiks ja kuapit kappaleiks!

Lippumiehen luo piästyä hönkäs soittaja:

— Kuulkees työ, lippumies, tälle herralle annetaan riilippu. Tuo lumero viis tuosta laijasta.

— Häh, ettäkö lumero viis? Ei se passoo, se on jo otettu.

— Eipään oo merkitty männeeks. Suanen kait minä ommaan
konsnerttiin ottoo riiliput, mitkä haluvan. Min oon se pelur.

— Suatatta olla, vuan tätä lippua ei anneta. Soon herra pormestarin riipiletti. Se istuu aina lumero viijellä ja sillä hyvä!

— No minä otan lumero seihtemän.

Konsnertti alako. Kesken kappaleen viäntäyty essiin herra pormestar. Kaikki nous seisaallesa, pait minä. Kuulin kuitennii
takanan yhen akankiähkyrän honottavan:

— Kukkaan soon tämä uuaartti herra, kun ei nouse ylös, vaikka tulloo herra pormestar…

— Ja muut musikaaliset henkilöt, lisäsin minä mielessän.

Ens lumeron loputtua ol semmonen hilijasuuvven hetki kun kirkossa papin pysähtyissä. Kaikkiin silimät sirrottivat herra
pormestariin. Hään viimen kohhautti kämmenesä, lyyvä lossautti
niitä hyvin heilävarroin vastakkain ja — het alako semmonen
nujakka, että luul seiniin särkyvän siinä mänössä.

— Hyväh! Ravooh!

Jokkaisen kappaleen jäläkeen alako taputuksen hyvin heilävarroin herra pormestar.

*****

Illankiänteessä, konsnertin jälestä, myö istuttiin puhvetissa. Sustietin emäntä ehätti meijän pakeille ja hupatti:

— Antoo nyt anteeks, hyvät herrat, minä kun en tiennä, että työ olija niitä muskanttia, niitä pienunpimputtajia…

— Elekee olla milläsäkkään, sattuuhan semmosia.

Vieruspöyvvässä istua ryhjötti kaks herroo ja joivat tuuvinkikoljanttia. Toinen tuhaht toiselle:

— Ei se soittanna oikeen seleväst. Herra pormestarrii viivyttel ennenkun viihti aplootia lyyvvä. Minun pittää painautuva tässä
huomisessa lehessä ankaraks arvostelijaks. Kaikki ne konsneeroohii!

Tek miel viskata hympästä koko kahvikuppi vasten koko kötäleitä. Vai ei selevästi. Sennii selevikkeet…! En mää tiijä, mittee oisin ryhtynnä
resunieroomaan, jollei samassa ois piipahtanna puhvetin ovelle yks
tytön tuippuna ja kielkulukussa kiljassu:

— Hyvä ihme, kun puhvettiin tulloo herra pormestar!

Ja hään tul. Suoroo piätä oikas hään meijän pöytään ja esitti ihtesä.

— Kuulin teijän täällä olovan ja tulin kiittämään konsnertista. Ettäköön ottas ja ois niin hööli, että tulisija tuonne
pijjaanokammariin kallistammaan kaks, kolome liknöörlasia. Meill on
tiällä niin sakramentskatun harvon mussiikkia, että ois nyt ies teitä
hyvänä piettävä, jotta vastai jalottelisia tähän mualimanlaitaan. Meit
on siellä vuan minä ja muita.

— Mussiikkihenkilöitä, lisäsin tuas iteksen.

Ovenpielessä pyssäytti minut toinen nuapurpöyvän herrosta.

— Suokee anteeks. Min oon tämän kaupunnin lehti — lehtimies. Oisijakkoon hyvä ja kuiskoisija, kiittikö se vai moitti?

— Ka kiitti, kiitti ihan iniheesti.

— Vai niin. No, sitte minnäi suan sopivan rittiikin. Harva on soittanna somemmin, oikein syön seisahtu kuullessa, ol oikeen
helekutin hempeetä…

Meille tarjottiin liknööriä. Sitä tuuvvessa sihaht sööterska minulle hyvin siistist korvaan:

— Tät ei juokkaan muut kun herra pormestar.

— Ja muut mussiikkihenkilöt.

*****

Yö myö muattiin parraassa salissa. Tul niät emäntä tuas ja selitti:

— Siinä tul erreys. Minä en osanna älytä, että työ tuntija herra
pormestarin.

Kukaties ois myö vaikka keskellä yötäi piästy vielä saunaannii, kun
tunnettiin herra pormestar; ehkä ois suatu riilippu lumero viis —
vaikka se kuuluuhii yksinomaan hänelle.

Iltarukkouksen asemasta luin minä seuroovat sanat:

— Herra varjelkoon ja Herra siunatkoon tämän kaapunnin herra pormestarin ja muut musikaaliset henkilöt!


HALIERAKESÄNÄ

On semmonen kylymän kelemee kevätuamu. Luonto leppee vielä. Ei käy sen sylissä tuulen siistijäkkään sihhausta, suati että huavan
herkät lehet lauloo lekuttelis. Tyynenä uinuvaa vuan suuren sutjakka
Suoselekä, kun auringon säje sen pinnassa piirrätteleikse ja sen ylite
lelluu levvee sumunsekanen usvanuje, jota Itikkasuo syvästä
syömestäsä paksuna pilivipelemakkeena oksentoo ötkistää.

Vielä on »kyläläisettii» pitkin pehkuja. Puhtaat ovat kyläkavut, eikä ryvilläkään oo huokuvoo henkee, lukkuunottamata pollariin välissä
piirrättelevöö, likasen harmoota varpusta, joka sinne on pesästäsä
puottaunna uamutuimaasa tuikuttammaan.

Kaunis on kevväinen uamu ja hyvin kylymä. Luonnon lempeys kuvastaikse kaikessa niin viehkeen veiteränä kun peilin pinnassa
puhassilimäsen tytön tyllerön meinoovan muhjakka silimän mulijaus.
Mutta äkkiä sen lempeyteen rikkeitähhii ryöstäytyy, kuten hangen
hohtavaan kalavoon ussein varisriähkän yl lentee leuhkastessa
rikkeitä rippoo.

Sumun seasta pörhältää niät äkkiä puuhkiva laeva, lesken Satrakki. Tulloo Jussinsalamesta ja ohjoo Asarjaksen aution suaren