A First Course in Quality Engineering
Integrating Statistical and Management Methods of Quality
Third Edition
K. S. Krishnamoorthi
V. Ram Krishnamoorthi
Arunkumar Pennathur
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to
publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materi-
als or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained.
If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in
any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, micro-
filming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www
.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-
8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that
have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identifi-
cation and explanation without intent to infringe.
Preface to the Third Edition xvii
Preface to the Second Edition xix
Preface to the First Edition xxi
Authors xxv
Chapter 1 Introduction to Quality 1
1.1 A Historical Overview 1
1.1.1 A Note about “Quality Engineering” 7
1.2 Defining Quality 8
1.2.1 Product Quality vs. Service Quality 10
1.3 The Total Quality System 11
1.4 Total Quality Management 13
1.5 Economics of Quality 14
1.6 Quality, Productivity, and Competitive Position 15
1.7 Quality Costs 16
1.7.1 Categories of Quality Costs 17
1.7.1.1 Prevention Cost 18
1.7.1.2 Appraisal Cost 19
1.7.1.3 Internal Failure Cost 19
1.7.1.4 External Failure Cost 20
1.7.2 Steps in Conducting a Quality Cost Study 20
1.7.3 Projects Arising from a Quality Cost Study 25
1.7.4 Quality Cost Scoreboard 26
1.7.5 Quality Costs Not Included in the TQC 28
1.7.6 Relationship among Quality Cost Categories 29
1.7.7 Summary on Quality Costs 30
1.7.8 A Case Study in Quality Costs 31
1.8 Success Stories 37
1.9 Exercise 37
1.9.1 Practice Problems 37
1.9.2 Mini-Projects 40
Mini-Project 1.1 40
Mini-Project 1.2 41
Mini-Project 1.3 42
Mini-Project 1.4 43
References 43
Chapter 2 Statistics for Quality 45
2.1 Variability in Populations 45
2.2 Some Definitions 47
2.2.1 The Population and a Sample 47
2.2.2 Two Types of Data 47
2.3 Quality vs. Variability 48
2.4 Empirical Methods for Describing Populations 49
2.4.1 The Frequency Distribution 49
2.4.1.1 The Histogram 49
2.4.1.2 The Cumulative Frequency Distribution 52
2.4.2 Numerical Methods for Describing Populations 56
2.4.2.1 Calculating the Average and Standard Deviation 57
2.4.3 Other Graphical Methods 57
2.4.3.1 Stem-and-Leaf Diagram 57
2.4.3.2 Box-and-Whisker Plot 59
2.4.4 Other Numerical Measures 60
2.4.4.1 Measures of Location 60
2.4.4.2 Measures of Dispersion 61
2.4.5 Exercises in Empirical Methods 62
2.5 Mathematical Models for Describing Populations 64
2.5.1 Probability 64
2.5.1.1 Definition of Probability 65
2.5.1.2 Computing the Probability of an Event 66
2.5.1.3 Theorems on Probability 70
2.5.1.4 Counting the Sample Points in a Sample Space 78
2.5.2 Exercises in Probability 83
2.5.3 Probability Distributions 85
2.5.3.1 Random Variable 85
2.5.3.2 Probability Mass Function 87
2.5.3.3 Probability Density Function 89
2.5.3.4 The Cumulative Distribution Function 90
2.5.3.5 The Mean and Variance of a Distribution 91
2.5.4 Some Important Probability Distributions 93
2.5.4.1 The Binomial Distribution 94
2.5.4.2 The Poisson Distribution 97
2.5.4.3 The Normal Distribution 98
2.5.4.4 Distribution of the Sample Average X̄ 106
2.5.4.5 The Central Limit Theorem 107
Summary on Probability Distributions 108
2.5.5 Exercises in Probability Distributions 110
2.6 Inference of Population Quality from a Sample 111
2.6.1 Definitions 112
2.6.2 Confidence Intervals 113
2.6.2.1 CI for the μ of a Normal Population When σ Is Known 113
2.6.2.2 Interpretation of CI 115
Chapter 4 Quality in Production—Process Control I 215
4.1 Process Control 215
4.2 The Control Charts 216
4.2.1 Typical Control Chart 217
4.2.2 Two Types of Data 218
4.3 Measurement Control Charts 219
4.3.1 X̄- and R-Charts 220
4.3.2 A Few Notes about the X̄- and R-Charts 227
4.3.2.1 The Many Uses of the Charts 227
4.3.2.2 Selecting the Variable for Charting 228
Chapter 5 Quality in Production—Process Control II 299
5.1 Derivation of Limits 299
5.1.1 Limits for the X̄-Chart 300
5.1.2 Limits for the R-Chart 303
5.1.3 Limits for the P-Chart 304
5.1.4 Limits for the C-Chart 305
5.2 Operating Characteristics of Control Charts 306
5.2.1 Operating Characteristics of an X̄-Chart 306
5.2.1.1 Computing the OC Curve of an X̄-Chart 307
5.2.2 OC Curve of an R-Chart 309
5.2.3 Average Run Length 311
5.2.4 OC Curve of a P-Chart 312
5.2.5 OC Curve of a C-Chart 315
5.3 Measurement Control Charts for Special Situations 316
5.3.1 X̄- and R-Charts When Standards for μ and/or σ Are Given 316
5.3.1.1 Case I: μ Given, σ Not Given 317
5.3.1.2 Case II: μ and σ Given 317
5.3.2 Control Charts for Slow Processes 321
5.3.2.1 Control Chart for Individuals (X-Chart) 321
5.3.2.2 Moving Average and Moving Range Charts 323
5.3.2.3 Notes on Moving Average and Moving Range Charts 326
5.3.3 The Exponentially Weighted Moving Average Chart 327
5.3.3.1 Limits for the EWMA Chart 332
5.3.4 Control Charts for Short Runs 334
5.3.4.1 The DNOM Chart 335
5.3.4.2 The Standardized DNOM Chart 337
5.4 Topics in Process Capability 340
5.4.1 The Cpm Index 340
5.4.2 Comparison of Cp, Cpk, and Cpm 341
5.4.3 Confidence Interval for Capability Indices 343
5.4.4 Motorola’s 6σ Capability 345
5.5 Topics in the Design of Experiments 350
5.5.1 Analysis of Variance 350
5.5.2 The General 2^k Design 356
5.5.3 The 2^4 Design 356
5.5.4 2^k Design with Single Trial 357
5.5.5 Fractional Factorials: One-Half Fractions 359
5.5.5.1 Generating the One-Half Fraction 361
5.5.5.2 Calculating the Effects 361
5.5.6 Resolution of a Design 362
5.6 Exercise 368
5.6.1 Practice Problems 368
5.6.2 Mini-Projects 371
Mini-Project 5.1 371
Mini-Project 5.2 372
References 373
Chapter 6 Managing for Quality 375
6.1 Managing Human Resources 375
6.1.1 Importance of Human Resources 375
6.1.2 Organizations 376
6.1.2.1 Organization Structures 376
6.1.2.2 Organizational Culture 378
Chapter 7 Quality in Procurement 405
7.1 Importance of Quality in Supplies 405
7.2 Establishing a Good Supplier Relationship 406
7.2.1 Essentials of a Good Supplier Relationship 406
7.3 Choosing and Certifying Suppliers 407
7.3.1 Single vs. Multiple Suppliers 407
7.3.2 Choosing a Supplier 409
7.3.3 Certifying a Supplier 409
7.4 Specifying the Supplies Completely 410
7.5 Auditing the Supplier 411
7.6 Supply Chain Optimization 412
7.6.1 The Trilogy of Supplier Relationship 413
7.6.2 Planning 413
7.6.3 Control 413
7.6.4 Improvement 414
Appendix 1: Statistical Tables 559
Appendix 2: Answers to Selected Exercises 571
Index 579
Preface to the Third Edition
We are gratified that the second edition of the book was well received by students
and teachers. Notably, since 2015, our text has been used in a Massive Open Online
Course (MOOC) offered on edX.org by the School of Management of the Technical
University of Munich, Germany. At least a thousand participants each year take the
MOOC and complete the requirements to receive a certificate. Reviews sug-
gest that the balanced treatment of statistical tools and management methods in qual-
ity is the strength of our book. We will continue to emphasize the importance of
learning the statistical tools along with management methods for quality for design-
ing and producing products and services that will satisfy customer needs. We will
also continue to stress the value in learning the theoretical basis of statistical tools for
process improvement.
The Third Edition improves on the strengths of the earlier editions both in content
and presentation. An important feature of the book is the inclusion of real-world
examples that illustrate the use of quality methods in solving quality problems. We
have added several such examples in the new edition, in Chapter 4, drawn from the
healthcare industry, to show the practical use of control charts in healthcare.
In the new edition, we have revised the text to make all the chapters suitable for
self-study. Wherever necessary, new examples have been added and additional expla-
nations have been provided for self-study. We hope we have succeeded in this effort.
The sections of Chapter 9 relating to Baldrige Award Criteria and ISO 9000 have
been fully revised to reflect the latest versions of these quality system documents. The
discussion on the Baldrige Award is based on 2017–18 Baldrige Excellence Framework;
the discussion on the ISO 9000 system is based on ISO 9000:2015.
Special thanks are due to Professor Dr. Holly Ott of the Technical University of
Munich for the many questions she raised that identified for us places where more
explanations were needed, or typos had to be corrected, or corrections were needed in
the mathematics.
KSK, Peoria, IL
VRK, Chicago, IL
AP, Iowa City, IA
Preface to the Second Edition
“The average Japanese worker has a more in-depth knowledge of statistical methods
than an average American engineer,” explained a U.S. business executive returning
from a visit to Japan, as a reason why the Asian rivals were able to produce better qual-
ity products than U.S. manufacturers. That statement, made almost 30 years ago, may
be true even today as Japanese cars are continuously sought by customers who care for
quality and reliability. Dr. Deming, recognized as the guru who taught the Japanese
how to make quality products, said: “Industry in America needs thousands of statisti-
cally minded engineers, chemists, doctors of medicine, purchasing agents, managers”
as a remedy to improve the quality of products and services produced in the U.S. He
insisted that engineers, and other professionals, should have the capacity for statistical
thinking, which comes from learning the statistical tools and the theory behind them.
The engineering accreditation agency in the U.S., ABET, a body made up of academ-
ics and industry leaders, stipulates that every engineering graduate should have “an
ability to design and conduct experiments, as well as to analyze data and interpret
results” as part of the accreditation criteria.
Yet, we see that most of the engineering majors from a typical college of engineer-
ing in the U.S. (at least 85% of them by our estimate) have no knowledge of qual-
ity methods or ability for statistical thinking when they graduate. Although some
improvements are visible in this regard, most engineering programs apart from indus-
trial engineering do not require formal classes in statistics or quality methodology.
The industry leaders have spoken; the engineering educators have not responded fully.
One of the objectives in writing this book was to make it available as a vehicle for
educating all engineering majors in statistics and quality methods; it can be used to
teach statistics and quality in one course.
The book can serve two different audiences: those who have prior education in sta-
tistics, and those who have no such prior education. For the former group, Chapter 2,
Preface to the First Edition
In the 20-plus years that I have been teaching classes in quality methods for engi-
neering majors, my objective has been to provide students with the knowledge and
training that a typical quality manager would want of new recruits in his or her
department. Most quality managers would agree that a quality engineer should have
a good understanding of the important statistical tools for analyzing and resolving
quality problems. They would also agree that the engineer should have a good grasp
of management methods, such as those necessary for finding the needs of custom-
ers, organizing a quality system, and training and motivating people to participate in
quality efforts.
Many good textbooks are available that address the topics needed in a course on
quality methods for engineers. Most of these books, however, deal mainly with one or
the other of the two areas in the quality discipline—statistical tools or management
methods—but not both. Thus, we will find books on statistical methods with titles
such as Statistical Process Control and Introduction to Statistical Quality Assurance, and
we will find books on management methods with titles similar to Introduction to Total
Quality, or Total Quality Management. The former group will devote very little cover-
age to management topics; the latter will contain only a basic treatment of statistical
tools. A book with an adequate coverage of topics from both areas, directed toward
engineering majors, is hard to find. This book is an attempt to fill this need. The term
quality engineering, used in the title of this book, signifies the body of knowledge
comprising the theory and application of both statistical and management methods
employed in creating quality in goods and services.
When discussing the statistical methods in this book, one overarching goal has
been to provide the information in such a manner that students can see how the
methods are put to use in practice. They will then be able to recognize when and
where the different methods are appropriate to use, and they will use them effectively
to obtain quality results. For this reason, real-world examples are used to illustrate the
methods wherever possible, and background information on how the methods have
been derived is provided. The latter information on the theoretical background of the
methods is necessary for an engineer to be able to tackle the vast majority of real-world
quality problems that do not lend themselves to solution by simple, direct application
of the methods. Modification and improvisation of the methods then become neces-
sary to suit the situation at hand, and the ability to make such modifications comes
from a good understanding of the fundamentals involved in the creation of those
methods. It is also for this reason that a full chapter (Chapter 2) is devoted to the
fundamentals of probability and statistics for those who have not had sufficient prior
exposure to these topics. Chapter 2 can be skipped or quickly reviewed for those such
as industrial engineering majors who have already taken formal classes in engineering
statistics.
The use of computer software is indicated wherever such use facilitates problem solv-
ing through the use of statistical methods. The statistical software package Minitab
has been used to solve many problems. The student, however, should understand the
algorithm underlying any computer program before attempting to use it. Such under-
standing will help to avoid misuse of the programs or misinterpretation of the results.
It will also help in explaining and defending solutions before a manager, or a process
owner, when their approval is needed for implementing the solutions.
The management topics have been covered in a brief form, summarized from reli-
able references. Full-length books are available on topics such as supply chain man-
agement, customer surveys, and teamwork, and no attempt has been made here to
provide an exhaustive review of these topics. The objective is simply to expose the stu-
dent to the important topics, explain their relevance to quality efforts, indicate where
and how they are used, and point to pertinent literature for further study. Through
this exposure, a student should acquire a working knowledge of the methods and be
able to participate productively in their use.
This book is organized into nine chapters, which are arranged broadly along the lines
of the major segments of a quality system. The quality methods are discussed under
the segment of the system where they are most often employed. This arrangement
has been chosen in the hope that students will be better motivated when a method is
introduced in the context that it is used. The book can be covered in its entirety in a
typical one-semester course if students have had prior classes in engineering statistics.
If time must be spent covering material in Chapter 2 in detail, then Chapter 5, which
includes mostly advanced material on topics covered by other chapters, can be omitted
entirely, or in part, and the rest of the book completed in one semester.
At the end of each chapter, one to three mini-projects are given. Many of these are
real problems with real data and realistic constraints, and they do not have a unique
solution. These projects can be used to expose students to real-world problems and
help them to learn how to solve them.
In selecting the topics to include in this book, many judgment calls had to be made,
first regarding the subjects to be covered and then concerning the level of detail to be
included. Only experience will tell if the choice of topics and the level of detail are
adequate for meeting the objectives. Any feedback in this regard or suggestions in
general for improving the contents or their presentation will be appreciated.
I owe thanks to many colleagues and friends in the academic and industrial com-
munity for helping me to learn and teach the quality methods. Special thanks are due
to Dr. Warren H. Thomas, my thesis advisor and chairman of the IE Department at
SUNY at Buffalo when I did my graduate work there. He was mainly responsible for
my choice of teaching as a career. When he gave me my first independent teaching
assignment, he told me that “teaching could be fun” and that I could make it enjoyable
for both the students and myself. Ever since, I have had a lot of fun teaching, and I
have enjoyed every bit of it.
Baltasar Weiss and Spike Guidotti, both engineering managers at Caterpillar, Inc.,
Peoria, helped me learn how to work with people and protocols while trying to make
quality methods work in an industrial setting. Their consistent support and encour-
agement helped me to take on many challenges and achieve several successes. I am
deeply thankful to both.
The editorial and production staff at Prentice Hall have been extremely helpful in
bringing this book to its final shape. The assistance I received from Dorothy Marrero
in the early stages of developing the manuscript through the review/revision process
was extraordinary and beyond my expectations. In this connection, the assistance of
numerous reviewers is appreciated: Thomas B. Barker, Rochester Institute of Technology;
Joseph T. Emanuel, Bradley University; Jack Feng, Bradley University; Trevor S. Hale,
Ohio University; Jionghua (Judy) Jin, University of Arizona; Viliam Makis, University
of Toronto; Don T. Phillips, Texas A&M University; Phillip R. Rosenkrantz, California
State Polytechnic University; and Ed Stephens, McNeese State University.
Finally, I owe thanks to all my students in the class IME 522 at Bradley, who
helped me in testing earlier drafts of the manuscript. Many errors were found and
corrected through their help.
K. S. Krishnamoorthi
Authors
funded by the National Science Foundation, the National Institutes of Health, the
U.S. Army Research Laboratory, and a host of industry sponsors. Dr. Pennathur has
been the Editor-in-Chief of International Journal of Industrial Engineering and has
served on numerous editorial boards. He has authored/co-authored over 150 publica-
tions including two edited books on industrial productivity and enhancing resource
effectiveness.
1
Introduction to Quality
We begin this chapter with a historical overview of how quality became a major,
strategic, management and business tool in the United States and other parts of the
world. We then explain the meaning of the term “quality”—a consensus definition
of the term is necessary to discuss the topic and benefit from it. Then the need for a
total quality system and an associated management philosophy for achieving qual-
ity in products is explained. The chapter concludes with a discussion on how quality
impacts the economic performance of an enterprise, which includes a discussion on
quality cost analysis. Detailed definitions of quality costs are provided, along with an
explanation of how a quality cost study can be conducted. The usefulness of such a
study in evaluating the quality-health of a system, and for discovering opportunities
for quality improvement and reducing waste is explained.
1.1 A Historical Overview
Striving for quality, in the sense of seeking excellence in one’s activities, has always
been a part of human endeavor. History provides numerous examples of people
achieving the highest levels of excellence, or quality, in their individual or collective
pursuits. Shakespeare’s plays, Beethoven’s music, the Great Pyramids of Egypt, and
the temples of southern India are but a few examples. Quality appeals to the human
mind and provides a sense of satisfaction, which is why most of us enjoy listening to
a good concert, watching a good play, observing a beautiful picture, or even riding in
a well-built car.
During the last 70 years, the term “quality” has come to be used in the marketplace
to indicate how free from defects a purchased product is and how well it meets the
needs of its user. After the Industrial Revolution in the early 1900s, mass-production
techniques were adopted for manufacturing large quantities of products to meet
the increasing demand for goods. Special efforts were needed to achieve quality in
the mass-produced products when they were assembled from mass-produced parts.
Variability or lack of uniformity in mass-produced parts created quality problems dur-
ing parts assembly. For example, suppose a bearing and a shaft are to be assembled,
chosen randomly from lots supplied by their respective suppliers. If the bearing had
one of the smallest bores among its lot and the shaft had one of the largest diameters
among its lot, they would not match and would be hard to assemble. If they were
assembled at all, the assembly would fail early in its operation due to lack of sufficient
clearance for lubrication. The larger the variability among the parts in the lots, the
more severe this problem would be. Newer methods were therefore necessary to mini-
mize this variability and ensure uniformity in parts that were mass produced.
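To see how part-to-part variability turns into assembly trouble, the short sketch below simulates the bearing-and-shaft example. It is only an illustration: the nominal dimensions, the standard deviations, and the minimum clearance assumed necessary for lubrication are hypothetical numbers, not values from the text, and both dimensions are assumed to follow normal distributions.

import random

# Hypothetical nominal dimensions (mm) and minimum working clearance.
NOMINAL_BORE = 50.05      # bearing bore
NOMINAL_SHAFT = 50.00     # shaft diameter
MIN_CLEARANCE = 0.01      # below this, the pair is assumed too tight to lubricate

def fraction_too_tight(sigma, n=100_000, seed=1):
    """Estimate the fraction of randomly paired bearings and shafts with too little clearance."""
    rng = random.Random(seed)
    too_tight = 0
    for _ in range(n):
        bore = rng.gauss(NOMINAL_BORE, sigma)
        shaft = rng.gauss(NOMINAL_SHAFT, sigma)
        if bore - shaft < MIN_CLEARANCE:
            too_tight += 1
    return too_tight / n

# The larger the part-to-part variability (sigma), the more pairs fail to assemble properly.
for sigma in (0.005, 0.010, 0.020):
    print(f"sigma = {sigma:.3f} mm -> {fraction_too_tight(sigma):.1%} of pairs too tight")

Running the sketch shows the point made above: at the smallest spread almost every randomly chosen pair assembles, while doubling and quadrupling the spread pushes a noticeable fraction of pairs below the assumed minimum clearance.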
Dr. Walter A. Shewhart, working for the former Bell Laboratories in the early
1920s, pioneered the use of statistics to monitor and control the variability in manu-
factured parts and products, and invented control charts for this purpose. Drs. Harold
Dodge and Harry Romig developed sampling plans that used statistical principles
to ascertain the quality of a population of products from the quality observed in a
sample. These statistical methods, which were used mainly within the Bell telephone
companies during the 1930s, were also used in the early 1940s in the production of
goods and ammunition for the U.S. military in World War II. Some (Ishikawa 1985)
even speculated that this focus on quality in military goods provided the United States
and their allies an advantage that contributed to their eventual victory in the war.
After World War II, the U.S. War Department was concerned about the lack
of dependability of electronic parts and assemblies deployed in war missions. From
this concern grew the science of reliability, which deals with the failure-free per-
formance of products over time. Reliability science matured during the early 1950s
into a sophisticated discipline and contributed to improving the longevity, not only of
defense products, but also of many commercial products such as consumer electronics
and household appliances. The long life of refrigerators and washing machines, which
we take for granted today, is in large measure the result of using reliability science in
their design and manufacture.
During the early 1950s, leaders in the quality field, such as Drs. W. Edwards
Deming, Joseph M. Juran, and Armand V. Feigenbaum, redefined quality in several
important ways. First, they established that quality in a product exists only when a cus-
tomer finds the product satisfactory in its use. This was in contrast to earlier views that a
product had quality if it met the specifications set by the product designer, which might
have been chosen with or without reference to the needs of the customer. Second, these
gurus, as they were called, proposed that a quality product, in addition to meeting the
needs of the customer in its physical characteristics, should also be produced at mini-
mal cost. Finally, and most importantly, they claimed that, to create a product that will
satisfy the customer both in performance and cost, a “quality system” was needed. This
quality system would be made up of all the units of an organization that contribute to
the production of a product. The system would include, for example, the marketing
people who find out from the customers their needs, the designers who would design
the product to meet those needs, the manufacturing engineers who design the pro-
cesses to make the product according to the design, the workers who make the product,
the packaging and shipping personnel who plan the logistics for delivering the product
in proper condition, and the after-sale customer support staff who help the customer in
assembling and using the product. There are many other components of the system as
explained later in this chapter that contribute to producing a quality product. Every ele-
ment in the system should be focused on the goal of making and delivering a product
that satisfies the customer. The leaders thus proposed the concept of a “total qual-
ity system” whose components would be defined along with their responsibilities and
interrelationships. Guidelines would be set so that the components working together
would optimize the system’s goal of satisfying the customer’s needs.
A management approach called the “total quality management philosophy” was
proposed. This defined how the people working with the total quality system would be
recruited, trained, motivated, and rewarded for achieving the system’s goals. The total
quality system along with the management philosophy needed to manage the people
involved in the system were together called the “Total Quality Management System.”
Although there was widespread acceptance of the need for a total quality manage-
ment system among the quality community in the United States and elsewhere, it
was the Japanese who quickly embraced it, adapted it to their industrial and cultural
environment, and implemented it to reap immense benefits. They called such a sys-
tem “total quality control” or “companywide quality control.” With the success of the
Japanese, the rest of the world eventually saw value in a systems approach to quality.
The creation of the Standards for Quality Assurance Systems by the International
Organization for Standardization (ISO 9000) in 1987 was the culmination of the
worldwide acceptance of the systems approach to producing quality.
Another major milestone in the development of the quality discipline was the
discovery, or rediscovery, of the value of designed experiments for establishing rela-
tionships between process variables and product characteristics. The use of designed
experiments to obtain information about a process was pioneered by the English
statistician Sir Ronald Fisher in the early 1920s, in the context of maximizing yield
from agricultural fields. Although the methods had been used successfully in manu-
facturing applications, especially in the chemical industry during the 1950s, it was
Dr. Genichi Taguchi, a Japanese engineer, who popularized the use of experiments
to improve product quality. He adapted the methods for use in product and process
design and provided simplified (some would say oversimplified) and efficient steps for
conducting experiments to discover the combination of product (or process) param-
eters to obtain the desired product (or process) performance. His methods became
popular among engineers, and experimentation became a frequently used method
during quality improvement projects in industry.
The birth of the quality control (QC) circles in Japan in 1962 was another impor-
tant landmark development in quality engineering and management. The Union of
Japanese Scientists and Engineers (JUSE), which took the leadership role in spread-
ing quality methods in Japan, had been offering classes in statistical quality control
to engineers, managers, and executives starting in 1949. They realized that the line
workers and foremen had much to offer in improving the quality of their production
and wanted to involve them in the quality efforts. As a first step, JUSE, under the
leadership of Dr. Kaoru Ishikawa, began training workers and foremen in statistical
methods through a new journal called Quality Control for the Foreman, which carried
lessons in statistical quality control.
Foremen and workers assembled in groups to study from the journal and started
using what they learned for solving problems in their own processes. These groups,
which were generally made up of people from the same work area, were called “QC
circles,” and these circles had enormous success in solving quality problems and
improving the quality of what they produced. Companies encouraged and supported
such circles, and success stories were published in the journal for foremen. The QC
circle movement, which successfully harnessed the knowledge of people working close
to processes, gained momentum throughout Japan, and workers and foremen in large
numbers became registered participants in QC circles. The success of QC circles is
considered to be an important factor in Japanese successes in quality (Ishikawa 1985).
The details on how the QC circles are formed, trained, and operated are discussed in
Chapter 6 under the section on teamwork.
The 1970s were important years in the history of the quality movement in the
United States. Japanese industry had mastered the art and science of making qual-
ity products and won a large share of the U.S. market, especially in automobiles and
consumer electronics. Domestic producers in the United States, particularly the auto-
makers, lost a major share of their markets and were forced to close businesses and
lay off workers in large numbers. Those were very painful days for American workers
in the automobile industry. It took a few years for the industry leaders to understand
that quality was the differentiator between the Japanese competitors and the domestic
producers.
The NBC television network produced and aired a documentary in June 1980 titled
If Japan Can … Why Can’t We? The documentary highlighted the contributions of
Dr. W. Edwards Deming in training Japanese engineers and managers in statistical
quality control and in a new management philosophy for achieving quality. It is gener-
ally agreed that this documentary provided the rallying point for many U.S. industry
leaders to start learning from the Japanese the secrets of quality methodologies (Moen
and Norman 2016). Many corporate leaders such as Robert Galvin of Motorola,
Harold Page of Polaroid, Jack Welch of General Electric, and James Houghton of
Corning, led the way in spreading the message of quality among their ranks and gath-
ered instant followers. The quality philosophy spread across U.S. industry and quality
gurus, such as Dr. Deming, came home to provide training in the technical and man-
agerial aspects of quality. Several U.S. corporations that had lost business to foreign
competition began recovering using quality-focused management. Corporations such
as Motorola, Xerox, IBM, Ford, General Motors, Chrysler, Corning, and Hewlett-
Packard are examples of companies that began to regain market share with their new
focus on quality. A “quality revolution” had begun in the United States. The 1980s
saw this recovery spreading across a wide industrial spectrum, from automobiles and
electronics to steel and power to hotels and healthcare.
During the early 1980s, Motorola launched a quality drive within its corporation
using what they referred to as the “Six Sigma process.” The major thrust of this pro-
cess was to reduce the variability in every component characteristic to levels at which
the nonconformity rates, or the proportion of characteristics falling outside the limits
that were acceptable to the customer, would not be more than 3.4 parts per million.
Such levels of uniformity, or quality, were needed at the component level, Motorola
claimed, in order to attain acceptable quality levels in the assembled product. The Six
Sigma process emphasized the use of statistical tools for process improvement and
process redesign, along with a systematic problem-solving approach (called DMAIC
or DMEDI depending on the context where it is used) to improve customer satisfac-
tion, reduce costs, and enhance financial performance. The statistical methods and the
problem-solving methodology were taught to engineers, supervisors, and operators
through a training program.
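The 3.4 parts-per-million figure can be reproduced from the normal distribution. The sketch below is a minimal illustration; it relies on the convention, widely cited in Six Sigma literature though not stated in the text above, that the specification limits sit six standard deviations from the target while the process mean is allowed to drift by up to 1.5 standard deviations.

from statistics import NormalDist

def ppm_nonconforming(spec_limit_sigmas=6.0, mean_shift_sigmas=1.5):
    """Parts per million outside +/- spec_limit_sigmas when the mean has shifted."""
    z = NormalDist()  # standard normal distribution
    upper_tail = 1.0 - z.cdf(spec_limit_sigmas - mean_shift_sigmas)
    lower_tail = z.cdf(-spec_limit_sigmas - mean_shift_sigmas)
    return (upper_tail + lower_tail) * 1_000_000

print(f"{ppm_nonconforming():.1f} ppm")                      # about 3.4 ppm with the assumed 1.5-sigma shift
print(f"{ppm_nonconforming(mean_shift_sigmas=0):.4f} ppm")   # about 0.002 ppm for a perfectly centered process

The comparison also shows why the shift assumption matters: a process that stays exactly on target at the six-sigma level would produce only about two nonconforming parts per billion.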
Motorola’s successes in achieving quality and profitability through the implemen-
tation of the Six Sigma process attracted the attention of other corporations, such as
General Electric, Allied Signal, and DuPont, who in turn gained enormously from its
application. The list of Six Sigma adopters grew, and the Six Sigma movement spread
in the United States and abroad, helped by the formation of organizations such as the
Six Sigma Academy in 1994, which provided training and advice on Six Sigma imple-
mentation. The Six Sigma process is discussed in more detail in Chapter 9 as part of
the discussion about quality systems.
In 1987, the U.S. Congress established an award called the Malcolm Baldrige
National Quality Award (MBNQA), named after President Reagan’s secretary of
commerce. The award was established to reward U.S. businesses that showed the most
progress in achieving business excellence through the use of modern methods for
improving quality and customer satisfaction. This was an expression of recognition by
the U.S. government of the need for U.S. industries to focus on the quality of products
and services so that the U.S. economy could stay healthy and competitive.
The most recent development in the quality field has been the acceptance of quality
as a key strategic parameter for business planning along with traditional marketing
and financial measures. Organizations that accept quality as a planning parameter
will include quality goals in their strategic plans. They will set goals, for example, for
reducing the proportion of defectives that are shipped to customers, or for improving
the level of customer satisfaction, just as they set goals for increasing sales, improving
profit margins, or winning additional market share.
Table 1.1 shows, at a glance, the major milestones in the progression of quality
toward becoming an important factor in economic activities in the United States.
Although important activities relative to quality have been taking place in many parts
of the world, especially in Japan, England, and other parts of Europe, nowhere else
did quality come into such dramatic focus as it did in the United States.
The quality revolution, which has taken root in the United States and many
other developed and developing countries, is an important phenomenon in the new
global economy. Consumers are increasingly aware of the value of quality and will
continue to demand it. Legislative bodies will require it because quality in products
has come to be associated with safety and health of citizens. Industrial managers
Table 1.1 Major Events Related to the Quality Movement in the United States
1900 Post-Industrial Revolution era. Goods are mass produced to meet rising demands.
1924 Statistical control charts are introduced by Dr. Shewhart at Bell Labs.
1928 Acceptance sampling plans are developed by Drs. Dodge and Romig at Bell Labs.
1940 The U.S. War Department uses statistical methods and publishes a guide for using control charts.
1942 Several quality-control organizations are formed. Training classes are offered.
1946 American Society for Quality Control is organized.
1946 Dr. Deming is invited to Japan to help in their national census.
1949 JUSE begins offering classes in quality control to engineers and managers.
1950 Dr. Deming offers classes in quality methods to Japanese engineers, managers, and executives.
1950s Designed experiments are used in manufacturing (chemical industry).
1950s Study of reliability begins as a separate discipline.
1951 Dr. A.V. Feigenbaum proposes a systems approach to quality and publishes the book Total Quality Control.
1955 Beginning of quantity control concept (lean manufacturing) as part of the Toyota Production System (TPS)
in Japan.
1960s Academic programs in industrial engineering begin offering courses in statistical quality control.
1962 The quality control circle movement begins in Japan: Workers and foremen become involved in statistical
quality control.
1970s Many segments of American industry lose to Japanese competition.
1980 NBC broadcasts the documentary If Japan Can … Why Can’t We?
1980s American industry, led by the automobile companies, makes the recovery.
1981 Motorola introduces the Six Sigma process for quality improvement.
1987 The Malcolm Baldrige National Quality Award is established.
The first edition of ISO 9000 is issued.
1990s ISO 9000 standards gain acceptance in the United States.
2000s Quality is becoming one of the strategic parameters in business planning.
The pursuit of quality has significantly impacted the healthcare industry in recent
times. The Institute of Medicine (IOM), one of the United States National Academies
charged with providing advice to the nation on medicine and health, said: “The U.S.
healthcare delivery system does not provide consistent, high quality medical care to
all people…. Healthcare harms patients too frequently and routinely fails to deliver its
potential benefits. Indeed, between the healthcare that we now have and the health-
care that we could have lies not just a gap, but a chasm.” This conclusion was contained
in their report titled “Crossing the Quality Chasm: A New Health System for the 21st
Century,” published in March 2001. In an earlier report titled “To Err Is Human,”
the IOM said: “at least 44,000 people, and perhaps as many as 98,000 die in hospitals
each year as a result of medical errors that could have been prevented,” making medi-
cal errors a greater cause of fatalities than car wrecks, breast cancer, and AIDS. The
reports point to the lack of timely delivery of quality healthcare for people who need it.
There are several examples where healthcare organizations have successfully
implemented quality methods to improve the quality of care. In one example, the
Pittsburgh Regional Health Initiative (PRHI), whose mission is to improve health-
care in Southwestern Pennsylvania, achieved “numerous” successes in their effort to
continuously improve operations and standardize their practices and eliminate errors.
In one such success, they reduced catheter-related bloodstream infections at Allegheny
General Hospital by 95% between 2003 and 2006 and reduced the number of deaths
by such infections to zero. In another instance, they helped the Veterans Affairs
Pittsburgh Healthcare System reduce the rate of methicillin-resistant Staphylococcus aureus
infection from 0.97 per 1000 bed-days of care in 2002 to 0.27 in 2004 (Krzykowski
2009). These instances, however, seem to be exceptions in the healthcare industry.
Opportunities seem to be abundant for improving healthcare delivery in the United
States—both at the micro process levels in hospitals and at the macro levels of policy-
making—through the use of quality methodologies.
1.1.1 A Note about “Quality Engineering”
The experience of the past few decades has shown that quality is achievable through a
well-defined set of methods used during the design, production, and delivery of prod-
ucts. The collection of these methods and the theoretical concepts behind them can
be viewed as falling into an engineering discipline, which some have already called
“quality engineering.” The American Society for Quality, a premier organization of
quality professionals, uses this name to signify the body of knowledge contributing to
the creation of quality in products and services that leads to customer satisfaction.
They even offer training programs in quality engineering and certify those who pass
a written examination and acquire a certain level of experience in the quality field.
The term “quality engineering” has been used in quality literature to denote many
things. Some authors have used the term to refer to the process of improving product
quality using improvement tools. Many have used it to signify the process of selecting
targets and tolerances for process parameters through designed experiments. Some have
used it to mean the selection of product characteristics that will satisfy customer needs.
The term is used here, however, with a much broader meaning. In this book, quality
engineering refers to the discipline that includes the technical methods, management
approaches, costing procedures, statistical problem-solving tools, training and motiva-
tional methods, computer information systems, and all the sciences needed for designing,
producing, and delivering quality products and services to satisfy customer needs.
The body of knowledge needed to make quality products has assumed different
names at different times based mainly on the available set of tools at those times. It
was called “quality control” when final inspection before the product was shipped to
the customer was the only tool employed to achieve product quality. When statisti-
cal principles were used to create control charts and sampling plans, it assumed the
name “statistical quality control.” It also took the name “quality assurance” at this time
because the control charts were used to control the process upstream of final inspec-
tion and prevent defectives from being produced so as to assure defect-free shipments
to customer. “Statistical process control” (SPC) was another term used at this time
because of the control charts used for process control which were designed using sta-
tistical principles. Then came the addition of elements such as drawing control, pro-
curement control, instrument control, and other components of a total quality system
when people recognized that a system was necessary to achieve quality. When a new
management philosophy became necessary to deal with the quality system, the body
of knowledge was called “total quality management” (TQM). It was also known as
“company-wide quality control” (CWQC). Quality engineering has come to mean
that the body of knowledge needed for making quality products includes the science,
mathematics, systems thinking, psychology, human relations, organization theory,
and the numerous methods created from them that are used during the design, pro-
duction, and delivery of the product. It may be worth mentioning that the Six Sigma
methodology that has become so popular in recent years, as a means of improving
quality and reducing waste, encompasses almost the same set of knowledge that we
refer to here as “quality engineering.”
1.2 Defining Quality
We have used the term quality frequently in the above discussions without stopping
to explain its meaning. It may seem that there is no need to do so, as the word is
used liberally in both casual and professional contexts with tacit understanding of its
definition. Yet, if we asked a sample of the public or the professionals for a definition,
we may find as many definitions as the number of people asked. A uniform defini-
tion is however necessary when people of varying backgrounds in an organization
are engaged in pursuit of quality. If a clear, consistent definition of what is meant by
quality for a given organization and the products they create is not available, misun-
derstanding and confusion might result.
idea that it is not enough for a business to make the product with the necessary char-
acteristics to satisfy the customer’s needs; the business should also be able to profit by
selling it and succeed as an enterprise.
Furthermore, these definitions do not provide enough clarity when we want to be
able to measure quality so that it can be monitored and improved. For this purpose, a
clearer, more precise definition is necessary.
A complete definition of a product’s quality begins with identifying who the cus-
tomers are and determining their needs and expectations. A product designer takes
these needs and expectations into account and selects features of the product to create
a design that is responsive to the expressed needs. The designer also selects the targets
and limits of variation (called “tolerances”) for these features so that the product can
be produced at a reasonable cost while meeting the customer’s needs. These product
features, which must be measurable, are called the “quality characteristics.” The tar-
gets and tolerances together are called the “specifications” (or specs) for these charac-
teristics. The collection of the quality characteristics and their specifications define the
quality of the product.
When the product is created, the production personnel verify that the product’s char-
acteristics meet the targets and specifications chosen by the designer, which in turn
will assure that the product, when delivered to the customer, will meet their needs.
If the needs of the customer have been properly assessed, and if the quality charac-
teristics and their specifications have been chosen by the designer to respond to those
needs, and if the production team produces the product to conform to those speci-
fications, then the product will meet the needs of the customer when delivered. The
product is then considered a quality product.
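The vocabulary of quality characteristics, targets, tolerances, and specifications can be made concrete with a small data structure that records a specification and checks a measured value against it. The sketch below is illustrative only; the characteristic names and the numbers are hypothetical and are not taken from the text.

from dataclasses import dataclass

@dataclass
class Spec:
    """Specification for one quality characteristic: a target value and its tolerances."""
    name: str
    target: float
    tol_minus: float
    tol_plus: float

    def conforms(self, measured: float) -> bool:
        """True if the measured value lies within the specification limits."""
        return self.target - self.tol_minus <= measured <= self.target + self.tol_plus

# Hypothetical specifications for two characteristics of a machined shaft.
specs = [
    Spec("diameter (mm)", target=50.00, tol_minus=0.02, tol_plus=0.02),
    Spec("length (mm)",   target=120.0, tol_minus=0.5,  tol_plus=0.5),
]

measurements = {"diameter (mm)": 50.015, "length (mm)": 120.6}

for s in specs:
    value = measurements[s.name]
    status = "conforms" if s.conforms(value) else "out of spec"
    print(f"{s.name}: measured {value} -> {status}")

In this made-up example the diameter conforms while the length falls outside its specification, which is exactly the kind of check the production personnel perform when verifying a product against the designer’s specifications.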
1.2.1 Product Quality vs. Service Quality
Most of the discussions in this book regarding quality methods are made in the con-
text of producing a physical product, as it is easier to illustrate concepts of quality in
this manner. However, we recognize that services, such as mail delivery, dry cleaning,
equipment maintenance, and security, must also be “produced” to meet the needs
of the customers who use them. Even for these services, customer needs must be
assessed; the service features must be chosen to meet those needs; and production and
delivery must be performed in a manner to meet those needs. Only then will those
services have quality.
Services can be divided into two categories: primary services and secondary services.
Services such as mail delivery by the U.S. Postal Service, financial services by a bank,
and instructional services by a university are examples of primary services, because
the services mentioned are the major “products” of the respective organizations. On
the other hand, services provided by manufacturers of garage doors and lawnmowers
that help customers to install and use such products correctly are examples of second-
ary services. Similarly, the treatment received by a patient from doctors and nurses
in a hospital for a health problem is a primary service while the pre-treatment recep-
tion and post-treatment counseling are secondary services. Often, secondary services
(which are also simply called “customer services”) are important and are needed for
creating customer satisfaction in the primary product or service. The emergency road
service provided by a company that produces and sells cars is an example of this type
of customer service.
Methodologies that have been developed to create quality in physical products are
mostly applicable, sometimes with some modifications, to quality in services as well.
In the discussions of quality methods in this book, wherever the term “product” is
used while describing a quality method, it can generally be taken to mean both a tan-
gible product and an intangible service.
1.3 The Total Quality System
We made a brief reference to the Total Quality System and the Total Quality
Management System earlier and we want to add some details.
Many activities have to be performed by many people in an organization in the
design and production of a product so as to meet the needs of the customer. The
marketing department usually obtains the information on customer needs and pref-
erences. Design engineers take into account these needs and select product features
(characteristics) that would meet those needs. They also choose the target values and
tolerances for those product features. Manufacturing engineers determine what mate-
rials to use and what processes to employ in order to make the product to meet the
chosen specifications. The production team follows the set of instructions generated
by the manufacturing engineers and converts the raw materials into products with
characteristics within the stated specifications. The packaging department designs
and produces packaging such that the product will reach the customer safely and
without damage. All these are known as “line activities,” because they are directly
responsible for creating and delivering the product.
Several other supporting activities are also needed for producing a quality product.
Instruments must be properly chosen with required accuracy and precision to measure
the product characteristics, and they must be maintained so that they retain their
accuracy and precision. Training must be provided for all personnel who are involved
in productive functions so that they know not only how to make the product but also
how to make it a quality product. Equipment must be maintained so that they not only
keep running but also remain capable of meeting the required tolerances. Computer
hardware and software have to be installed and maintained to gather data and gener-
ate information to facilitate decision making. These are referred to as “infrastructure
activities.” For all these quality-related activities, including the line and infrastruc-
ture activities, responsibilities must be clearly assigned to various people in the orga-
nization. Rules are needed as to who has the primary and who has the supporting
responsibility when multiple agencies are performing a function. The rules should also
specify how differences, should they arise, are to be resolved. In other words, a system must
be established in which several component agencies with assigned responsibilities and
defined relationships will work together to meet the common goal of producing and
delivering a product that will meet the customer’s needs. Such a system is called a
“total quality system” (TQS).
Figure 1.1 shows the components of a TQS enveloped by a band that shows fea-
tures of quality management philosophy. Notice that the system includes in addition
to production processes, marketing, design engineering, process engineering, qual-
ity engineering, information systems, packaging and shipping, metrology, safety and
environment protection, human resources, plant engineering, materials management,
and customer service. Figure 1.1 shows all these functions with their main contribu-
tion to quality indicated below each. Each component in the system must perform its
function well, and the functions must be controlled and coordinated so that they all
work to optimize the system’s common objective of satisfying the customer. Many
of the functions are dependent on others performing their work satisfactorily. For
example, the production function depends on procurement to bring parts and materi-
als of the required quality. The procurement function depends on design activity for
describing the specifications and quality requirements of the parts and materials to
be procured. It also depends on stores to provide feedback as to how well suppliers
are adhering to the agreed standards for supplies. Similarly, the inspection function
depends on metrology for providing accurate and capable instruments for proper veri-
fication of quality characteristics.
The control and coordination of the components in the TQS are accomplished by
laying down the overall policies and procedures for the system, individual responsi-
bilities, interrelationships, and control procedures in written documents. These writ-
ten policies and procedures that define the TQS and help in preserving and improving
it should be effectively implemented and periodically audited to verify compliance.
Organizations use internal audit groups or external auditors to do such system audits.
1.4 Total Quality Management
A new management philosophy became necessary to manage the new total quality
system. In earlier days, when quality control was thought to be the sole responsibility
of the inspection department, functions such as marketing, product development, and
process development paid little attention to the quality of the finished product. Under
the new management system, a cultural change becomes necessary in order to create
awareness among all the employees of an organization of the needs of the customers
and to emphasize that each employee has a part to play in satisfying the customer. The
message that “quality is everyone’s job” needs to be delivered across the organization—
from the chief executive to those who put the products together on the shop floor.
Under the new philosophy, the top-level leadership has to first commit itself to pro-
ducing and delivering quality products and must then demand a similar commitment
from employees at all levels in the organization. They have to incorporate quality goals
in their planning processes. The management process has to change to one in which
everyone becomes empowered and involved in decision making rather than using an
older model of ruling by fiat with decisions handed down through a hierarchical super-
visory structure. Decisions have to be made based on factual information rather than
on feelings and beliefs. All levels of employees must be trained in the new principles
of business, with the customer as the main focus, and be trained to work in teams that
strive for the common good of the business. There should be a free flow of information
and ideas within the organization, and rewards must be set up to recognize significant
contributions made toward satisfying the customer. Finally, it is necessary to impress
on everyone that the effort to improve products and to make customers happier is a
continuous process. For one thing, the needs of customers keep changing because of
changes in technology or changes in a competitor’s strategies; then there are always
opportunities to make a product better and more suited to the customer.
Thus, the TQM philosophy has the following components:
1. The commitment of top management to making quality products and satisfy-
ing the customer.
2. Focusing the attention of the entire organization on the needs of the customer.
3. Incorporating quality as a parameter in planning for the organization’s goals.
4. Creating a system that will define the responsibilities and relationships of
various components toward customer satisfaction.
5. Decision making based on factual information in a participative environment.
6. Training all employees in TQM, involving them, and empowering them as
participants in the process.
7. Creating a culture in which continuous improvement will be a constant goal
for everyone and contributions toward this goal will receive adequate reward.
Figure 1.1 shows the components of the TQS complemented by the components of
the TQM philosophy. The entire system, the combination of the technology-oriented
TQS and the human-oriented TQM philosophy, is often referred to as the “Total
Quality Management System,” or simply the “quality management system.” Several
models for the quality management system, such as the ISO 9000 standards and the
MBNQA criteria, provide guidelines on how a quality management system should be
designed, organized, and maintained. These are discussed in Chapter 9.
1.5 Economics of Quality
After Japan’s defeat in World War II, its economy was in total disarray. The country
was faced with the immense task of reconstructing its economic and civil systems. Since
Japan is not endowed with many natural resources, the revival was based on importing
raw materials and energy, making products from them, and then selling the finished products in foreign markets. Japan was, at the time, also handicapped by the prevailing perception in world markets that the country's products were of notoriously poor quality. Hence, the revival of the economy was predicated on improving the quality of its products and gaining a new reputation for this improved quality. Japanese industrial
leaders, with help from their government, sought and received assistance from a few well-
known U.S. consultants, Dr. W. Edwards Deming being foremost among them. The
Japanese learned, adapted, and implemented statistical methods to suit their needs. They
learned how to organize TQM systems and produce quality products. They stormed the
world markets with high-quality goods, including automobiles, electronics, steel, and
chemicals. Japan became one of the economic powers in the world. Its economic success
was achieved through a competitive advantage derived from the quality of its products.
Meanwhile, in the United States, the Detroit-based automobile industry lost a dif-
ferent kind of war during the late 1970s and early 1980s. Imports, especially those
from Japan and Germany, were eating into the market share of U.S. automakers.
American consumers were impressed by the quality of the imports, which had better
features, ran longer without failures, and were cheaper to buy. U.S. automakers lost a
sizable portion of their market share, closed many plants, and laid off many workers.
“All three major U.S. automakers were awash in red ink; GM’s $762.5 million loss was
its first since 1921. By virtue of its size, GM was not hit as hard as Ford and Chrysler,
both of which hovered on the brink of ruin” (Gabor 1990). It took U.S. automakers
almost 10 years before they learned their lessons on quality. However, they did learn
how to listen to the needs of the customer, how to design the customer’s needs into the
cars, how to establish quality systems, and how to produce quality cars. They listened
to the counsel of quality gurus such as Dr. Deming and embraced new approaches to
managing people with trust and partnership. They then won back a major portion of
their lost customers. Lack of quality was responsible for their economic downfall, and
it was through embracing quality that they made the economic recovery.
In the days when quality control was done by inspecting all of the units produced and
then passing the acceptable products after rejecting those that were not, improvement
in quality was accomplished by rejecting products, which resulted in a decrease in sal-
able products (see Old Model in Figure 1.2). This had given rise to the impression that
quality improvement was possible only with a loss in output.
[Figure 1.2 Old model vs. new model of producing quality. Both models follow the chain: survey market, design product, produce product, final inspection, ship to customer. In the old model, product is shipped to the customer if good and discarded as scrap if not; in the new model, the chain is supported by personnel training, supplier partnership, concurrent engineering, process control, and measurement control.]
On the other hand, when product quality is achieved through a quality system and
through the use of methods that will prevent the production of unacceptable products,
thus making the product “right the first time,” then every product produced is a salable
unit (see New Model in Figure 1.2). In this new approach, improvement of quality
is accompanied by improved productivity. That is, with the same amount of available
resources—material, machinery, and manpower—more salable units are produced.
When overhead is spread over a larger number of units sold, the production cost per
unit decreases. When a part of the cost reduction is passed on to the customer, it
will result in better satisfaction for them, as they will be receiving a quality product
at a reduced price. Improved customer satisfaction will create repeat as well as new
customers and will lead to better market share. The improved market share improves
profitability for the business, which in turn contributes to secure and lasting jobs for
employees.
The above message that quality improvement through the prevention of defectives
would result in increased productivity, improvement in profits, and an increase in mar-
ket share—and hence more secure jobs for employees—was delivered by Dr. Deming
to his Japanese audience as part of his new management philosophy. He delivered the
same message to the American managers and executives through his book Out of the
Crisis (Deming 1986).
The transformation of an organization from the old model to a new TQM orga-
nization has happened in many instances apart from the dramatic turnaround that
occurred within the U.S. automobile industry in the early 1980s. And it is happening
even now in the 2000s, as seen in several examples that have been documented in the
literature. For example, the website of the National Quality Program (http://www
.baldrige.nist.gov/Contacts_Profiles.htm) contains many such examples where U.S.
organizations have excelled in quality and achieved business success through the use
of modern approaches to producing quality products and services.
1.7 Quality Costs
There were times, in the 1980s and before, when a company president or a business
executive would ask the question: “What will quality buy?” when asked to fund a qual-
ity improvement project or hire a new quality engineer. One only has to examine the
articles (e.g., McBride 1987) in the proceedings of the Annual Quality Congresses of
the American Society for Quality during those years to understand the predicament
of quality engineers and quality managers in trying to convince business executives of
the need for investment in quality improvement. They were trying to “sell” quality to
their bosses who would understand its value only if explained in terms of dollars and
cents. Even today, one should not be surprised to come across a business owner or a company executive who has questions about the value of quality to a business. The
concept of “quality costs,” which helps in expressing the value of quality in terms of
dollars and cents, helps in answering these questions.
[Figure: Categories of quality costs. Costs of producing quality: 1. Prevention cost, 2. Appraisal cost. Costs of not producing quality: 3. Internal failure cost, 4. External failure cost.]
The four categories of quality costs and their possible sources in a typical productive
organization are explained below. It should be noted that these quality cost catego-
ries only include annual operating costs and do not include any capital expenditure,
capital expenditure being defined as expenditure on equipment that has a lifespan of
more than one year. These quality costs are therefore referred to as “operating quality
costs.”
1.7.1.1 Prevention Cost Prevention cost is the cost incurred in activities intended to prevent defectives from being produced in the first place. The major components of this cost are described below.
a. Quality Planning
This is the cost incurred in preproduction activities such as a review of
drawings, specifications and test procedures, and selection of process
parameters and control procedures to assure the production of quality
products. Evaluating the producibility of the product with available equip-
ment and technology, selecting instruments and verifying their capabili-
ties, and validating production processes before the product is put into
production are other examples of quality-planning activity.
b. Training
This relates to training workers, supervisors, and managers in the fun-
damentals of quality methodology. Not all training costs fall in this category. The cost of training workers to make a product, for example,
is not a quality cost. The additional cost involved in training workers to
make a quality product, however, is a quality cost. For example, the cost
of training a machinist in the use of a new CNC lathe does not belong in
the quality cost category; the cost of training in the use of measuring tools,
data collection techniques, and data analysis methods for studying quality
problems does belong in this category.
c. Process Control
The cost incurred in the use of procedures that help in identifying when a
process is in control and when it is not, so as to operate the process without
producing defectives, would be part of this category. This would include
the cost of samples taken, the cost of time for testing, and the cost of
maintaining the control methods. If process control is part of the produc-
tion operator’s job, then the time that he or she spends for control purposes
must be separated from the time spent in making the product in order to
determine this cost.
d. Quality Information System
Gathering information on quality, such as data on quality characteristics,
the number and types of defectives produced, the quality performance of
suppliers, complaints from customers, quality cost data, and so on, requires
computer time and the time of computer professionals. Additionally, cost
is incurred in the analysis of the information and in the distribution of
the results to the appropriate recipients. All these expenditures would be
included in the prevention cost. It should be noted that the cost of a com-
puter, which is a capital expenditure, would not be included in the quality
cost. However, consumables, such as paper, storage disks, and minor soft-
ware, or royalties paid, would be included in this category.
e. Improvement Projects
The cost of special projects initiated to improve the quality of products
falls under prevention. For example, if a project is undertaken to make a
quality cost study, the expenditure incurred in making the study will be
considered part of prevention quality cost.
f. System Development
The amount of time and material expended in creating a quality system,
including the documentation of policies, procedures, and work instruc-
tions, would be part of the prevention category. Preparation for a system
audit—for example, against ISO 9000—would be part of the prevention
category.
The items in this section are some examples of activities where preven-
tion quality costs are incurred. There may be many other places where pre-
vention costs may be incurred depending on the type of industry and the
size of operations. The above list is a good starting point, and other sources
that generate prevention costs should be included when they are identified.
1.7.1.2 Appraisal Cost Appraisal cost is the cost incurred in appraising the condition
of a product or material with reference to requirements or specifications. The follow-
ing are the major components of this cost:
a. Cost of inspection and testing of incoming material or parts at receiving, or
at the supplier; material in stock; or finished product in the plant or at the
customer.
b. Cost of maintaining the integrity of measuring instruments, gages, and
fixtures.
c. Cost of materials and supplies used in inspection.
The inspection or verification of a product’s condition at the end of a production
process would be counted in the appraisal category. Inspection carried out on the
product within the production process for purposes of process control, however, would
be counted as prevention cost.
1.7.1.3 Internal Failure Cost Internal failure cost is the cost arising from defective
units that are produced but detected before being shipped to the customer. This is the
cost arising from the lack, or failure, of process control methods, resulting in defec-
tives being produced.
a. Scrap
This is the value of parts or finished products that do not meet specifica-
tions and cannot be corrected by rework. This includes the cost of labor
and material invested in the part or product up to the point where it is
rejected.
b. Rework or Salvage
This is the expense incurred in labor and material to return a rejected
product to an acceptable condition. The cost of storage, including heating
and security, while the product is waiting for rework is included in this
cost category.
c. Retest
If a reworked product has to be retested, the cost of retesting and
re-inspection should be added to the internal failure cost.
d. Penalty for Not Meeting Schedules
When schedules cannot be met because part of the production is rejected
due to poor quality, any penalty arising from such a failure to meet sched-
ules is an internal failure cost. Often, overproduction is scheduled to cover
possible rejections. The cost of any overproduction that cannot be sold
should be included in the internal failure category.
1.7.1.4 External Failure Cost External failure cost is the cost arising from defective
products reaching the customer. Making a bad product causes waste, and if it is allowed
to reach the customer, it causes much more waste. The external failure cost includes:
a. Complaint Adjustment
This is the cost arising from price discounts, reimbursements, and repairs
or replacements offered to a customer who has received a defective
product.
b. Product Returns
When products are returned because they are not acceptable, then, in
addition to the cost of replacement or reimbursement, the costs of han-
dling, shipping, storage, and disposal would be incurred. Often, when the
product includes hazardous material that needs special care for disposal,
the cost of disposal of rejects can become very high. All these costs belong
in the external failure costs.
c. Warranty Charges
This quality cost category includes costs that result from failures during
the warranty period. Any loss incurred beyond the original warranty is not
included in quality costs.
Many companies do not know the losses they sustain in not making quality products. Based on our experience, a quality cost study followed by appropriate improvement projects could lead to substantial savings—a 30% to 50% reduction in quality costs—within a year or two. Almost all of these savings go directly to improving the profit margin.
First, it is necessary to convince upper management of the necessity of a study and
obtain their approval. Because financial resources are needed and help from people
across the entire organization will be required to gather data, the study should have
the backing of the people at the top.
The literature contains numerous examples (e.g., Naidish 1992; Ponte 1992;
Shepherd 2000; Robinson 2000) that discuss successes achieved through quality cost
analysis. Such examples can be used to convince upper management about the value
of such a study. The case study described at the end of this chapter, a report on a real quality cost study, revealed that the estimated failure costs in a year amounted to about 94% of the company's profit from the previous year. In this case, a 50% reduction in failure costs would boost the company's profit by about 50%, an attractive proposition to any company CEO. In almost any situation, the cost of making a quality cost study can easily be justified by the benefits to be derived from it, especially if no such studies have been made previously.
Step 2: Organize for the Study and Collect Data
A quality cost study must be performed by a team with representation from various seg-
ments of the organization. The more fully these segments are represented on the team, the better the cooperation and information flow from them will be. Typically, the team
will include members from the quality department, manufacturing, sales, customer ser-
vice, engineering, inspection, and accounting. The quality engineer acts as the secretary of
the team and the accountant provides the authenticity needed in dealing with cost figures.
Before collecting the data, certain decisions have to be made on how the analysis
will be performed. Specifically, a decision must be made on whether the study should
be on an annual, semi-annual, or quarterly basis. Similarly, a decision must be made
on whether it is to be for the whole plant, a department, or a certain product line. For
a large plant, it may be more practical to make the analysis on a quarterly basis. For
small units, an annual study may be more appropriate. Similarly, if many products are
produced in a plant, it may be more feasible to start the study with one major product
and then expand it to other product lines. In smaller plants, analysis by individual
products may not generate enough data for a study.
Collecting the data for a quality cost study is by no means an easy task, especially
if it is being done for the first time. The data may not be readily available, because
they are not normally flagged and captured in accounting information systems. They
may be hidden in a manager’s expense accounts, salaries and wage reports, scrap and
salvage reports, customer service expenses, and several other such accounts. A large
amount of data may have to be estimated by asking the concerned individuals to make
educated guesses. However, most of the time, reasonable estimates are good enough
for initial analysis. After a successful initial study, when the value of the quality cost
study has been established and general agreement exists that the study should be
repeated periodically, it may be possible to include additional codes in the accounting
system so as to generate data that would be readily available for future studies.
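As an illustration of how such accounting codes can make the accumulation routine, the sketch below rolls up expense records tagged with quality cost category codes into the four categories. The account names, codes, and amounts are invented for the example and do not represent any particular accounting package.

```python
# A minimal sketch (hypothetical account records and category codes) of how
# expenses flagged in an accounting system can be rolled up into the four
# quality cost categories.
from collections import defaultdict

# Each record: (account description, quality-cost category code, amount in $).
# The codes P, A, IF, and EF are assumed here purely for illustration.
records = [
    ("SPC training class",         "P",   4_200),
    ("Receiving inspection labor", "A",  11_500),
    ("Scrapped castings",          "IF",  9_800),
    ("Warranty repairs",           "EF",  6_300),
]

names = {"P": "Prevention", "A": "Appraisal",
         "IF": "Internal failure", "EF": "External failure"}

totals = defaultdict(float)
for _, code, amount in records:
    totals[code] += amount

tqc = sum(totals.values())  # total quality cost for the period
for code, name in names.items():
    print(f"{name:<18} ${totals[code]:>9,.0f}")
print(f"{'Total quality cost':<18} ${tqc:>9,.0f}")
```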
The data collected can be very revealing, even in raw form. Many managers will see
for the first time just how much money is lost in making bad products or how much
expense is incurred in doing inspections to maintain a quality image. Studies (e.g., Naidish 1992) have shown that many company executives are unaware of the magnitude of
losses incurred by their organization due to the production and delivery of poor quality
products. The study reported by Naidish indicated that when company executives were
asked about the loss incurred by their organizations due to poor quality, about 66%
either did not know anything or thought it was a small fraction—less than 5%—when
the true state of affairs indicated a loss in the range of 20% to 40% of sales. In one
instance, one of the authors was a witness to a scene in which the owner of a printing
company was told that the annual cost of scrapped art prints in the plant was more
than the annual profit. The owner was shocked, jumped from his seat, and announced
that he wanted to fire every one of his employees in the room, which included his
VP-operations, the plant superintendent, the chief inspector, and the chief accountant.
The owner had been totally unaware of the magnitude of the waste until then.
If the data are analyzed in the manner discussed below, then information can be
generated in a more useful form.
Step 3: Analyze the Data
There are two ways of analyzing quality cost data once they have been accumulated
in the appropriate categories. Incidentally, in many situations, we may find that the
category to which a particular quality-related expense belongs is not clear. We should
be aware that the subcategories making up the total quality cost are not exhaustive.
There are other quality-related costs that some companies would add, and some industries have their own unique expenditures that belong in quality costs but do not quite fit the definitions of the categories given above. In such sit-
uations, the basic definitions of prevention, appraisal, and failure costs must be kept in
mind, and the quality-related expenditures allocated to the most appropriate category.
Analysis Method 1
Express the TQC as a percentage of some basis, such as net sales billed. When
the TQC is expressed as a percentage of such a basis, it can be used to monitor the
quality performance of the organization over time. It would enable a comparison with
another organization of similar size or one with the best-in-class performance. Other
bases, such as total manufacturing cost, direct labor, or profit, can also be used if they
would help in highlighting the magnitude of the quality costs in relation to the total
operations. A basis that would normalize quality costs over changes occurring in the
business volume would be preferred. Net sales billed is the commonly used basis.
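For example (all dollar figures below are made up), the Method 1 calculation is simply the TQC divided by the chosen base, and trying more than one base shows how much the resulting percentage depends on that choice:

```python
# Analysis Method 1: express the TQC as a percentage of a chosen base.
# All figures are hypothetical and used only to illustrate the arithmetic.
tqc = 1_250_000  # total quality cost for the year ($)

bases = {
    "net sales billed":         18_000_000,
    "total manufacturing cost": 12_500_000,
    "profit":                    1_400_000,
}

for name, base in bases.items():
    print(f"TQC as a percentage of {name}: {100 * tqc / base:.1f}%")
```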
What is the optimal level for quality costs as a percentage of sales? The answer
depends on the nature of the business, the maturity level of the quality system, how
cost categories have been defined, and the management’s attitude toward quality. The
TQC cannot be reduced to zero because it includes prevention and appraisal costs,
which always exist at some positive level. Some believe that failure costs must be
reduced to zero or near zero no matter how much it costs in prevention and appraisal.
Dr. Deming (1986), for example, believed that quality should not be judged by what it
returns in dollars in the immediate future. Good quality earns customer satisfaction,
which brings more customers, whereas poor quality spreads customer dissatisfaction
and produces a loss of market share in the long term. According to Dr. Deming, con-
tinuously improving quality to reach excellence should be the goal, regardless of how
much it costs. His message was: quality must be achieved at any cost.
On the other hand, others believe that expenditure in quality efforts must be made
only to the extent it is justified by the immediate economic returns it brings. The
relationship between quality level in a production system and the quality costs can
be studied using the graphs shown in Figure 1.4. The figure shows that increases in quality level are achieved through increases in prevention and appraisal costs, and that as the quality level increases, failure costs decrease. The TQC, which is the sum of these two costs, changes with quality level as shown by the third curve. This curve
indicates that the TQC decreases with increase in quality level up to a point and
then starts increasing with further increase in quality. It conveys the message that
the TQC reaches a minimum at a particular quality level, beyond which the cost of
control becomes so expensive that it overwhelms the returns achieved by improvement
in quality. It seems to justify a commonly held view that quality has an optimum level
based on the economics of a given situation, and that striving for quality beyond that
optimum may not be economically rewarding. “Perfectionism” and “gold plating” are
some of the terms used to refer to attempts to improve quality beyond this economic
level.
[Figure 1.4 Quality cost in output ($) versus quality level: the prevention and appraisal cost curve rises with quality level, the failure cost curve falls, and the total quality cost curve passes through a minimum.]
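The shape of the TQC curve in Figure 1.4 can be made concrete with a small numerical model. The two cost functions below are invented purely for illustration: prevention-and-appraisal cost rises steeply as the quality level q approaches 1, while failure cost falls in proportion to the remaining defectives, so their sum passes through a minimum.

```python
# Illustrative (hypothetical) quality cost model: find the quality level q,
# 0 < q < 1, that minimizes TQC(q) = prevention_appraisal(q) + failure(q).

def prevention_appraisal(q):
    return 20_000 * q / (1 - q)   # grows without bound as q approaches 1

def failure(q):
    return 500_000 * (1 - q)      # proportional to the fraction defective

grid = [i / 1000 for i in range(1, 1000)]
min_tqc, q_star = min((prevention_appraisal(q) + failure(q), q) for q in grid)
print(f"Illustrative optimum near q = {q_star:.2f}, TQC = ${min_tqc:,.0f}")
```

In this toy model, making inspection more efficient corresponds to lowering the coefficient of the prevention-and-appraisal term, which moves the minimizing quality level closer to 1; this is consistent with the observation by Juran and Gryna discussed next.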
Juran and Gryna (1993) have commented that this optimum moves to higher levels
of quality when more efficient and automated methods of inspection are used, which decreases the cost of prevention and appraisal. This means that by employing efficient methods of inspection, better quality can be achieved at optimal cost. Yet the message remains that quality has an economic optimum, and quality levels beyond this limit result in net losses.
Neither one of the above views can be the right policy for all situations. The policy
applicable depends on the given situation. If, for example, the product is an automo-
bile brake system, where a defective product could cause enormous property damage,
injury to people, and loss of reputation for the company, then “quality at any cost” may
be a proper goal. The failure costs in the TQC cannot adequately represent the real
cost of failures. On the other hand, if the cost of marginal improvements in quality is
high and the improvement will not produce commensurate customer satisfaction, then
“quality at the most economic level” may be a justifiable policy. Thus, the “optimal”
level of the TQC as a percentage of sales depends on the particular philosophy that an
organization adopts according to the requirements in its business environment.
We have seen quality systems where the failure quality costs were reduced to levels
as low as 2% of sales, with potential for even further reduction. The failure costs can be
driven down to very low levels with the use of problem-solving tools available to a quality
engineer. The TQC may be at a higher percentage of sales for high-precision industries
such as precision machine shops and instrument makers in comparison to low-precision
industries, such as wire and nail manufacturers and furniture makers. In each situation,
the TQC should be judged on whether it is appropriate for the situation, possibly in comparison with comparable benchmarks. If the TQC is more than 10% to 15% of sales in this day and age, it should raise red flags (Duffy 2013). Then, sources of excessive costs must be searched out and improvement solutions must be implemented. We would
venture to say that, in any system, there will always be improvement opportunities that
can produce a sizable return for a small investment. Such opportunities should be con-
tinuously sought, and projects should be completed to improve quality and reduce waste.
Analysis Method 2
In this method, the costs under individual categories are expressed as percentages
of the TQC, giving the distribution of the total cost among the four categories. Such
a distribution usually reveals information on what is happening within the quality
system. The following two examples illustrate the idea:
                   EXAMPLE 1    EXAMPLE 2
Prevention             3%           4%
Appraisal             20%          87%
Internal failure       9%           9%
External failure      68%           0%
Total                100%         100%
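Computing such a distribution from raw category totals is a one-line calculation per category; the sketch below uses invented dollar figures chosen to reproduce the distribution of Example 1.

```python
# Analysis Method 2: express each category as a percentage of the TQC.
# The dollar figures are hypothetical, chosen to match Example 1 above.
costs = {
    "Prevention":        24_000,
    "Appraisal":        160_000,
    "Internal failure":  72_000,
    "External failure": 544_000,
}

tqc = sum(costs.values())  # 800,000
for category, amount in costs.items():
    print(f"{category:<16} {100 * amount / tqc:5.1f}%")
print(f"{'Total':<16} {100.0:5.1f}%")
```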
While reviewing the results of a quality cost study, it may sometimes be obvious what needs to be done in order to correct a quality problem and reduce costs. For
example, in one case in which the external failure costs were found to be high, it was
discovered that the shipping department was sending products made for one customer
to another customer! The problem was quickly corrected by installing a simple ship-
ping control procedure. In another case, the inspection department was testing an
electric product under one set of conditions (110 V), whereas the customer was testing
the product under a different set of conditions (240 V). This type of misinterpreta-
tion of a customer’s specifications is, unfortunately, a common cause of product rejec-
tion and customer dissatisfaction. Again here, the problem was corrected by a simple
change to the testing procedure.
In other situations, however, the reasons for large failure costs, internal or external,
may not be so obvious. Further investigation may be necessary to determine the root
causes of the problem and take corrective action. In other words, projects must be
undertaken, perhaps by a team, to gather data, perform analyses, discover solutions, and implement changes to effect quality improvement. Then, problem-solving tools such as Pareto analysis, the fishbone diagram, regression analysis, and designed experiments become necessary. These problem-solving tools, along with the systematic procedures
for problem solving and project completion, are discussed in Chapter 8.
After the initial study has been completed and a few quality-improvement projects have
been successfully implemented, new codes can be created within an existing accounting
system. The monograph by Morse et al. (1987), published by the Institute of Management
Accountants, provides useful guidance on creating such a system. Quality cost data can
then be accumulated on a routine basis so that future studies can be made without much
labor. Periodic analysis and reporting would help in monitoring the progress made in
reducing these costs through identifying newer opportunities for improvement.
Figure 1.5 Typical progression of total quality costs over time. (Adapted and reproduced from Noz et al., Transactions of the 37th Annual Quality Congress, pp. 303. Milwaukee: American Society for Quality Control, 1983. With permission.)
Figure 1.6 Typical changes in quality cost categories over time. (Adapted and reproduced from Noz et al., Transactions of the 37th Annual Quality Congress, pp. 303. Milwaukee: American Society for Quality Control, 1983. With permission.)
When
quality cost studies are made on a continuing basis, a scoreboard can be generated to
provide a pictorial perspective on how quality costs have been changing over time.
Figures 1.5 and 1.6 show the typical results of a quality cost study performed over
a period of several years. Figure 1.5 shows the behavior of the TQC over time, and
Figure 1.6 shows how the distribution of individual cost categories changes over time.
Table 1.2 describes the conditions expected in organizations while on the quality
improvement journey, and shows the appropriate responses to those conditions needed
to achieve success in the improvement program.
A collection of the best advice for people who want to use a quality cost study to
pursue a quality improvement program comes from Ponte (1992). His advice can be
summarized in the following sequence of steps, which he calls the “cost of quality”
(COQ) program:
1. Obtain a commitment from the highest level of management in the organiza-
tion.
2. Choose a “focal person” to be responsible for promoting the COQ program.
3. Write a COQ procedure, which will include the definitions, responsibilities,
training material, analyses to be made, reports to be generated, and the
follow-up action needed.
4. Make the COQ procedure a dynamic document, requiring review and reas-
sessment every six to 12 months.
5. Make a monthly COQ program review meeting, chaired by the CEO, a mandatory requirement of the COQ procedure.
6. Train all involved employees in the principles and procedures of the COQ
program.
TABLE 1.2 Progression of Activities during a Quality Improvement Program Guided by Quality Costs
Year 1. Condition: Large failure cost, most of it external; low appraisal costs; prevention almost nonexistent. Appropriate action: Improve final inspection and stop defectives from leaving the plant; develop quality data collection; identify projects and begin problem solving.
Year 2. Condition: Failure costs still high; prevention effort low. Appropriate action: Establish process control; improve data collection, data analysis, and problem solving.
Year 3. Condition: Process control installed; some process capabilities known; external failures at low levels. Appropriate action: Improve process control on critical variables based on capability data; begin using experiments to select process parameter levels.
Year 4. Condition: Failure costs 50% of total quality costs. Appropriate action: Continue using designed experiments for process improvements to reduce variability; continue process control on critical variables.
Year 5. Condition: Improved processes. Appropriate action: Continue process control; look for project opportunities for further variability and cycle time reduction; improve quality of service processes: maintenance, safety, scheduling, and customer service.
Years 6 and 7. Condition: Process in stable condition. Appropriate action: Same as above.
Source: Adapted and reproduced from Noz et al., Transactions of the 37th Annual Quality Congress, pp. 303. Milwaukee: American Society for Quality Control, 1983. With permission.
7. Develop visibility for the program by creating a trend chart that shows the
gains achieved. (Use manufacturing cost as the base rather than sales dollars,
because the former will show the COQ as a larger percentage.)
8. Demand root-cause analysis for determining the causes of failures.
9. Demand closed-loop, positive corrective action response.
Indirect quality costs are the losses incurred in future sales, future customers, and general market share because of a poor quality image. These costs are very difficult to quantify although they exist.
Liability costs are losses from liability actions by product users. They include costs
arising from exposure to liability, even when no actual judgment is issued against the
company, such as expenses incurred for preparing to defend against liability actions
(e.g., cost of legal fees, expert witnesses, data collection, etc.).
Equipment quality costs are the costs arising from capital equipment that has a direct
impact on quality, such as automated data collection systems, automatic test equip-
ment (ATE), measuring equipment, and computer hardware and software meant spe-
cifically for processing quality information. The cost of space, heat, security, and other
appurtenances needed for the equipment is part of this cost. The cost of equipment, which has a life beyond one year, is, as previously mentioned, not normally included in quality
costs. However, in some operations where ATE or the modern computerized measur-
ing machines known as coordinate measuring machines (CMMs) are used, this cost
can be quite considerable, and so is included in the quality costs to fairly reflect the
expenditure involved in creating quality.
Life-cycle quality costs are the costs arising from service and repair done to a product
over a reasonable time beyond the warranty period. Normally, only service charges
incurred within the warranty period are included as part of the external failure cost.
There could be a plausible argument that the entire life-cycle cost of maintenance must
be included as part of quality costs because they reflect the original quality that was
designed and built into the product. These costs may be relevant to some industries, although collecting data for them may be difficult because they occur during the life of the product, possibly long after it has left the producer.
All these quality costs discussed above exist in varying degrees in different systems.
Presumably, the reason that they were not included in the TQC index defined by the
ASQ is that they are very difficult to collect and quantify, and many of them have a long lag time between their occurrence and when they actually show up in company books.
As mentioned earlier, some companies use only the failure costs as the index to mon-
itor the quality performance of a system. Others add some of the indirect costs to the
TQC if they seem to be significant to their operations. It does not matter how the eco-
nomic index is defined for a company as long as the same definition is used consistently
over the years so that comparisons from year to year are valid. However, if a comparison is sought between one company's TQC and another's—for example, a quality leader or a competitor—then the TQC needs to be made up of the same components in both systems in order for the comparison to be valid. That is when a standard definition for the four categories and the TQC, as recommended by ASQ, becomes relevant.
Often a quality professional has a question during his or her improvement journey:
“How much more should I invest in prevention or appraisal in order to bring down
the current failure costs by, say, 50%?” To answer such a question the interrelationship
among the quality cost categories—the relationship that shows how a change in one
category produces changes in others—is needed.
Several researchers have attempted to answer this question, yet no satisfactory
answers seem yet available. Statements, such as “an ounce of prevention is worth a
pound of failure” (Campanella and Corcoran 1982), describe the relationship in a
qualitative manner and indicate the leverage that prevention has in reducing fail-
ure costs. Krishnamoorthi (1989) attempted to obtain a relationship among the cat-
egories using regression analysis, treating the percentages in the four cost categories
as random variables taking different values in different quality systems. Cost data
were obtained from different quality systems, and the relationships among the cat-
egories were obtained using regression analysis. The following two relationships were
obtained where E, I, P, and A represent the percentages of external failure, internal
failure, prevention, and appraisal costs of the TQC, respectively:
E = 5.9/P + 298/A                    (1.1)

I = 121/P + 0.213A                   (1.2)
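To see how such fitted relationships might be used in planning, the short script below evaluates Equations 1.1 and 1.2 for two assumed prevention levels while holding appraisal constant. The input percentages are hypothetical, and since the equations are separate empirical fits, the four percentages they imply need not add up to exactly 100.

```python
# Evaluate the fitted relationships of Equations 1.1 and 1.2, where P, A, I,
# and E are the percentages of the TQC in each category.

def external_failure(P, A):
    return 5.9 / P + 298 / A          # Equation 1.1

def internal_failure(P, A):
    return 121 / P + 0.213 * A        # Equation 1.2

# Hypothetical scenarios: double prevention from 2% to 4% of TQC, with A at 25%.
for P, A in [(2.0, 25.0), (4.0, 25.0)]:
    E, I = external_failure(P, A), internal_failure(P, A)
    print(f"P = {P:.0f}%, A = {A:.0f}%  ->  predicted E = {E:.1f}%, I = {I:.1f}%")
```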
A quality cost study reveals quite a bit of information about the health of a quality
system. It indicates whether quality is produced by the system at the right price. When
the system health is not good, the study points to where deficiencies can be found so
that action can be taken to rectify them.
The quality costs, which are the costs associated with control activities devoted to
producing quality products plus the losses incurred in not producing quality products,
are grouped into four categories: prevention, appraisal, internal failure, and exter-
nal failure. The data collected from various sources that generate quality costs can be
analyzed in two ways. The TQC, which is the sum of the costs in the four categories,
expressed as a percentage of some chosen base that represents the volume of business
activity, can be used to follow the progress in quality performance of an organization
over time. It can also be used for comparing one company’s performance with that of
another, a leader or a competitor in the same industry. When the four cost categories
are expressed as percentages of the TQC, such a distribution will reveal where defi-
ciencies exist. The value of a quality cost study lies in pursuing such opportunities,
discovering root causes, and implementing changes for improvement.
Many other quality costs, such as indirect costs, liability exposure costs, and so on,
are not included in the TQC index because these are difficult to obtain and have a
long lag time in coming. However, the TQC as described above, as recommended by
the ASQ’s Quality Cost Committee, is a good index to reflect the economic benefits
of implementing a quality assurance system.
Most models for quality systems, such as the ISO 9000, the MBNQA criteria,
or the Six Sigma process (discussed in Chapter 9), recommend use of a performance
measure to monitor the overall performance of an organization in quality. The TQC
is a good measure to use for this purpose.
The case study provided below relates to ABCD Inc., a company that packages mate-
rials such as liquid soap, deodorant, and cooking oil for large customers who ship the
material to ABCD in tank cars. The company then fills the liquids into bottles or cans
in its filling lines, seals and labels them, packages them in cartons and skids, and ships them to the customers' warehouses. Sometimes, the company performs mixing operations before or while filling the liquids into the containers. Customers' quality require-
ments include the appearance and strength characteristics of bottles and cans, and a
very demanding set of specifications for fill volume or fill weight of the contents of the
bottles and cans.
The study made by D.H., their quality engineer, is called a “preliminary study,”
because information regarding several aspects of quality costs was either not available or incomplete when this study was made. Many of the figures were good estimates. This study was the starting point of several improvement projects, which resulted in considerable improvements in the operation of the packaging company and were responsible for a considerable increase in its operating profits.
ABCD INC.
QUALITY IMPROVEMENT PROGRAM
PRELIMINARY QUALITY COST STUDY
MADE BY: D.H.
Important Notes
1. This study is a preliminary study in that several costs, which are unavail-
able or difficult to estimate, have not been included. For instance, this
study does not attempt to estimate the proportion of management sala-
ries that are part of quality costs.
2. Based on Note 1 above, one should realize that our total quality cost is
actually much greater, probably 25% to 50% higher than the estimate
presented here.
3. However, even considering Notes 1 and 2 above, this study still gives
a good picture of the magnitude of our quality problems, and it pro-
vides an excellent measure of effectiveness for monitoring our quality
improvement program.
4. Costs have been determined for the DDD plant only.
5. This study presents quality costs incurred in the period January to June
XXXX, a six-month period, and extrapolates these costs to provide fig-
ures on a “per year” basis.
6. Descriptions of all quality costs included in this preliminary study are
given on the following pages. These quality costs have been put into
three categories: failure costs (internal and external), appraisal costs, and
prevention costs.
7. A summary and conclusion have been provided.
8. The assistance of L.M. and F.P. is greatly appreciated.
Failure Costs (Internal and External)
1. ABCD Material Loss
This category includes the costs of any material (chemical or component) that is
lost because of overfill, changeovers, spills, or any other cause that results in using
more material than should have been used. This cost is calculated by taking the differ-
ence between the cost of the raw materials that theoretically should have been used
to produce the finished product and the cost of the raw materials that were actually
used to produce the finished product. No loss allowances are taken into account
here. This category includes only those costs associated with ABCD-owned mate-
rials (see Category 3 for the costs associated with customer-owned materials).
7. Disposal
This category includes only the cost of disposing of the crushed cans. The figure
given here is based on a cost of $85 per trip and an average number of pickups
of 1.5 per week.
Total loss from material disposal for six months: $3300.
8. Re-Sampling
This category includes the cost of re-inspecting products after they have been
reworked. The figure for this cost is calculated by multiplying the total number
of re-sampling hours by an average hourly wage.
Total loss from re-sampling for six months: $3300.
9. Downtime
Although total downtime per month is available and somewhat categorized,
considerable difficulty is encountered when attempting to estimate the propor-
tion of downtime that results from quality problems. As a result, I have not
included the downtime caused by poor quality in this study.
10. Obsolete Material
I have not included loss caused by obsolescence in this study.
11. Customer Service
The percentage of the salaries in the customer service department equivalent to
the proportion of time dedicated to dealing with customer complaints, returned
material, allowances, and so on, should be included in our total quality cost.
However, since these figures are not readily available and would require consid-
erable estimation, I have not included them here.
12. Managerial Decision
One of the most significant quality costs in our organization is the cost associated with the time that managers have to spend on quality-related problems. Not only is this cost large, it is also difficult to estimate and monitor. Hence,
as stated at the beginning of this study, I have not attempted to estimate these
costs. We must realize, however, that this quality cost is very significant.
Total failure cost for six months: $941,500.
Appraisal Costs
1. Chemical QA
This category includes all labor and other expenses associated with the chemical
quality assurance (QA) department. These figures can be found in accounting
documentation under Department 632.
Total cost of chemical QA for six months: $164,600.
2. Physical QA
This category includes all labor and other expenses associated with the physi-
cal and component QA department. These figures can be found in accounting
documentation under Department 633. Subtracted from these figures (and put
in prevention costs) are those costs associated with quality reporting.
Total cost of physical QA for six months: $162,800.
3. Inspection and Testing
This category includes all labor associated with inspecting and testing products
for the first time. Fifty percent of the cost of the line process control coordinators
(PCCs) was allocated to inspection and testing, and the other 50% was allocated
to the prevention costs. This cost was estimated by D.R.
Total inspection and testing cost for six months: $119,000.
4. Materials and Services
This category includes the cost of all products and materials consumed during
testing. This cost was estimated by D.W.
Total material and service cost for six months: $30,000.
5. Test Equipment
This category includes the cost of maintaining the accuracy of all test equipment.
Any cost involved in keeping instruments in calibration is included in this cost.
This cost was estimated by D.W.
Total cost due to test equipment for six months: $12,500.
Total appraisal cost for six months: $488,900.
Prevention Costs
1. Process Control
This category includes all labor associated with assuring that our products are
fit for use. Included in this cost is 50% of the cost of the line PCCs; the other
50% of the cost of the line PCCs was allocated to appraisal costs. This cost was
estimated by D.R.
Total cost for six months: $119,000.
2. Improvement Projects and New-Products Review
This category includes all labor associated with reviewing new products for
preparing bid proposals, evaluating potential problems, making quality
improvements, and so on. Included in this cost is the proportion of the sala-
ries of technical service and product development personnel corresponding to
the percentage of time spent on quality activities. After consultation with the
personnel involved, the following figures were obtained. The cost of services related to the preparation of cost quotes, preparation of specifications, regulatory activities, preparation of batch packets, new-product review meetings, and so on is about $28,700. The cost of technical services' quality-related activities is about
$33,800.
Total cost of improvement projects for six months: $62,500.
3. Quality Reporting
This category includes the cost of making quality reports to management. This
cost has been estimated and taken out of the physical QA.
Total cost of quality reporting for six months: $8500.
4. Training
The cost of all quality-related training that was undertaken by anyone in the
company during the six-month period should be included in this section. These
costs are not readily available and, hence, are not included in this study.
Total prevention cost for six months: $190,000.
SUMMARY
Failure costs $941,500
Appraisal costs $488,900
Prevention costs $190,000
Total quality cost for six months: $1,620,400
Sales for January–June, XXXX $24,287,000
Quality cost as percentage of sales 6.67%
Failure cost as percentage of sales 3.88%
Profit for January–June, XXXX $1,005,000
Total quality cost for the period $1,620,400
Quality cost as percentage of profit 161%
Failure cost as percentage of profit 94%
Interrelationship of quality cost categories:
Failure cost 58%
Appraisal cost 30%
Prevention cost 12%
Total quality cost 100%
Conclusions
1. The estimated total quality cost for ABCD Inc. is $3,240,800 per year.
The actual quality cost is probably 25% to 50% greater because of the
several costs that were not readily available.
2. The total quality cost of 6.67% of total sales should be compared with
other companies in our industry. We will attempt to find such compa-
rable values.
3. The total quality cost is much greater than our profit. Any quality cost
that is eliminated directly becomes profit. In other words, our profit over
the first six months of this year would have been $2,625,400 (instead of
$1,005,000) if no quality costs were incurred.
4. Failure costs (internal and external) account for 58% of all quality costs. This percentage
is too high. More resources should be allocated to prevention.
5. The two highest failure cost categories—and hence the ones that should
be tackled first—are:
a. Material losses
b. Rework labor
6. This cost study will be repeated periodically to monitor our progress in
making quality improvements.
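As a quick arithmetic cross-check of the study's summary, the script below recomputes the reported ratios from the stated six-month totals; it introduces nothing beyond the figures already given in the report.

```python
# Cross-check of the ABCD Inc. preliminary study summary (six-month figures).
failure, appraisal, prevention = 941_500, 488_900, 190_000
sales, profit = 24_287_000, 1_005_000

tqc = failure + appraisal + prevention                                # $1,620,400
print(f"Total quality cost:          ${tqc:,}")
print(f"Quality cost as % of sales:  {100 * tqc / sales:.2f}%")       # ~6.67%
print(f"Failure cost as % of sales:  {100 * failure / sales:.2f}%")   # ~3.88%
print(f"Quality cost as % of profit: {100 * tqc / profit:.0f}%")      # ~161%
print(f"Failure cost as % of profit: {100 * failure / profit:.0f}%")  # ~94%
print(f"Failure share of TQC:        {100 * failure / tqc:.0f}%")     # ~58%
print(f"Annualized TQC estimate:     ${2 * tqc:,}")                   # $3,240,800
```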
1.8 Success Stories
Many organizations have successfully made use of the methodologies available for
creating quality products and satisfying customers and have achieved business success.
Many stories can be found in the literature where failing organizations were revived
to reach profitability by using the systems approach to making and delivering quality
products to customers. Each year the Baldrige Performance Excellence Program pub-
lishes stories about winners of the Malcolm Baldrige National Quality Award on their
website: baldrige.nist.gov/Contacts_Profiles.htm. Anyone who reads a few of these
stories will become convinced about the value of the quality methodologies that have
been developed by the leaders and professionals in the quality field. These methods,
discussed in the following chapters of this book, can be put to use by anyone wishing
to make quality products, satisfy customers, and succeed in business.
1.9 Exercise
1.9.1 Practice Problems
1.1 Pick out the appraisal quality cost from the following:
a. Fees for an outside auditor to audit the quality management system
b. Time spent to review customers’ drawing before contract
c. Time spent in concurrent engineering meetings by a supplier
d. Salary of a metrology lab technician who calibrates instruments
e. None of the above
1.2 The cost of writing the operating procedures for inspection and testing as
part of preparing the quality management system documentation should be
charged to:
a. Appraisal costs
b. Internal failure costs
c. Prevention costs
d. External failure costs
e. None of the above
– Appraisal: 23%
– Internal failure: 15%
– External failure: 60%
A recommendation for immediate improvement:
a. Increase external failure
b. Invest more money in prevention
c. Increase appraisal
d. Increase internal failure
e. Do nothing
1.9 A practical, simple definition for the quality of a product is:
a. It shines well
b. It lasts forever
c. It is the least expensive
d. It is fit for the intended use
e. It has a well-recognized brand name
1.10 According to the modern definition of quality, in addition to the product
meeting the customer’s needs, it should be:
a. Produced at the right price through efficient use of resources
b. Packaged in the most colorful boxes
c. Advertised among the widest audience possible
d. Produced by the most diverse group of people
e. Sold in the niche market
1.11 One of the following is not part of total quality management:
a. Commitment of top management to quality and customer satisfaction
b. The inspection department being in sole control of quality
c. The entire organization being focused on satisfying the needs of the
customer
d. Creating a culture where continuous improvement is a habit
e. Training all employees in quality methods
1.12 A batch of thermostats failed in the test for calibration at the end of the
production line. After being individually adjusted, they were retested. The
cost of this retest will be part of:
a. Prevention quality cost
b. Appraisal quality cost
c. Retest quality cost
d. Internal failure quality cost
e. External failure quality cost
1.13 Indirect quality costs are the costs incurred:
a. By vendors, which are then charged indirectly to the customer
b. In building additional capacity or inventory to make up for units lost due
to poor quality
c. In lost future sales because of poor quality image
1.9.2 Mini-Projects
Mini-Project 1.4 This project deals with a “large” system where the people involved
are counted in millions, and the costs and benefits are counted in billions. It takes a
bit of experience to get used to thinking in millions and billions.
The U.S. Social Security system was created by an act of the U.S. Congress in
1935 and signed into law by President Franklin Roosevelt. The main part of the pro-
gram includes providing retirement benefits, survivor benefits, and disability insur-
ance (RSDI). In 2004, the U.S. Social Security system paid out almost $500 billion in
benefits. By dollars paid, the U.S. Social Security program is the largest government
program in the world and the single greatest expenditure in the U.S. federal budget.
More on the Social Security system can be obtained from their website: http://www
.ssa.gov/.
How would you define the quality of service provided by the Social Security sys-
tem? What are the “products,” and what are the characteristics and performance mea-
surements to be taken to monitor the service quality of the Social Security system?
References
Campanella, J., and F. J. Corcoran. 1982. “Principles of Quality Costs.” Transactions of the 36th
Annual Quality Congress, Milwaukee, WI: A.S.Q.C.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: MIT Center for Advanced Engineering
Study.
Duffy, G. L. 2013. The ASQ Quality Improvement Pocket Guide: Basic History, Concepts, Tools, and
Relationships. Milwaukee, WI, ASQ Quality Press. 62–65.
Feigenbaum, A. V. 1956. “Total Quality Control.” Harvard Business Review 34 (6): 93–101.
Feigenbaum, A. V. 1983. Total Quality Control. 3rd ed. New York: McGraw-Hill.
Gabor, A. 1990. The Man Who Discovered Quality. New York: Time Books. Reprinted, New
York: Penguin Books.
Garvin, D. A. 1984. “What does Product Quality Really Mean?” Sloan Management Review
26: 25–43.
Godfrey, A. B. 2002. “What is Quality?” Quality Digest 22 (1): 16.
Gryna, F. M. 2000. Quality Planning and Analysis. 4th ed. New York: McGraw-Hill.
Harrington, H. J. 1987. Poor Quality Cost. Milwaukee, WI: A.S.Q.C. Quality Press.
Institute of Medicine (IOM). 1999. To Err is Human: Building a Safer Health System. Washington,
DC: National Academy Press. www.nap.edu/books/0309072808/html/.
Institute of Medicine (IOM). 2001. Crossing the Quality Chasm: A New Health System for the 21st
Century. Washington, DC: National Academy Press. www.nap.edu/books/0309072808
/html/.
Ishikawa, K. 1985. What is Total Quality Control? The Japanese Way. Translated by D. J. Lu.
Englewood Cliffs, NJ: Prentice-Hall.
Juran, J. M. (editor-in-chief). 1988. Quality Control Handbook. 4th ed. New York: McGraw-Hill.
Juran, J. M., and F. M. Gryna. 1993. Quality Planning and Analysis. 3rd ed. New York:
McGraw-Hill.
Krishnamoorthi, K. S. 1989. “Predict Quality Cost Changes Using Regression.” Quality Progress
22 (12): 52–55.
Krzykowski, B. 2009. "In a Perfect World." Quality Progress 33 (11): 32–34.
Markus, M. 2000. “Failed Software Projects? Not Anymore.” Quality Progress 33 (11): 116–117.
McBride, R. G. 1987. “The Selling of Quality.” Transactions of the 41st Annual Quality Congress.
Milwaukee, WI: A.S.Q.C.
Moen, R.D and C.L. Norman. 2016. “Always Applicable.” Quality Progress 49 (6): 47–53.
Morse, W. J., H. P. Roth, and K. M. Poston. 1987. Measuring, Planning, and Controlling Quality
Costs. Montvale, NJ: Institute of Management Accountants.
Naidish, N. L. 1992. “Going for the Baldridge Gold.” Transactions of the 46th Annual Quality
Congress. Milwaukee, WI: A.S.Q.C.
Noz Jr., W. C., B. F. Redding, and P. A. Ware. 1983. “The Quality Manager’s Job: Optimize
Cost.” Transactions of the 37th Annual Quality Congress, Milwaukee, WI: A.S.Q.C.
Ponte, A. G. 1992. “You Have Cost of Quality Program, So What!” Transactions of the 46th
Annual Quality Congress, Milwaukee, WI: A.S.Q.C.
Robinson, X. 2000. “Using Cost of Quality with Root Cause Analysis and Corrective Action
Systems.” Transactions of the 54th Annual Quality Congress, Milwaukee, WI: A.S.Q.C.
Shepherd, N. A. 2000. “Driving Organizational Improvement Using Cost of Quality: Success
Factors for Getting Started.” Transactions of the 54th Annual Quality Congress, Milwaukee,
WI: A.S.Q.C.
2
Statistics for Quality
We begin this chapter with an explanation of why the science of statistics is needed
in quality engineering and why quality engineers need to know the fundamentals of
probability and statistics; we then discuss those fundamentals. The chapter is broadly
divided into three parts: the first covers empirical methods for describing populations
that have variability in them; the second discusses the mathematical models, known as
probability distributions, used for describing such populations; and the third covers
methods for inferring the quality of populations from the quality observed in samples.
For those who have already taken classes in engineering statistics, this chapter will
be a good quick review. For those who have no prior preparation in engineering sta-
tistics, this chapter provides the important basics in a nutshell.
2.1 Variability in Populations
exists in every population we deal with in quality work and we have to deal with these
questions in almost every situation where the quality of a population is to be assessed
and verified.
The science of statistics provides answers to these questions. First it provides the
means for describing and quantifying the variability in a population and then offers
methods for “estimating” population quality from sample quality in the presence of
this variability. In addition, statisticians have created several other methods using
principles of statistics that are useful in creating, verifying, and ensuring quality
in products. The major statistical methods useful in quality are listed in the table
below.
These methods are discussed in the appropriate context in different chapters of this
book. To use them effectively, however, a good understanding of the fundamentals of
probability and statistics is essential. A basic discussion of the principles of
probability and statistics is therefore undertaken in this chapter, with the objective of
giving the student sufficient knowledge of the fundamentals to appreciate fully the
statistical tools of quality and use them effectively.
In Section 2.4 of this chapter we discuss the empirical methods wherein the anal-
ysis is based on data gathered from populations, which include some graphical and
some numerical methods. These do not make use of any mathematics beyond simple
arithmetic. In Section 2.5, the discussion focuses on the mathematical approach
to modeling such populations using probability distributions. These are idealized
mathematical functions to describe the shape of population frequency distributions.
The concepts of probability, distribution, mean, and variance of a distribution are
needed here. The three main distributions, the binomial, Poisson and normal distri-
butions, are covered here. The mathematical methods used for estimating population
quality from sample quality, namely confidence intervals and hypothesis testing,
are discussed in Section 2.6. These methods make use of the knowledge of prob-
ability and distributions learned in the previous section. First, a few terms need to
be defined.
2.2 Some Definitions
2.2.1 The Population and a Sample
The term “population” refers to the collection of all items that are of interest in a given
situation. All engines assembled during night shift in January 2016 in a plant, all stu-
dents graduating from a college of engineering in a year, and the net weight of sugar
in one-pound bags filled in a filling line are examples of populations. The term sample
refers to a subset chosen from the population.
Whenever we use the term sample, we really mean a random sample. A sample is
considered a random sample if it is taken in such a manner that each item in the popu-
lation has an equal chance of being included in the sample.
A brief discussion of how a random sample is selected in practice may be in order
here. When we have to select a random sample, it is important to keep in mind the
above definition of a random sample. Suppose we need a sample of 30 from a popula-
tion that has 1000 units. The best way to assure randomness in the sample is to, first,
identify all members of the population with serial numbers: 1 to 1000. Then, 30 ran-
dom numbers are chosen using a random number table or using the random number
generator in a computer or a calculator, in the range of 1 to 1000. Those items in the
population that bear the 30 selected random numbers as their IDs constitute the ran-
dom sample.
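For illustration, here is a minimal Python sketch of this ideal procedure (the function name and the lot size of 1000 are our own choices, not from the text):

```python
import random

def select_random_sample(population_size: int, sample_size: int) -> list[int]:
    # Each unit in the lot carries a serial number 1..population_size.
    # random.sample picks sample_size distinct serial numbers, so every
    # unit has an equal chance of being included in the sample.
    return sorted(random.sample(range(1, population_size + 1), sample_size))

# A random sample of 30 IDs from a lot of 1000 serially numbered units
print(select_random_sample(1000, 30))
```

The units bearing the printed serial numbers would then be pulled from the lot to form the random sample.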
This would be the ideal way to select a random sample, but there are situations
where it may not be possible to use the ideal approach. In some instances, it may not
be practical to identify all the items in a population using serial numbers, or it may not
be possible to remove from the lot those identified to be in the sample. Compromises
may be necessary to accommodate practical problems that would prevent selection of
a truly random sample. In those situations, it is necessary to select the sample units
making sure that the procedure for picking the units will satisfy at least approximately
the principles of randomness—all units in the population having equal chance of
being included in the sample. If the procedure does not follow a fixed pattern, such as
taking all units from the top row, from the corners, or from the center row, it can still
yield reasonably good random samples. Whenever possible, the
rigorous approach that will not compromise the randomness of the sample should be
used, as the randomness of the sample is critical to obtaining satisfactory results from
most of the statistical tools.
When samples are inspected, observations are generated and the collection of these
observations is called data. Because data come from samples, they are referred to as
sample data.
The data are classified into two main categories: measurement data and attribute
data. Measurement data, or variable data as they are sometimes called, come from
measuring a characteristic on a continuous scale, such as a length, a weight, or a
temperature. Attribute data come from counting or classifying units, such as the
number of defectives found in a sample.
As we said earlier, every population has some degree of variability in it. Whether it is the
diameter of bolts, the hardness of piston rings, or the amount of cereal in a box, all popu-
lations have variability in them. It is this variability—the difference from unit to unit—
that is the cause for poor quality, and therefore needs to be measured and minimized
to achieve quality in products. We will see several examples, in this chapter and later
chapters, that will illustrate the idea that excess variability is the cause of poor quality and
waste. One simple example is given now to show how excessive variability causes poor
quality and waste, while a smaller variability results in good quality and reduces waste.
Example 2.1
Figure 2.1a shows the plot of fill-weights in twenty 20-lb. boxes of nails filled in
an automatic filling line at a nail manufacturer. The graph shows that the average
Figure 2.1 (a, b) Net weight of nails in twenty 20-lb. boxes from two filling lines: (a) Line A; (b) Line B.
fill-weight in the boxes is about 20 lb. but some fill-weights fall below the lower
specification, indicating that those boxes are not only not acceptable to the customer
but will also violate state laws. Problems in this filling line, indicated as Line A,
need to be resolved.
Solution
In situations like in Line A, the usual response from a line supervisor will be to
raise the dial setting so that all fill-weights will be increased. Even as this would
bring those below the specification above it, it will also increase the fill-weights in
boxes that are already sufficiently full, resulting in the manufacturer giving away
more nails than necessary. On the other hand, if someone can recognize the excess
variability among the fill-weights, discover and eliminate the sources of the excess
variability, and make the fill-weights more uniform, then the fill-weights below
specification can be avoided without the need for overfilling those that are already
full. There will be savings for the manufacturer, and the customer will not receive
any box with a fill-weight less than the specification. The graph of Line B shown in
Figure 2.1b is, in fact, the graph of fill-weights of Line A after such steps were taken
to reduce variability. A leaky shutter in one of the filling chutes was responsible
for most of the variability in the filling in Line A. The leaky shutter was allowing an
uncontrolled amount of nails through, sometimes more and sometimes less, when it was
supposed to be shut.
Incidentally, the graphics of this example show one way to describe or obtain a
visual “reading” of the variability in a population. We will see a few more methods
of describing variability in populations.
The frequency distribution is the tool used to describe the variability in a population. It
shows how the units in a population are distributed over the range of possible values.
A frequency distribution can be drawn for an entire population by taking a mea-
surement on each unit of the population, though this will take enormous time and
resources. So, frequency distributions are usually made from data obtained from a
sample of the population. Such a frequency distribution from sample data is called a
histogram. The histogram can be taken to represent the frequency distribution of the
whole population if the sample had been taken randomly and the sample size had been
large (≥50). The method of drawing a histogram is described next.
2.4.1.1 The Histogram The histogram is a very useful tool in the tool box of a quality
engineer, as it helps in understanding the nature of the variability in a population.
Since many quality problems relating to populations of products arise from excessive
variability in process parameters and product characteristics, they can be resolved by
studying the histogram made for them. Many computer programs can prepare a
histogram given the data; however, an understanding of how the histogram is made is
still valuable. Briefly, the steps are: (1) collect the data and count the number of
observations n; (2) find the range of the data, choose the number of cells using
k = 1 + 3.3 log10(n) as a guide, and adjust the cell width to a convenient value;
(3) set the cell limits; and (4) tally the observations into the cells and draw bars
whose heights show the frequency, or the percentage, of observations in each cell.
Example 2.2
The data below represent the amount of deodorant, in grams, in aerosol cans filled
in a filling line of a packaging firm. Draw the histogram of the data.
Solution
Step 1: The data are in Table 2.1. n = 100.
Step 2: Largest value = 346. Smallest value = 175. Range = 346 – 175 = 171.
Using the formula for the number of cells: k = 1 + 3.3 log10(100) = 7.6 ≈ 8.
Next, cell width = 171/8 = 21.375. This is an inconvenient cell width, so we
will make the cell width 20 and readjust the number of cells to k = 10.
Steps 3 and 4: Table 2.2 shows the cell limits and the tally of the data. Figure 2.2
shows the histogram. In this histogram, the bars represent the percentage of
data that lie in each of the cells. Histograms are also drawn with the bars rep-
resenting the frequency of occurrence or number of values falling in each cell.
Both histograms will look alike except for the scale on the y-axis.
Table 2.1 Data on Fill-Weight in Deodorant Cans in Grams
265 197 346 280 265 200 221 265 261 278 234 265 187 258 235 269 265 253 254 280
205 286 317 242 254 235 175 262 248 250 299 214 264 267 283 235 272 287 274 269
263 274 242 260 281 246 248 271 260 265 268 267 300 250 260 276 334 280 250 257
307 243 258 321 294 328 263 245 274 270 260 281 208 299 308 264 280 274 278 210
220 231 276 228 223 296 231 301 337 298 215 318 271 293 277 290 283 258 275 251
Table 2.2 Cell Limits, Tally, and Frequencies of the Fill-Weight Data in Table 2.1
CELL NO.  CELL LIMITS  TALLY                                FREQUENCY  %    CUMULATIVE %
1         161–180      |                                    1          1    1
2         181–200      |||                                  3          3    4
3         201–220      ||||||                               6          6    10
4         221–240      |||||||||                            9          9    19
5         241–260      ||||||||||||||||||||||               22         22   41
6         261–280      ||||||||||||||||||||||||||||||||||   34         34   75
7         281–300      |||||||||||||||                      15         15   90
8         301–320      |||||                                5          5    95
9         321–340      ||||                                 4          4    99
10        341–360      |                                    1          1    100
          Total                                             100        100
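As a cross-check on the hand tally, a short Python sketch (assuming numpy is available; the variable names are ours) tallies the data of Table 2.1 into the same cells as Table 2.2 and prints the frequencies and cumulative percentages:

```python
import math
import numpy as np

# Fill-weights (g) of the 100 deodorant cans in Table 2.1
data = np.array([
    265, 197, 346, 280, 265, 200, 221, 265, 261, 278, 234, 265, 187, 258, 235, 269, 265, 253, 254, 280,
    205, 286, 317, 242, 254, 235, 175, 262, 248, 250, 299, 214, 264, 267, 283, 235, 272, 287, 274, 269,
    263, 274, 242, 260, 281, 246, 248, 271, 260, 265, 268, 267, 300, 250, 260, 276, 334, 280, 250, 257,
    307, 243, 258, 321, 294, 328, 263, 245, 274, 270, 260, 281, 208, 299, 308, 264, 280, 274, 278, 210,
    220, 231, 276, 228, 223, 296, 231, 301, 337, 298, 215, 318, 271, 293, 277, 290, 283, 258, 275, 251,
])

n = len(data)
k_guide = 1 + 3.3 * math.log10(n)      # guideline for the number of cells: about 7.6
width = 20                              # readjusted to a convenient cell width
edges = np.arange(161, 362, width)      # cell limits 161-180, 181-200, ..., 341-360

counts, _ = np.histogram(data, bins=edges)
percent = 100 * counts / n
cumulative = np.cumsum(percent)

for lo, c, p, cp in zip(edges[:-1], counts, percent, cumulative):
    print(f"{lo}-{lo + width - 1}: freq={c:3d}  %={p:5.1f}  cum%={cp:6.1f}")
```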
Figure 2.2 Histogram of the fill-weight data in Table 2.1 (percentage of observations vs. fill-weight in grams).
2.4.1.2 The Cumulative Frequency Distribution Another graph of the data that provides
some very useful information is the cumulative frequency distribution. This is a graph
of the cumulative percentages of frequencies shown in Table 2.2. Figure 2.3 shows
the percentage cumulative frequency distribution of the data in Table 2.1. With this
graph, we can readily read some useful information. For example, from Figure 2.3, we
can read that approximately 40% of fill-weights are below 250 g and about 90% of the
fill-weights are below 290 g. The value below which 25% of the fill-weights fall is about 240,
and the value above which 10% of the fill-weights lie is about 300. (To facilitate making these
readings, imagine a smooth curve passing through the mid points of the bars repre-
senting cumulative percent frequencies.) Note that although the graph is drawn from
sample data, the conclusions that we draw can be stated as conclusions for the entire
population, as the size of the sample is large.
Figure 2.3 Cumulative percentage frequency distribution of the fill-weight data (cumulative % vs. fill-weight in grams).
Quite a bit can be learned about a population by studying the histogram drawn
for it. The histogram gives an idea as to where on the x-axis, or around which central
point, the population is distributed. It also gives an idea of the dispersion, or vari-
ability, in the population. By comparing the histogram with the applicable specifica-
tions, questions such as “Are all the measurements in the population falling within
specification?” or “Is the variability within an acceptable level?” can be answered.
Furthermore, the histogram can point to directions for solving any problems that may
exist. Two examples are given below to illustrate how making a histogram can help in
discovering causes of problems in a population and how corrective action can be taken
to solve the problems.
Example 2.3a
Key-ways milled on shafts by a milling machine fitted with two cutting heads were
frequently found to be out of specification for width, causing calibration problems
downstream. A study was made to discover the cause.
Solution
A sample of 100 shafts was taken from the hopper of one of the milling machines
and the widths were measured. The histogram made from these measurements is
shown in Figure 2.4, where the lower and upper specification limits are indicated as
LSL and USL, respectively. The histogram typifies a bimodal distribution, which is
a frequency distribution with two modes, or two peaks, indicating that two differ-
ent distributions are mixed together in this population—one almost all within the
specification, and another almost all outside the specification.
Figure 2.4 Histogram of key-way widths, showing a bimodal distribution relative to the LSL and USL.
It was easy to guess that the difference in the two distributions might be caused
by the difference in the way that the two cutting heads were cutting the key-ways.
Fifty shafts were taken separately from each of the two cutting heads, and sepa-
rate histograms were drawn. It was easy then to see which cutter was producing
key-ways distributed within specification and which was producing the distribution
that was outside the specification. Once this was explained to the mechanic, he
understood the situation and set out to discover the physical differences between the
two cutter heads. The difference was found; the deviant cutter was adjusted follow-
ing the example of the good cutter. After adjustment, the key-way widths were all
within specification, and the calibration problem was almost eliminated.
Incidentally, we note from the above example that the histogram, like many other
statistical methods, only points to the direction; knowledge of the technology and the
mechanics of the process is necessary in order to propose theories for possible causes,
test the theories, discover solutions, and implement them.
Example 2.3b
In an iron foundry that makes large castings, molten iron is carried from the fur-
nace where iron is melted to where it is poured into molds over a distance of about
150 yards. This travel, as well as some time spent on checking iron chemistry and
de-slagging, causes cooling of the molten iron. The process engineer had estimated
the temperature drop due to this cooling to be about 20°F and, accordingly, had
chosen the target temperature at which the iron is to be tapped out at the furnace as
2570°F so that the iron would be at the required temperature of 2550°F at pouring.
The final castings, however, showed burn-in defects that were attributed to the iron
being too hot at pouring. The castings with such defects were creating problems in
machining and were absolutely not acceptable to the customer.
Solution
The temperatures of iron at the tap-out at the furnace and at pouring the molds were
taken over several castings, and histograms were made as shown in Figure 2.5a and
2.5b. Figure 2.5b shows that many pouring temperatures are out of specification on
the high side, confirming the fact that many molds were poured too hot. The center
of the pouring temperatures is at about 2560°F. The center of the tap-out tempera-
tures in Figure 2.5a is about 2570°F, indicating that the average drop in temperature
between tap-out and pouring is only about 10°F. The process engineer had assumed
it to be 20°F. Since the actual drop in temperature based on data is smaller than
expected, the iron remained hotter when poured. When this was brought to the
attention of the process engineer, the tap-out target was adjusted down to 2560°F,
and the pouring temperatures started falling within the specification. The histo-
gram of pouring temperatures also showed more variability in the actual pouring
temperatures than was allowed by the specifications. This was addressed by taking
steps to make the tap-out temperatures more consistent and by reducing the variability
in waiting times, some of which were easily avoidable. As a consequence, the
casting defects attributed to hot iron were almost eliminated.
Figure 2.5 (a) Tap-out temperature of iron. (b) Pouring temperature of iron.
This is another example where the histograms not only helped in proposing theories
and discovering the causes of a problem, but also helped in communicating informa-
tion to the people concerned in a clear manner so that everyone could understand the
problem and agree on solutions.
[A disclaimer: This, as well as many other examples and exercises in this book, is
taken from real processes, and the scenarios described represent true situations. Some
of the names of product characteristics and process parameters, however, have been
changed so as to protect the identification of the source of the data. The numbers
that represent targets and specifications have also been altered to protect proprietary
information.]
When histograms are prepared to study the distributions of populations, several dif-
ferent types of distributions with various shapes are encountered. A few of these types
are shown in Figure 2.6. Of all the different types of distributions, the one that is most
commonly encountered has a symmetrical bell shape and is called the “normal dis-
tribution.” Many measurements that we come across in quality work, such as length,
weight, and strength, are known to follow this distribution. When the distribution of
a population has this shape, just two measures—the average and the standard devia-
tion of the distribution—are adequate to describe the entire distribution. (How it is
so will be explained later while discussing the normal distribution.) The average and
the standard deviation are calculated from sample data using the following formulas,
where Xi is an observation in the sample and n is the sample size.
Average: $\bar{X} = \dfrac{\sum_i X_i}{n}$

Standard deviation: $S = \sqrt{\dfrac{\sum_i (X_i - \bar{X})^2}{n - 1}}$
The average represents the location of the distribution in the x-axis, or the center
point around which the data are distributed. The standard deviation represents the
amount of dispersion or variability in the data about the center. Because these mea-
sures are computed from sample data, X is called the “sample average” and S is called
the “sample standard deviation.”
Figure 2.6 Distributions of various shapes, including the normal distribution.
When the sample is large, such as n ≥ 50, the sample average and sample standard
deviation can be considered the average and standard deviation of the entire popula-
tion from which the sample was taken.
If the sample is not large, then the sample measures should not be equated to population
measures, and the methods of inference (confidence intervals and hypothesis testing),
discussed later in this chapter, must be used to "estimate" the population measures.
2.4.2.1 Calculating the Average and Standard Deviation The average X is simply the
arithmetic average or sum of all observations in the data divided by the number of
observations. For the data shown in Table 2.1, X = 264.05. The standard deviation
is the square root of the “average” of the squared deviations of the individual values
from the average of the data. Note that the formula uses (n − 1) rather than n in the
denominator for finding the “average” of the squared deviations. This is done for a
good reason, which will be explained later when discussing the unbiasedness as a
desirable property of a good estimator. The standard deviation can also be calculated
by using another formula, which is mathematically equivalent to the one given above:
$S = \sqrt{\dfrac{n \sum X_i^2 - \left(\sum X_i\right)^2}{n(n - 1)}}$
This formula involves a bit less arithmetic work and is said to be computationally
more efficient. Either formula gives the same value for the standard deviation of the
fill-weights of the deodorant cans.
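Both formulas can be verified quickly in Python; this is a minimal sketch (the function names are our own), and the two functions return the same value for any data set:

```python
import math

def std_dev_definitional(x):
    # S = sqrt( sum of (xi - xbar)^2 / (n - 1) )
    n = len(x)
    xbar = sum(x) / n
    return math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

def std_dev_shortcut(x):
    # S = sqrt( (n * sum(xi^2) - (sum xi)^2) / (n * (n - 1)) )
    n = len(x)
    sx = sum(x)
    sx2 = sum(xi * xi for xi in x)
    return math.sqrt((n * sx2 - sx ** 2) / (n * (n - 1)))

sample = [265, 197, 346, 280, 265, 200, 221, 265, 261, 278]  # first ten values of Table 2.1
print(std_dev_definitional(sample), std_dev_shortcut(sample))
```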
Another simple graphical tool for describing data is the stem-and-leaf (S&L) diagram,
in which each observation is split into a "stem" (its leading digits) and a "leaf" (its
trailing digit); the leaves are the values branching from the stems. For example, in
Figure 2.7, observations 231 and 234 are included in the branch with stem 23 and are
denoted by leaves 1 and 4, respectively. Figure 2.7 shows the S&L diagram drawn for
the data in Table 2.1 using the Minitab software.
Note that the data points within a stem are arranged in an ascending order and the
diagram gives the cumulative counts of observations against each stem at the leftmost
column. The cumulative counts in the stems with smaller values than the central stem
should be interpreted a bit differently from those that have larger values than the central
stem. For example, there are six values in stems 20 and below; that is, there are six values
≤208. Similarly, there are 26 values in stems 24 and below, which means that there are
26 values ≤248. On the other hand, there are five values in stems 32 and above; that is,
there are five values ≥321. Similarly, there are 18 values in stems 29 and above, which
means that there are 18 values ≥290. Finally, the count given against the central cell,
in parenthesis, is not a cumulative count; it is the number of counts in the central stem.
The S&L diagram drawn as above helps in identifying the ordered rank of values
in the data, the rank of a value when the data are ordered in an ascending order and
makes it easy to compute the percentiles of the distribution. A percentile is the value
below which a certain percentage of the data lies. We denote the p-th percentile by Xp
to indicate that p% of the data lies below Xp. Thus, X 25 and X75 represent the 25th and
75th percentiles, respectively, below which 25% and 75% of the data lie. The following
names and notations are also used for these percentiles:
X 25 is called the “first quartile” and is denoted as Q1.
X50 is called the “second quartile” or “median” and is denoted as Q2 or X .
X75 is called the “third quartile” and is denoted as Q3.
The percentiles are computed as follows. Suppose there are n observations in the
data and the p-th percentile is required. The data are first arranged in an ascending
order as in the S&L diagram, and the value at the (n + 1)p/100th location is identi-
fied. If (n + 1)p/100 is an integer, then the percentile is one of the values in the data.
If (n + 1)p/100 is a fraction, then the percentile is calculated as the average of the two
values straddling the percentile. If, for example, we want the 30th percentile in the
S&L diagram above, (n + 1)p/100 = 101 × 30/100 = 30.3. Then the 30th percentile is the
average of the 30th and 31st ordered values in the data, which is (251 + 253)/2 = 252. If
we needed the 25th percentile, (n + 1)p/100 = 101 × 25/100 = 25.25. Then the 25th percentile
is (248 + 248)/2 = 248. Similarly, to find the 75th percentile, (n + 1)75/100 = 75.75, so
the 75th percentile is the average of the 75th and the 76th ordered values and is equal
to (280 + 280)/2 = 280, i.e., X75 = 280.
Sometimes, a more elaborate procedure is used (as the Minitab software does) to
find the location of the percentile by interpolating between the values that straddle
the percentile as shown below.
Suppose we want the 30th percentile of the data in the above example, where n = 100.
Then, (n + 1)30/100 = 30.3, so the 30th percentile lies between the 30th and the 31st
ordered values in the data, three-tenths of the way above the 30th value. The 30th and 31st
ordered values in the example are 251 and 253, respectively. Hence, the 30th percentile
is 251 + 0.3(253 − 251) = 251.6. Suppose we want the 25th percentile of the data in the above example. Then,
(n + 1)25/100 = 25.25. Therefore, the 25th percentile lies between the 25th and the 26th
ordered values, one-quarter of the way above the 25th value. In this example, the 25th
and the 26th ordered values are both 248, so X25 = 248.
The simpler method using simple average is often sufficient for practical purposes.
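The two rules can be written as short Python functions; this is a sketch with names of our own choosing, where percentile_simple follows the simple averaging rule and percentile_interpolated follows the interpolation rule used by statistical software:

```python
def percentile_simple(values, p):
    # Position (n + 1) * p / 100 in the ordered data; if it is not a whole
    # number, average the two values straddling that position.
    x = sorted(values)
    n = len(x)
    pos = (n + 1) * p / 100          # 1-based position
    lo = int(pos)
    if lo < 1:
        return x[0]
    if lo >= n:
        return x[-1]
    if pos == lo:
        return x[lo - 1]
    return (x[lo - 1] + x[lo]) / 2

def percentile_interpolated(values, p):
    # Go the fractional part of the way from the lower straddling value
    # toward the upper one.
    x = sorted(values)
    n = len(x)
    pos = (n + 1) * p / 100
    lo = int(pos)
    if lo < 1:
        return x[0]
    if lo >= n:
        return x[-1]
    frac = pos - lo
    return x[lo - 1] + frac * (x[lo] - x[lo - 1])
```

For the 100 fill-weights of Table 2.1, percentile_simple gives 248, 265, and 280 for the 25th, 50th, and 75th percentiles, matching the values quoted above.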
Figure 2.8 (a) A box-and-whisker plot. (b) Box-and-whisker plots of data from pouring and tap-out temperatures.
The plot made by Minitab shows the existence of an extreme value, an “outlier,”
indicated by a plot (*) separated from the main B&W diagram in the pouring-plot.
An outlier is a value that lies beyond three standard deviations from the median.
Besides the average X and standard deviation S, a few other measures obtained from
sample data are also used to describe the location and dispersion of data.
Although the median and mode, and to a lesser extent the midrange, are used to
indicate the location of a distribution, the average X is the most commonly used mea-
sure of location for distributions encountered in quality engineering.
2.4.4.2 Measures of Dispersion The range, denoted by R, is the difference between the
largest value and the smallest value in the data:
R = X max − X min
The interquartile range (IQR) is the difference between the 75th and the 25th percentiles: IQR = X75 − X25 = (Q3 − Q1)
Although the standard deviation S is the most often used measure of variability in
data, the range R is a popular measure of variability in quality engineering because
of the simplicity of its calculation. It also has an appeal on the shop floor, because it is
easy to explain and understand how it represents the variability in the data. The
IQR represents variability in the data in the same way as the range does, but the IQR
eliminates the influence of extreme values, or outliers, by trimming the data at the
two ends.
For the example data in Table 2.1:
Median = 265
Mode = 265 (From the S&L diagram in Figure 2.7)
Midrange = (346 + 175)/2 = 260.5
R = 346 – 175 = 171
IQR = 280 – 248 = 32
Numerical Measures
1. Average: $\bar{X} = \dfrac{\sum_i X_i}{n}$
2. Standard deviation: $S = \sqrt{\dfrac{\sum_i (X_i - \bar{X})^2}{n - 1}}$
3. Range: R = Xmax − Xmin
4. Percentiles (Xp)
5. The median (X50)
6. The mode (the value with the largest frequency of occurrence)
7. Midrange: (Xmax + Xmin)/2
8. The interquartile range: IQR = X75 − X25
We discussed how these quantities are calculated and how they can be useful.
2.1 The following represents the game time, in minutes, for the Major League
Baseball games played in the United States between March 30 and April
9, 2003, sorted by league. Compare the times between the two leagues in
terms of the average and the variability. A TV producer wants to know, for
purposes of scheduling her crew, what the chances are that a game will end
within 180 min. in each of the leagues. She also wants to know the time before
which 95% of the games will end in each league. (Data based on report by
Garrett 2003.)
American League
174 125 130 155 170 225 159 315 143 147 160 173 178 178 194 120 202
143 146 149 179 181 183 185 175 161 169 179 188 193 200 156 161 197
145 162 167 172 179 181 134 174 203 222 174 188 176 231 176 131 126
National League
164 171 174 180 183 197 209 149 150 153 214 146 142 155 136 150 154 170
155 162 164 173 179 208 156 164 166 169 161 169 172 177 201 278 135 172
171 173 188 133 162 163 163 190 195 197 169 172 178 209 182 237 151 178
2.2 A project for improving the whiteness of a PVC resin produced the following
two sets of data, one taken before and the other taken after the improvement
project. The whiteness numbers came from samples taken from bags of PVC
at the end of the line. Compute the average, median, and other quantities
used to describe the two data sets and make the B&W plots of both data sets.
If larger values for whiteness show improvement, has the whiteness really
improved? (Data based on report by Guyer and Lalwani 1994.)
Before Improvement
149 147 148 155 163 154 152 151 155 166 153 153 151 152 174 154 152 150
154 150 149 168 153 153 152 151 153 152 149 153 155 151 168 165 151 153
154 153 149 148 155 154 150 149 150 152 152 153 155 162 151 152 149 153
151 154 163 152 151 154 155 149 170 155 154 155 152 155 150 152 154 151
After Improvement
155 166 168 167 169 167 168 170 166 158 165 157 166 168 165 158 169 170
170 167 164 166 171 156 169 167 167 158 170 167 167 169 170 168 168 165
165 165 166 169 165 170 166 167 165 167 170 168 164 167 165 164 169 165
166 169 170 170 158 167 166 166 168 169 168 155 151 169 165 165 168 168
2.3 Each observation in the following set of data represents the weight of 15 candy
bars in grams recorded on a candy production line. The lower and upper speci-
fications for the weights are 1872 and 1891 g, respectively. Draw, both manu-
ally and using computer software, a histogram, an S&L diagram, and a B&W
plot for the data. Calculate, both manually and using computer software, the
X , S, median, mode, range, and IQR. Is the population of candy bar weights
in specification? (Data based on report by Bilgin and Frey 1999.)
1894 1890 1893 1892 1889 1900 1889 1891 1901 1891 1889 1890 1881 1889
1895 1891 1891 1891 1889 1901 1890 1891 1881 1896 1890 1889 1893 1890
1883 1895 1890 1890 1889 1891 1886 1890 1900 1890 1889 1891 1880 1888
1893 1893 1898 1889 1890 1893 1889 1891 1889 1895 1891 1890 1891 1894
1892 1891 1890 1890 1891 1897 1892 1890 1890 1892 1892 1891 1878 1891
1894 1890 1896 1890 1891 1899 1891 1891 1891 1892 1890 1890 1891 1892
1890 1895 1876 1889 1891 1891 1892 1890 1890 1891 1897 1892 1891 1890
2.4 The following table contains data on the number of cycles a mold-spring,
which supports molds in a forge press, lasts before breaking. Prepare a histo-
gram, calculate the average and standard deviation, and find the proportion
of springs failing before the average. (Based on McGinty 2000.)
The histogram gives a picture of where a population is located on the x-axis and how
much it is dispersed. The numerical measures, the average, and the standard deviation
give a quantitative evaluation of the location and dispersion of a population. Often,
when we have to make predictions about the proportion of a population below a given
value, above a given value, or between two given values, it is convenient to use a math-
ematical model to represent the form of the distribution. There are several such models
available for this purpose. Such models are idealized mathematical functions to repre-
sent the graph of the frequency distribution. These models are called the “probability
distribution functions.” Many such functions have been developed by mathematicians
to model various forms of frequency distributions encountered in the real world. The
normal, Poisson, binomial, exponential, gamma, and Weibull are some of the names
given to such mathematical models. Many of these are useful as models for popula-
tions encountered in quality engineering. An understanding of what these models are,
which model would be suitable to represent what population or process, how they are
used to make predictions about process conditions, and how they are helpful in solving
quality-related problems are the topics discussed next.
To discuss these models of distributions, we need to define several terms, including
probability, random variable, and probability distribution function. We begin with the
definition of probability.
2.5.1 Probability
Whenever we have to deal with events that have uncertainty associated with their
occurrence, we need to use probability to express the uncertainty. For example, will
the stock purchase I made this morning be a success? When I am speeding on the
highway will I be stopped by the cop? There is a chance the stock purchase will be a
success and there is a chance it will not be. Similarly there is a chance or risk that I
will be stopped by the cop and there is a chance I will not be. The probability defini-
tion helps in quantifying this chance or risk by assigning a number to the chance or
risk in the occurrence of the event. In order to have a consistent understanding of what
probability means, we need to follow the definition the mathematicians have given to
it and follow their methods of calculating the probability.
Before the term “probability” can be defined, a few preliminary definitions are
needed. An experiment is a clearly defined procedure that results in observations. A
single performance of an experiment is called a trial, and each trial results in an out-
come, or observation. The experiments we deal with here are called random experi-
ments, because the outcome in any one trial of the experiment cannot be predicted with
certainty, but all possible outcomes of the experiment are known. The set of all pos-
sible outcomes of a random experiment is called the sample space and is denoted by S.
An event is a subset of the sample space such that all the elements in it share a
common property. An event can be specified by the common property or by enu-
merating all the elements in it. The events are labeled using the capital letters A,
B, C, and so on. Given below are some examples of events defined in some sample
spaces.
2.5.1.1 Definition of Probability The term probability is always used with regards to
the occurrence of an event. The probability of an event is a number between 0 and 1
that indicates the likelihood of occurrence of the event when the associated experi-
ment is performed. The probability of an event that cannot occur is 0, and the prob-
ability of an event that is certain to occur is 1.0.
We use the notation P(A) to denote the probability of the event A. Thus, the defini-
tion of probability in notations is:
0 ≤ P( A) ≤ 1
P(Φ) = 0 (where Φ denotes the null event, i.e., the event that cannot occur.)
P(S) = 1 (because when an experiment is performed, any one of the outcomes in
the sample space must occur.)
This is just the definition of the term probability. Next, we will see how to compute
the probability of events when experiments are performed.
2.5.1.2 Computing the Probability of an Event There are two basic methods for comput-
ing the probability of events:
1. Method of analysis
2. Method of relative frequency
The first method is used when we know, from an understanding of the experiment,
the relative chance of occurrence of the possible outcomes. The second method is
used when we do not have the information on the relative chance of occurrence of the
outcomes. Use of these methods is illustrated in the examples below. A third method,
which involves subjectively assigning probability values to events based on the experi-
menter’s prior experience with such events, is also available. Such probabilities are
used in a branch of statistics known as “Bayesian statistics.” Bayesian methods are not
discussed in this book.
Example 2.4
A fair coin is tossed once. What is the probability that a head will show?
Solution
Step 1: Formulate the sample space:
There are only two possible outcomes: S: {H, T}
Step 2: Assign weights to each of the elements in the sample space. Let wH and
wT be the weights assigned to the outcomes H and T. Since it is a fair coin,
wH = wT, and since wH + wT = 1.0, wH = wT = 0.5.
Step 3: Calculate probability of the event H.
P(H) = the sum of the weights of the elements in H.
Since there is only one element in the event {H}, P(H) = wH = 0.5
Although in this example the solution would have been considered obvi-
ous, the example shows how in fact the method of analysis has been used in
arriving at the answer making use of the information that the coin is a fair
coin and the outcomes are equally likely.
Example 2.5
A card is drawn from a deck. What is the probability that the card has a number
and not a picture?
Solution
Step 1: Formulate sample space:
S = {hearts number, hearts picture, clubs number, clubs picture, diamond
number, diamond picture, spade number, spade picture}
Step 2: Assign weights to the elements. We could assign the following weights to
the outcomes from our knowledge of the relative occurrence of the outcomes:
{9/52, 4/52, 9/52, 4/52, 9/52, 4/52, 9/52, 4/52}
S = {HN, HP, CN, CP, DN, DP, SN, SP}
Step 3: Calculate P(A):
Event A: {the card is a number} = {HN, CN, DN, SN}
Therefore, P(A) = 9/52 + 9/52 + 9/52 + 9/52 = 36/52 = 9/13.
Example 2.6
A loaded die has even numbers that are twice as likely to show as the odd numbers.
What is the probability that the number that shows is less than 5 when such a die
is thrown?
Solution
Step 1: S = {1, 2, 3, 4, 5, 6}
Step 2: Let w be the weight that represents the chance of an odd number. Then,
2w is the weight of an even number. The total for the weights of all the ele-
ments is 9w, and this should be equated to 1.0, which means that w = 1/9.
Therefore, the weights for the six outcomes are:
{1/ 9, 2 / 9, 1/ 9, 2 / 9, 1/ 9, 2 / 9}
Step 3: Event A: (Number is less than 5) = {1, 2, 3, 4}:
P ( A ) = 1/ 9 + 2 / 9 + 1/ 9 + 2 / 9 = 6 / 9 = 2 / 3
It is easy to see how this result is true; it follows from the analysis method above. If
there are k outcomes in the sample space, all equally likely, then each element has a
weight equal to 1/k. If the event A contains a of these elements, then P(A) = a/k, which
is the ratio of the number of elements in A to the number of elements in S.
Example 2.7
When a fair coin is tossed three times, what is the probability that all tosses will
show the same face?
Solution
The sample space S: {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}.
If A is the event that all tosses have the same face, then A = {HHH, TTT}.
Are the elements in S all equally likely? Yes. Just take the example of tossing a
fair coin two times, which has the sample space S = {HH, HT, TH, TT}. A little bit
of reflection will show that if the two outcomes in one trial are equally likely, then
the four outcomes of the two trials are also all equally likely. This can be extended
to simultaneous performance of any number of trials, so the eight outcomes in the
above sample space are all equally likely. Therefore, P(A) = 2/8 = 1/4.
Note: For any experiment, the sample space can be written in a few different ways.
Whenever possible, it is advisable to write the sample space as made up of elements
that are all equally likely. The probability of events in that sample space can then be
computed using this simple formula.
Suppose an experiment is repeated N times and the event A occurs n of those times.
The relative frequency of A is

$f_A = \dfrac{n}{N}$. Then, $P(A) = \lim_{N \to \infty} f_A$
Example 2.8
A coin that is not a fair coin (the sides are loaded so that they are not equally likely to
occur) is tossed once. What is the probability that a head will show?
Solution
There is not enough information to use the analysis method. We have to do an
experiment.
Step 1: Toss the coin a large number of times, say, 60 times. Find the number of
times head showed. Suppose head showed 20 times.
Step 2: Calculate the relative frequency for head: f H = 20/60 = 1/3
S tatis ti c s f o r Q ua lit y 69
We can use this relative frequency of H as P(H), provided the number of trials in
the experiment is large. A question arises: how large is large?
When an experiment is repeated a number of times, the relative frequency of an
event tends to become a constant after a certain number of repetitions. When the
repetitions begin yielding constant relative frequencies, we say that the number of
trials is large enough. Then, the relative frequency can be used as the probability of
the event.
Yes, in this case, the number of trials 60 seems large enough (based on a stat-
istician’s experience), and the relative frequency can be used as the probability.
Therefore, P(H) = 1/3.
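A short simulation in Python illustrates this settling down; the sketch assumes, purely for illustration, that the coin's true probability of a head is 1/3:

```python
import random

def relative_frequency_of_heads(p_head, trials):
    # Toss a loaded coin `trials` times; return the fraction of heads observed.
    heads = sum(random.random() < p_head for _ in range(trials))
    return heads / trials

# The relative frequency wanders for small N but settles near 1/3 as N grows.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, round(relative_frequency_of_heads(1 / 3, n), 3))
```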
Example 2.9
A group of 100 students from a college of engineering was classified by gender and by
where they come from, as shown in the table below.

         FROM ILLINOIS OUTSIDE CHICAGO   FROM CHICAGO   FROM OTHER AREAS   TOTAL
Boys     13                              55             10                 78
Girls    10                              10             2                  22
Total    23                              65             12                 100
What is the probability that a student (picked at random) from this college comes
from Chicago? What is the probability that a randomly selected student is a girl and
is from Illinois outside Chicago?
Solution
Here, the experiment has been performed 100 times, and the outcomes recorded.
The relative frequency of any event can be calculated out of the 100 trials. For exam-
ple, the relative frequency of a student being from Chicago is 65/100. Here, the
number of trials (N = 100) can be considered large enough for the relative frequency
to be used as the probability of the events. Therefore, P(a student picked at random is
from Chicago) = 65/100 = 0.65, and P(a randomly selected student is a girl and is from
Illinois outside Chicago) = 10/100 = 0.10.
The two methods described above would help in computing the probability of sim-
ple events of the kind seen in the examples. However, we often have to deal with more
complex events. Consider for example this problem:
A box contains 20 pencils, of which 4 are defective and the rest are good. If a sample
of 4 pencils is drawn from this box, what is the probability that there is no more than
one defective in the sample? The methods discussed above are not adequate to find
an answer to this; we need to know some additional theorems in order to be able to
compute the probability of such complex events.
2.5.1.3 Theorems on Probability
2.5.1.3.1 Addition Theorem of Probability If A and B are any two events in a sample
space, (i.e., A and B are two possible events when an experiment is performed), then
the probability of A or B occurring:
P ( A ∪ B ) = P ( A ) + P (B ) − P ( A ∩ B )
(Sketch: Venn diagram showing events A and B inside the sample space S.)
To see how the theorem is true, consider the following. The Venn diagram shown
in the sketch above shows the relationship of the events with respect to each other and
to the sample space. With reference to the diagram, P(A ∪ B) is given by the sum of
probabilities of elements inside A or B (cross-hatched area). We can get the P(A ∪ B)
from the sum P(A) + P(B), but this sum includes the probability of the elements in
(A ∩ B) twice. Therefore, if we subtract P(A ∩ B) once from P(A) + P(B), we will get
P(A ∪ B).
Corollar y
If A and B are mutually exclusive—that is, there are no common elements between
them—then P(A ∩ B) = 0, and P(A ∪ B) = P(A) + P(B).
Example 2.10
When a pair of dice is thrown, what is the probability that numbers 5 or 6 will show
on either of the dice?
Solution
We first construct the sample space of the experiment: the 36 equally likely pairs
(1,1), (1,2), …, (6,6). Let A be the event that a 5 shows on either die and B the event
that a 6 shows on either die. Each event contains 11 outcomes, and A ∩ B = {(5,6), (6,5)}.

P(A) = P(B) = 11/36, P(A ∩ B) = 2/36

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 11/36 + 11/36 − 2/36 = 20/36 = 5/9
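The result can be checked by enumerating the 36 equally likely outcomes in Python, a sketch that applies the addition theorem directly (the names are our own):

```python
from fractions import Fraction
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))   # the 36 outcomes of two dice

def prob(event):
    return Fraction(len(event), len(sample_space))

A = [s for s in sample_space if 5 in s]                # a 5 shows on either die
B = [s for s in sample_space if 6 in s]                # a 6 shows on either die
A_and_B = [s for s in A if s in B]                     # both a 5 and a 6 show

print(prob(A) + prob(B) - prob(A_and_B))               # 5/9, by the addition theorem
print(prob([s for s in sample_space if 5 in s or 6 in s]))   # 5/9, by direct count
```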
Example 2.11
When two dice are thrown, what is the probability that the total is less than 4 or
that one of the numbers is 4?
Solution
Let A be the event that the total is less than 4, so that A = {(1,1), (1,2), (2,1)} and
P(A) = 3/36. Let B be the event that one of the numbers is 4; B contains 11 outcomes,
so P(B) = 11/36. A and B are mutually exclusive; that is, there are no common elements
between A and B. Hence,

P(A ∪ B) = P(A) + P(B) = 3/36 + 11/36 = 14/36
2.5.1.3.2 The Extension of the Addition Theorem When there are three events—A, B,
and C, as shown in the sketch below—the addition theorem becomes:
(Sketch: Venn diagram of three events A, B, and C in the sample space S.)
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)
Similar extension can be made for four, five, or any number of events; and the
expression on the right-hand side becomes more and more complex. When the events
are all mutually exclusive, however, the extension is simple for any number of events:
If A1, A2, …, Ak are all mutually exclusive events, then:
P(A1 ∪ A2 ∪ … ∪ Ak) = P(A1) + P(A2) + … + P(Ak)
If Ac denotes the complement of the event A, that is, the event that A does not occur, then:

P(Ac) = 1 − P(A)
Example 2.12
When a coin is tossed six times, what is the probability that at least one head appears?
Solution
Let A be the event that at least one head appears. Then Ac is the event that no head
appears, that is, all six tosses show tails, and P(Ac) = (1/2)^6 = 1/64. Therefore,
P(A) = 1 − P(Ac) = 1 − 1/64 = 63/64.
2.5.1.3.4 Theorems on the Joint Occurrence of Events When we have to find the probabil-
ity of the joint occurrence of two events—that is, the probability of A and B occurring
together—we need to use one of the multiplication theorems given below, depending
on whether the events are independent or not. First, we need to define the concept of
independence of events, for which we should start with defining conditional probability.
The conditional probability of A given B, written P(A|B), is the probability that A
occurs given that the event B is known to have occurred. This conditional probability
is obtained as the ratio of the probability of A and B occurring together to the
probability of event B occurring alone. In terms of notation,
$P(A \mid B) = \dfrac{P(A \cap B)}{P(B)}$
Example 2.13
Find the probability that when two dice are thrown, the total equals 6, given that
one of the numbers is 3.
Solution
A
1,1 1,2 1,3 1,4 1,5 1,6
2,1 2,2 2,3 2,4 2,5 2,6
3,1, 3,2 3,3 3,4 3,5 3,6 B
4,1 4,2 4,3 4,4 4,5 4,6
5,1 5,2 5,3 5,4 5,5 5,6
6,1 6,2 6,3 6,4 6,5 6,6
S
A: {the total equals 6}
B: {one of the numbers is 3}

$P(A \mid B) = \dfrac{P(A \cap B)}{P(B)} = \dfrac{1/36}{11/36} = \dfrac{1}{11}$
Notice that P(A ∩ B) and P(B) are calculated with respect to the original sample
space of the experiment. Notice also, that out of the 11 sample points in B, only one
is also in A. Thus, P(A|B) can be obtained as the proportion of sample points in B
that are also in A.
2.5.1.3.6 Independent Events Two events in a sample space are said to be indepen-
dent if the occurrence of one does not affect the probability of occurrence of the other.
Independence of events can be defined using conditional probabilities as follows:
If A and B are such that P(A|B) = P(A), then A and B are said to be independent.
We can show that if P(A|B) = P(A), then P(B|A) = P(B).
Sometimes, the independence of events will be obvious; at other times, indepen-
dence has to be verified. When independence is not obvious, we can use the above
definition to verify if independence exists.
Example 2.14
Toss a pair of dice. Let E1 be the event that the sum of the numbers is 6 and E2 be
the event that the sum is 7. Let F be the event that the first number is 3. Check whether
E1 and F are independent and whether E2 and F are independent.
The events are identified in the sample space shown in the sketch below.
(Sketch: the 36 outcomes of the two dice, with the events E1, E2, and F marked.)
Solution
P(E1 | F) = 1/6, while P(E1) = 5/36

P(E2 | F) = 1/6, and P(E2) = 6/36 = 1/6

Since P(E2 | F) = P(E2), the events E2 and F are independent; since P(E1 | F) ≠ P(E1),
the events E1 and F are not independent.
2.5.1.3.7 The Multiplication Theorems of Probability If A and B are any two events in
a sample space, then:
P( A ∩ B ) = P( A | B )P( B ) = P( B | A )P( A )
This result follows from the definition of conditional probabilities P(A|B) and P(B|A).
If A and B are independent, P(A|B) = P(A), therefore:
P ( A ∩ B ) = P ( A )P ( B )
This follows from the definition of independence. The following two examples illus-
trate the use of these theorems.
Example 2.15
A box contains seven black balls and five white balls. If two balls are drawn with
replacement—that is, the ball drawn first is put back after observing its color
before the second ball is drawn—what is the probability that both balls drawn
are black?
Solution
Let B1 be the event that the first ball is black. Then, P(B1) = 7/12.
Let B2 be the event that the second ball is black. Because the first ball is replaced,
P(B2) = 7/12.
It is easy to see that B1 and B2 are independent, because the outcome in the first
pick has no effect on the outcome in the second pick. Hence,
P(B1 ∩ B2) = P(B1)P(B2) = 7/12 × 7/12 = 49/144
Example 2.16
A box contains seven black balls and five white balls. If two balls are drawn without
replacement—that is, the ball drawn first is not put back after observing its color—
what is the probability that both balls drawn are black?
Solution
Let B1 be the event that the first ball is black. Then, P(B1) = 7/12.
Let B2 be the event that the second ball is black. B1 and B2 are not independent,
because P(B2) depends on what happens in the first pick. We have to use conditional
probability:
P(B2 | B1) = 6/11

P(B1 ∩ B2) = P(B2 | B1) × P(B1) = 6/11 × 7/12 = 42/132
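The two examples can be computed together in a few lines of Python using exact fractions (a sketch; the variable names are our own):

```python
from fractions import Fraction

black, white = 7, 5
total = black + white

# With replacement (Example 2.15): the two draws are independent.
p_with = Fraction(black, total) * Fraction(black, total)              # 49/144

# Without replacement (Example 2.16): condition the second draw on the first.
p_without = Fraction(black, total) * Fraction(black - 1, total - 1)   # 42/132 = 7/22

print(p_with, p_without)
```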
In some situations, the probability of an event of interest is known only when con-
ditioned on the occurrence of other events. The theorem of total probability that fol-
lows makes use of conditional probabilities and gives a very useful result when only
conditional information is available.
2.5.1.3.8 The Theorem of Total Probability Let B1, B2, …, Bk be partitions of a sample
space S such that (B1 ∪ B2 ∪ … ∪ Bk) = S and (Bi ∩ Bj) = ∅ (null set) for any pair i and j
(see diagram below). The partitions are mutually exclusive events that jointly make up
the sample space. If A is an event of interest in the same sample space, then:
$P(A) = P(A \mid B_1)P(B_1) + P(A \mid B_2)P(B_2) + \cdots + P(A \mid B_k)P(B_k)$

(Sketch: the sample space S partitioned into B1, B2, …, Bk, with the event A cutting across the partitions.)
The individual product terms on the right-hand side of the above equation give the
joint probabilities of A with each of the partitions (see sketch above). When they are
added together, the sum gives the total unconditional probability of A.
Example 2.17
Solution
(Sketch: the student population partitioned by major: ME, EE, CE, IE, and MfG.)
The majors form a partition of all the students in the college. The probabilities of the
partitions are (see sketch above):
Note that this probability represents the proportion of female students in the
whole college. We are able to find the proportion of female students in the college
from the information on their proportions within majors.
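Because the numerical details of this example are not reproduced above, the following Python sketch applies the theorem of total probability to purely hypothetical proportions (every number below is an illustrative assumption, not a figure from the text):

```python
# Hypothetical share of students in each major, and hypothetical fraction of
# female students within each major.
major_share = {"ME": 0.30, "EE": 0.25, "CE": 0.20, "IE": 0.15, "MfG": 0.10}
female_given_major = {"ME": 0.15, "EE": 0.20, "CE": 0.30, "IE": 0.40, "MfG": 0.25}

# Theorem of total probability: P(F) = sum over majors of P(F | major) * P(major)
p_female = sum(female_given_major[m] * major_share[m] for m in major_share)
print(round(p_female, 4))   # proportion of female students in the whole college
```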
2.5.1.4 Counting the Sample Points in a Sample Space The reader would have noticed
that, often, calculating the probability of an event involves first counting the number of
sample points in the sample space and also counting the number of sample points in
the event. Sometimes, counting the number of sample points in a sample space and
events may offer challenges. The following methods provide help in those situations.
If an operation can be performed in n1 different ways, and for each of these a second
operation can be performed in n2 different ways, then the two operations can be
performed together in n1 × n2 different ways. (Sketch: Operation 1 with n1 possible
choices, each followed by Operation 2 with n2 possible choices.)
Example 2.18
There are five boys and four girls in a group. Teams of two are to be formed with one
boy and one girl. How many such teams are possible?
Solution
This problem can be looked at as filling two boxes, one with a boy and the other
with a girl. Then, we want to find the number of ways in which both boxes can be
filled together. There are five ways of filling the first box with a boy and four ways
of filling the second with a girl:
B G
5 4
Both boxes can be filled together in 5 × 4 = 20 ways. Thus, there are 20 ways of
choosing a team of two consisting of 1 boy and 1 girl, out of the 5 boys and 4 girls.
Example 2.19
How many four-lettered words are possible from five letters, E, F, G, H, and I, if
each letter is used only once? How many of them will end with a vowel?
Solution
This problem again can be looked at as filling boxes—four of them:
1 2 3 4
2 × 3 × 4 × 5 = 120
Starting with the fourth box, there are 5 possible ways of filling this box. After
the fourth box is filled, there are 4 possible ways of filling the third box, and so
on. There are 120 ways of filling the four boxes together and, therefore, 120 four-
lettered words are possible from the 5 letters.
If the word has to end with a vowel, then the last box can be filled in only 2 ways
with E or I. Thus, the number of ways in which the four boxes can be filled with
this restriction is 48.
1 2 3 4
2 × 3 × 4 × 2 = 48
Similarly, there are six permutations of the three objects taken two at a time:
In both of these examples, we wrote down all of the permutations and counted them.
Suppose there are a large number of objects—say, 20—and it is not possible to write
down all of the possible permutations and count them. The following theorem helps.
The number of permutations of n distinct objects taken r at a time is nPr = n!/(n − r)!.
When r = n, nPn = n!
Box:              r          r − 1        …    2        1
Ways of filling:  (n − r + 1) (n − r + 2)  …   (n − 1)   n
Starting at the last box on the right-hand side, Box 1 can be filled in n different
ways. After filling Box 1, there will be (n – 1) ways of filling Box 2 and so on. The rth
box can be filled in (n – r + 1) ways.
So, the number of ways of filling all the r boxes together is:
$(n - r + 1)(n - r + 2)\cdots(n - 1)\,n = \dfrac{n!}{(n - r)!}$
Example 2.20
How many starting lineups are possible with a team of 10 basketball players?
Solution
Because the arrangement (or position) of the players is important in a lineup, the
number of lineups is given by:
$_{10}P_5 = \dfrac{10!}{(10 - 5)!} = \dfrac{10!}{5!} = 10 \times 9 \times 8 \times 7 \times 6 = 30{,}240$
This result follows from the result for the number of permutations. Each group of r
objects, when permuted within the group, can produce r! permutations, so the num-
ber of combinations multiplied by r! should be equal to the number of permutations.
That is,

$_nP_r = \dbinom{n}{r} r!$

Therefore, $\dbinom{n}{r} = \dfrac{_nP_r}{r!} = \dfrac{n!}{(n - r)!\, r!}$
Example 2.21
How many different teams of five can be formed from a group of 10 players?
Solution
Number of teams of 5 out of 10 = $\dbinom{10}{5} = \dfrac{10!}{5!\,5!} = 252$
So, we see that 252 teams (combinations) of 5 can be formed out of 10 players,
and 30,240 lineups (permutations) are possible out of these teams.
Example 2.22
How many different committees of three can be formed with 2 women and 1 man
out of a group of 4 women and 6 men?
Solution
Look at this problem as having to fill two boxes, one with 2 women and the other
with 1 man:
W: 2     M: 1

The box with women can be filled in $\binom{4}{2}$ ways and the box with men in $\binom{6}{1}$ ways.
The two boxes together can be filled in:

$\dbinom{4}{2}\dbinom{6}{1} = \dfrac{4!}{2!\,2!} \times \dfrac{6!}{1!\,5!} = 6 \times 6 = 36$ ways
Example 2.23
A box contains 20 pencils, of which 4 are defective and the rest are good. A random
sample of 4 pencils is drawn from the box. (a) How many different samples of 4 are
possible? (b) How many of those samples contain exactly 1 defective? (c) What is the
probability that the sample contains exactly 1 defective? (d) What is the probability
that the sample contains no more than 1 defective?
Solution
a. Number of samples of 4 out of 20 = $\dbinom{20}{4} = \dfrac{20!}{4!\,16!} = 4845$

b. Number of samples with exactly 1 defective = $\dbinom{16}{3}\dbinom{4}{1} = \dfrac{16!}{3!\,13!} \times 4 = 560 \times 4 = 2240$

c. P(1 defective) = $\dfrac{\binom{16}{3}\binom{4}{1}}{\binom{20}{4}} = \dfrac{560 \times 4}{4845} = 0.462$

d. P(no more than 1 defective) = P(0 defective or 1 defective) = P(0 defective) + P(1 defective)
   = $\dfrac{\binom{16}{4}}{\binom{20}{4}} + 0.462 = \dfrac{1820}{4845} + 0.462 = 0.376 + 0.462 = 0.838$
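The counts and probabilities in this example can be checked with Python's math.comb (a minimal sketch; the variable names are our own):

```python
from math import comb

total, defective, sample = 20, 4, 4
good = total - defective

n_samples = comb(total, sample)                    # (a) 4845 possible samples
n_one_def = comb(good, 3) * comb(defective, 1)     # (b) 560 * 4 = 2240 samples

p_one_def = n_one_def / n_samples                  # (c) about 0.462
p_zero_def = comb(good, 4) / n_samples             #     1820 / 4845, about 0.376
p_at_most_one = p_zero_def + p_one_def             # (d) about 0.838

print(n_samples, n_one_def, round(p_one_def, 3), round(p_at_most_one, 3))
```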
Summary on Probability
Whenever we have to deal with events that have uncertainty associated with their
occurrence, we need to use probability, which quantifies the chance of their occur-
rence by assigning a number to that chance.
2.5.2 Exercises in Probability
2.20
a. In how many different ways can a 9-question true-or-false examination be
answered without regard to being right or wrong?
b. If a student answers such an examination at random, what is the probability
that he or she will have all 9 answers correct?
2.21 In how many ways can 4 boys and 3 girls be seated in a row if no 2 boys or
girls should sit next to each other?
2.22 An urn contains 15 green balls and 12 blue balls. What is the probability that
two balls drawn from the urn with replacement will both be green?
2.23 An urn contains 15 green balls and 12 blue balls. What is the probability that
two balls drawn from the urn without replacement will both be green?
2.24 Three defective items are known to be in a container containing 30 items. A
sample of 4 items is selected at random without replacement.
a. What is the probability that the sample will contain no defectives?
b. What is the probability that the sample will contain exactly 2 defectives?
c. What is the probability that the number of defectives in the sample will be
2 or less?
2.25 A high-rise tower built by a developer contains 200 condominium units. The
QA department of the developer has to approve the tower before turning it
over to sales. The QA department will select a random sample of 8 units and
inspect them. If more than three major defects are found in any unit, they will
reject the unit as defective. If more than 2 of the 8 inspected units are defective,
the entire tower will be rejected. If 10 of the 200 units are known to have more
than 3 major defects, what is the probability that the tower will be rejected?
2.26 A manufacturer receives a certain part from four vendors in the following per-
centages: Vendor A = 28%, Vendor B = 32%, Vendor C = 18%, and Vendor D =
22%. Inspection of incoming parts reveals that 2% from Vendor A, 1.5% from
Vendor B, 2.5% from Vendor C, and 1% from Vendor D are defective. What
percentage of the total supplies received by the manufacturer is defective?
2.27 A candidate running for U.S. Senate in the state of Illinois found in a survey
that 75% of registered Democrats, 5% of the registered Republicans, and
60% of independents would vote for him. If the voters in the state are 54%
registered Democrats, 26% registered Republicans, and the rest indepen-
dents, what percentage of the overall vote can the candidate expect to get?
2.5.3 Probability Distributions
2.5.3.1 Random Variable A random variable is a variable that assumes for its values the
outcomes of a random experiment. A random variable takes only real numbers for its
values. If an experiment produces outcomes that are not in numbers—that is, its out-
comes are in notations such as HHH or TTTH—then a random variable is used to
produce numbers to represent the outcomes. Random variables are named or labeled
using capital letters X, Y, Z, and so on. The set of all possible values of a random vari-
able X in an experiment is called its “range space” and is denoted by R X. As in the case
of the sample space, events can be defined in the range space, and their probabilities
can be determined. A few examples of random variables are given below.
A discrete random variable is one that takes a finite (or countably infinite) number of
possible values. A continuous random variable is one that takes an infinite number of
possible values; it takes values in an interval. The first three of the above examples are
discrete random variables, the fourth example is a discrete random variable that takes a
countably infinite number of values, and the last three are continuous random variables.
Random variables are used to represent populations with variability in them. For
example, we say: let X be the height of students in a university to mean that the values
of heights in the population are the values of the random variable X. The random vari-
able thus makes the population with variability in it a mathematical entity.
We come across several random variables in the natural and man-made worlds.
Often, decisions have to be made involving these variables, and therefore knowledge
of how they behave is necessary. For example, a car dealer would want to know how
many cars of a certain model to order for the next month. It depends on the demand
for the model car. The demand for the cars in any month is a random variable and the
dealer would want to know how the demand behaves and would want to be able to
predict the value of the demand for the next month. In other words, the dealer would
like to know the possible values of the random variable and how likely each value is to
occur. The dealer needs the probability distribution of the random variable “demand,”
which would enable prediction of the demand.
Many examples used in this chapter and later chapters will show how the outcomes
of many processes we deal with in quality engineering are random variables and how
knowledge of their behavior is necessary for predicting the output of the process. That
knowledge is obtained by modeling the process outputs using distributions appropri-
ate to those processes. Predictions thus made would enable evaluation of the capability
of the processes to meet customer requirements. If the predictions presage inadequacy,
S tatis ti c s f o r Q ua lit y 87
the same models would provide directions for making changes so that adequate capa-
bilities can be realized.
Although a random variable may seem to behave in a haphazard or chaotic manner
to an ordinary observer, statisticians have found that such variables follow certain patterns and
that the patterns can be captured in mathematical models. These models, represented
by formulas, graphs, or tables, are called “probability distributions.” Because the two
types of random variables mentioned above are mathematically different, one being
continuous and the other non-continuous, they need two different types of models.
The model used to describe a discrete random variable is called the “probability mass
function,” and that used to describe a continuous random variable is called the “prob-
ability density function.” First, we will discuss the probability distribution of a dis-
crete random variable: the probability mass function.
[Note: The uppercase letter X is used to represent the name of the random variable and
the lowercase letter x is used to represent a value assumed by X.]
2.5.3.2 Probability Mass Function A function p(x) with the following properties describes a discrete random variable X:
1. p(x) ≥ 0 for all x;
2. $\sum_x p(x) = 1$; and
3. p(x) = P(X = x).
In words, a nonnegative function, denoted by p(.), is used to describe a discrete random variable. The function value of p(.) at any possible value of the random variable gives the probability that the random variable takes that particular value, and the function values add to 1.0 when summed over all possible values of the random variable. Such a function is called the probability mass function (pmf) of the discrete random variable.
Example 2.24
A random variable X denotes the number of tails when a coin is tossed three times.
Find its probability mass function.
Solution
The possible values of the random variable, or the range space of X, R X : (0, 1, 2, 3).
We can find the probability of events in R X by identifying their equivalent events
in the sample space. An equivalent event in the sample space is made up of all those
elements in the sample space that will be mapped by the random variable onto the
event of interest in the range space. The probability of the event in R X is the same as
the probability of the equivalent event in sample space.
The probabilities of the random variable taking different values in the example
are found as follows:
Figure 2.9a shows the elements of the range space, their equivalent events in the
sample space, and how the probabilities of the events in the range space are obtained.
We have obtained the probability mass function of the random variable X. The func-
tion p(x) satisfies all the required properties, and the probability distribution is presented
in a table as part of Figure 2.9a. It can also be represented in a graph, as shown in Figure
2.9b, which is called a “probability histogram.” Or, it can be represented in a closed form:
$$p(x) = \frac{\binom{3}{x}}{8}, \quad x = 0, 1, 2, 3$$
The closed form expression is concise and convenient. By any form, however, p(x)
gives the probability of the random variable taking the possible values and, thus,
describes the behavior of the random variable.
x       0     1     2     3
p(x)    1/8   3/8   3/8   1/8
Figure 2.9 (a) Calculating the probabilities of events in a range space. (b) Graphical representation of a probability
mass function (pmf) of Example 2.24.
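As a check on Example 2.24, the pmf can also be obtained by enumerating the eight equally likely outcomes of the sample space; the following sketch (standard library only) does exactly that:

```python
# Sketch: the pmf of Example 2.24 (number of tails in three tosses of a coin),
# built by enumerating the eight equally likely outcomes of the sample space.
from itertools import product
from collections import Counter
from fractions import Fraction

outcomes = list(product("HT", repeat=3))          # HHH, HHT, ..., TTT
counts = Counter(seq.count("T") for seq in outcomes)

for x in sorted(counts):
    print(x, Fraction(counts[x], len(outcomes)))  # 0 1/8, 1 3/8, 2 3/8, 3 1/8
```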
2.5.3.3 Probability Density Function The method used above to describe a discrete
random variable will not work in the case of a continuous random variable, because
the continuous random variable takes an infinite number of possible values. The prob-
ability that a continuous random variable equals exactly any one of the infinite possi-
ble values is zero, so the probabilities of individual values cannot be tabulated. Hence,
we must consider the probability of a continuous random variable taking values in an
interval.
If X is a continuous random variable, then a function f (x) is defined with the fol-
lowing properties and is called the “probability density function” (pdf) of X:
1. f(x) ≥ 0 for all values of x;
2. $\int_x f(x)\,dx = 1$; and
3. $P(a \le X \le b) = \int_a^b f(x)\,dx$
Example 2.25
A random variable X has the pdf

$$f(x) = \begin{cases} 0.01x, & 0 \le x < 10 \\ 0.01(20 - x), & 10 \le x \le 20 \\ 0, & \text{otherwise} \end{cases}$$

a. Verify that f(x) is a valid pdf.
b. Find P(5 ≤ X ≤ 10).
Solution
Figure 2.10 shows the graph of the function.
a. $\int_0^{20} f(x)\,dx$ = area under the curve over all possible values of X = ½ × 20 × 0.10 = 1.0, so f(x) is a valid pdf.
b. $P(5 \le X \le 10) = \int_5^{10} f(x)\,dx$ = area under the "curve" between 5 and 10 = ½(0.05 + 0.10)(5) = 0.375.
In this problem, the areas could be computed easily from basic geometry. If the graph of the function is not a simple geometric figure, then integration has to be used in order to compute the areas.
The cumulative distribution function (CDF) of a random variable X, denoted by F(x), is defined as:
$$F(x) = P(X \le x)$$
F(x) for any x gives the probability that the random variable takes values starting from the lowest possible value up to and including the given value x.
If X is a discrete random variable with pmf p(x), then
$$F(x) = P(X \le x) = \sum_{t \le x} p(t)$$
If X is a continuous random variable with pdf f(x), then
$$F(x) = P(X \le x) = \int_{t \le x} f(t)\,dt$$
The CDF F(x) is a step function for a discrete random variable, which increases in
steps with the increases occurring at the values assumed by X. If X is a continuous
random variable, then F(x) increases continuously. Figure 2.11 shows the shapes of
the CDF of discrete and continuous random variables.
As will be seen later, the availability of F(x) for various probability distributions,
e.g., normal, Poisson, often conveniently tabulated, is very helpful when proportions
Figure 2.11 The shapes of the CDF, F(x), of a discrete random variable (a step function) and of a continuous random variable (a continuous curve).
in populations below a given value, above a given value, or between two given values
are to be computed. With the help of such tables, we can avoid having to do repeated
summation and integration of probability distribution functions.
1. Mean:
If X is a discrete random variable with pmf p(x), then the mean is defined as:
$$\mu_X = \sum_x x\,p(x)$$
If X is a continuous random variable with pdf f(x), then the mean is defined as:
$$\mu_X = \int_x x f(x)\,dx$$
2. Variance:
If X is a discrete random variable with pmf p(x), then the variance is defined as:
$$\sigma_X^2 = \sum_x (x - \mu_X)^2 p(x)$$
If X is a continuous random variable with pdf f(x), then the variance is defined as:
$$\sigma_X^2 = \int_x (x - \mu_X)^2 f(x)\,dx$$
From the definition of the mean μX, we can see that the mean is a weighted average of
the values of the random variable, which represents the center of gravity of the distribu-
tion and indicates where a distribution is located on the x-axis. The variance σ 2X is the
weighted average of the squared deviations of the values of the variable from its mean.
It says how dispersed the distribution is about the mean; the larger the value of σ 2X , the
more variability there is in the distribution. The standard deviation of a distribution is
the (positive) square root of the variance. Thus, the standard deviation σX is also a mea-
sure of the variability of a distribution. The standard deviation is commonly used as the
measure of variability in quality engineering because its unit is the same as that of the
variable whereas the unit of the variance will be the square of the unit of the variable.
Example 2.26
If X represents the number of heads when a coin is tossed three times, find the
following:
a. μX
b. σ²X
c. F(x)
d. P(X ≤ 1)
e. P(1 < X ≤ 3)
Solution
From an earlier example, the distribution of X is given by:
x 0 1 2 3
p(x) 1/8 3/8 3/8 1/8
a. $\mu_X = \sum_x x\,p(x) = 0 \times \tfrac{1}{8} + 1 \times \tfrac{3}{8} + 2 \times \tfrac{3}{8} + 3 \times \tfrac{1}{8} = \tfrac{12}{8} = \tfrac{3}{2}$
b. $\sigma_X^2 = \sum_x (x - \mu_X)^2 p(x) = \left(0 - \tfrac{3}{2}\right)^2 \tfrac{1}{8} + \left(1 - \tfrac{3}{2}\right)^2 \tfrac{3}{8} + \left(2 - \tfrac{3}{2}\right)^2 \tfrac{3}{8} + \left(3 - \tfrac{3}{2}\right)^2 \tfrac{1}{8} = \tfrac{9}{4}\cdot\tfrac{1}{8} + \tfrac{1}{4}\cdot\tfrac{3}{8} + \tfrac{1}{4}\cdot\tfrac{3}{8} + \tfrac{9}{4}\cdot\tfrac{1}{8} = \tfrac{3}{4}$
c. F(x):
x       x < 0    0 ≤ x < 1    1 ≤ x < 2    2 ≤ x < 3    x ≥ 3
F(x)    0        1/8          4/8          7/8          8/8 = 1.0
The F(x) is a step function for this discrete random variable. For example,
F(x) = 1/8 from x = 0 until just before x = 1. At x = 1, it jumps to 4/8 and
remains at that value until just before x = 2. At x = 2, it jumps to 7/8, and
so on.
d. P(X ≤ 1) can be read off the table as F(1) = 4/8.
e. P(1 < X ≤ 3) = P(X ≤ 3) − P(X ≤ 1) = F(3) − F(1) = 1.0 − 0.5 = 0.5
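A brief sketch of the same calculations, working directly from the pmf table of Example 2.26 (exact fractions are used so the results match 3/2 and 3/4):

```python
# Sketch: mean, variance, and CDF for the discrete distribution of Example 2.26,
# computed directly from the pmf table using exact fractions.
from fractions import Fraction

pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

mu = sum(x * p for x, p in pmf.items())               # weighted average of x
var = sum((x - mu) ** 2 * p for x, p in pmf.items())  # weighted squared deviations

def F(x):
    """CDF: P(X <= x), summing the pmf over values t <= x."""
    return sum(p for t, p in pmf.items() if t <= x)

print(mu, var)       # 3/2 3/4
print(F(1))          # 1/2   -> P(X <= 1)
print(F(3) - F(1))   # 1/2   -> P(1 < X <= 3)
```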
Example 2.27
A random variable X has pdf

$$f(x) = \begin{cases} 2x, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

Find the following:
a. μX
b. σ²X
c. F(x)
d. P(X ≤ 0.3)
e. P(0.1 < X ≤ 0.3)
Solution
a. $\mu_X = \int_0^1 x(2x)\,dx = 2\left[\dfrac{x^3}{3}\right]_0^1 = \dfrac{2}{3}$
b. $\sigma_X^2 = \int_0^1 (x - \mu_X)^2 (2x)\,dx = \dfrac{1}{18}$
c. $F(x) = \int_0^x 2t\,dt = t^2\big|_0^x = x^2, \quad 0 \le x \le 1$
Therefore,
$$F(x) = \begin{cases} 0, & x < 0 \\ x^2, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}$$
d. P(X ≤ 0.3) = F(0.3) = 0.3² = 0.09
e. P(0.1 < X ≤ 0.3) = F(0.3) − F(0.1) = 0.3² − 0.1² = 0.09 − 0.01 = 0.08
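The integrals of Example 2.27 are simple enough to do by hand; the following sketch checks them numerically with a crude midpoint rule, so no calculus software is assumed:

```python
# Sketch: numerical check of Example 2.27, where f(x) = 2x on [0, 1].
# A crude midpoint rule replaces the (easy) integrals; standard library only.
def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

mu = integrate(lambda x: x * f(x), 0, 1)               # about 2/3
var = integrate(lambda x: (x - mu) ** 2 * f(x), 0, 1)  # about 1/18
p_le_03 = integrate(f, 0, 0.3)                         # F(0.3), about 0.09
print(round(mu, 4), round(var, 4), round(p_le_03, 4))
```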
We just defined the generic probability distribution functions, one for describing a
discrete random variable and another to describe a continuous random variable. These
are idealized mathematical functions satisfying certain properties. Many such math-
ematical functions have been proposed to describe random variables encountered in
various situations. A few that are most useful in describing random variables encoun-
tered in quality engineering are discussed below. The nature of these distributions,
the context in which they are useful, and their important characteristics are examined.
Specifically, we will look next at:
1. The binomial distribution;
2. The Poisson distribution; and
3. The normal distribution.
The binomial and Poisson are discrete distributions, whereas the normal is a continuous
distribution. The chi-squared distribution and the t-distribution are two continuous distri-
butions that will be discussed later in this chapter while discussing inference methods. The
exponential distribution, which is used to describe random variables that represent the life
of products, will be discussed in Chapter 3 when discussing reliability methods.
2.5.4.1 The Binomial Distribution Consider an experiment consisting of n independent trials, each of which results in either a "success" or a "failure," with the probability of success p remaining the same from trial to trial. If the random variable X denotes the number of successes in the n trials, its probability mass function is given by:
$$p(x) = \binom{n}{x} p^x (1 - p)^{n - x}, \quad x = 0, 1, \ldots, n$$
where p is the probability of success in any one trial and (1 − p) is the probability of
failure in one trial. This is called the binomial distribution, and it has two parameters.
The parameters of a distribution are the quantities that need to be specified in order
to complete the description of the distribution. The above expression for p(x) contains
two unknown quantities, n and p, which when specified completes the specification
of the distribution. We will use the notation X ~ Bi(n, p) to indicate that the random
variable X is binomially distributed with parameters n and p.
We do not attempt to prove here that the above expression is, indeed, the distribu-
tion of the random variable representing the number of successes in n independent
trials. That derivation is available in textbooks on probability and statistics, such as
Hines and Montgomery (1990) or Hogg and Craig (1965). We also will not show the
proof or derivation of any other distribution in this chapter. We will show how they
are used as models of random variables of interest to us. A few examples of binomial
random variables are given below.
1. X: the number of heads when a fair coin is tossed 10 times (a fair coin is one that has P(H) = P(T) = 0.5):
$$X \sim Bi(10,\ 0.5) \quad\text{and}\quad p(x) = \binom{10}{x}(0.5)^x (0.5)^{10-x}, \quad x = 0, 1, 2, \ldots, 10$$
2. Y: the number of baskets a ballplayer makes in 12 free throws if her average is 0.4:
$$Y \sim Bi(12,\ 0.4) \quad\text{and}\quad p(y) = \binom{12}{y}(0.4)^y (0.6)^{12-y}, \quad y = 0, 1, 2, \ldots, 12$$
3. W: the number of defectives in a sample of 20 taken from a (large) lot having 2% defectives:
$$W \sim Bi(20,\ 0.02) \quad\text{and}\quad p(w) = \binom{20}{w}(0.02)^w (0.98)^{20-w}, \quad w = 0, 1, 2, \ldots, 20$$
The reader should satisfy himself/herself about the choice of the values for the
parameters and the possible values of the variable in each of the above examples.
Another issue the reader should note is about the independence of the trials in each of
the experiments. Independence, one may recall, implies that what happens in one trial
does not affect what happens in other trials. Only when this independence among the trials exists can the binomial distribution be used as a model for the random variable "number of successes."
In the first example, the trials are obviously independent because the outcome in
one toss of the coin will not affect the outcomes in other tosses. Thus, the number
of heads out of n tosses of a coin is a perfect example of a binomial random variable.
In the second example, it might be reasonable to assume that the trials are indepen-
dent, although they may not be strictly so. If the assumption of independence can
be made, the binomial model can be used in this case. In the third example, the
independence requirement will not be met if the sample is chosen from a small lot.
The reader can see that when the sample units are picked without replacement, what
happens in one pick does affect what happens in subsequent picks, and so inde-
pendence between picks cannot be assumed. However, when the lot size becomes
larger and larger compared to the sample size, the assumption of independence
becomes more and more valid. Hence, the binomial model can be used in the third
example only when the sampling is done from a large lot. For practical purposes,
a lot is considered to be “large” if its size is 30 or larger. The following example
shows the usefulness of the binomial distribution as a model for random variables
of this kind.
Example 2.28
A sample of 12 bolts is picked from a production line and inspected. If the produc-
tion process is known to produce 2% defectives, what is the probability that the
sample will have exactly 1 defective? What is the probability that there will be no
more than 1 defective?
Solution
Let X represent the random variable representing the number of defectives in the
sample of 12. Then, assuming the process produces a large population,
X ∼ Bi (12, 0.02)
$$p(x) = \binom{12}{x}(0.02)^x (0.98)^{12-x}, \quad x = 0, 1, \ldots, 12$$
$$P(X = 1) = p(1) = \binom{12}{1}(0.02)^1 (0.98)^{11} = 0.192$$
$$P(\text{no more than 1 defective}) = P(X \le 1) = p(0) + p(1) = \binom{12}{0}(0.02)^0 (0.98)^{12} + \binom{12}{1}(0.02)^1 (0.98)^{11}$$
The probability is 0.977 that a sample of 12 drawn from a 2% lot will have 1 or fewer defectives.
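A minimal sketch of the binomial calculation in Example 2.28, using math.comb to evaluate the pmf directly (the helper name binom_pmf is illustrative):

```python
# Sketch: binomial probabilities of Example 2.28 with n = 12 and p = 0.02.
import math

def binom_pmf(x, n, p):
    """P(X = x) for X ~ Bi(n, p)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 12, 0.02
p1 = binom_pmf(1, n, p)            # exactly one defective
p_le1 = binom_pmf(0, n, p) + p1    # no more than one defective

print(round(p1, 3))                # 0.192
print(round(p_le1, 3))             # 0.977
```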
2.5.4.1.1 The Mean and Variance of a Binomial Variable If X ∼ Bi(n, p), it can be
shown, using the definition for mean and variance, that:
$$\mu_X = \sum_{x=0}^{n} x \binom{n}{x} p^x (1-p)^{n-x} = np$$
$$\sigma_X^2 = \sum_{x=0}^{n} (x - \mu_X)^2 \binom{n}{x} p^x (1-p)^{n-x} = np(1-p)$$
Thus, the mean of a binomial random variable with parameters n and p equals np, and the variance is np(1 − p). The standard deviation of the binomial variable is $\sqrt{np(1-p)}$.
Example 2.29
If samples of 12 bolts are drawn repeatedly from a production line having 2% defec-
tives, what will be the average number of defectives in the samples? What will be
the standard deviation of the number of defectives in the samples?
Solution
$$X \sim Bi(12,\ 0.02)$$
$$\mu_X = 12 \times 0.02 = 0.24$$
$$\sigma_X = \sqrt{np(1-p)} = \sqrt{12 \times 0.02 \times 0.98} = 0.485$$
This means that if samples of 12 are repeatedly taken from the 2% lot, the number
of defectives per sample, in the long run, will average at 0.24. The variability in the
number of defectives will be given by σ = 0.485.
2.5.4.2 The Poisson Distribution The Poisson distribution has been found to be a good
model to describe random variables that represent counts that can take values anywhere
from zero to infinity, the following being a few examples of such random variables.
1. Number of knots per sheet of plywood
2. Number of blemishes per shirt
3. Number of pinholes per square-foot of galvanized steel sheet
4. Number of accidents per month in a factory
5. Number of potholes per mile of a city road
The probability distribution (probability mass function) of such a random variable X
is given by:
$$p(x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad x = 0, 1, 2, \ldots$$
This distribution is called the Poisson distribution and has one parameter denoted by
λ. We will use the notation X ~ Po(λ) to denote that the random variable has a Poisson
distribution with the parameter λ.
Note that the Poisson variable takes values from zero to infinity, which is often
used as a clue to recognizing Poisson variables. The possible values of the variable are
countable (as 0, 1, 2, …) and could be anywhere from zero to a very large value.
2.5.4.2.1 The Mean and Variance of the Poisson Distribution If X ~ Po(λ), then the mean of X can be shown to be
$$\mu_X = \sum_{x=0}^{\infty} x \frac{e^{-\lambda}\lambda^x}{x!} = \lambda$$
Similarly, the variance can be shown to be $\sigma_X^2 = \lambda$.
Notice that the mean and the variance of a Poisson random variable are equal. This is
a unique property of the Poisson variable. Also, the mean of a Poisson distribution is
its parameter. This means that if the average value of a Poisson variable is known, then
its distribution is immediately defined, and the probability of various events relating to
the random variable can be found.
Example 2.30
A typist makes, on average, three mistakes per page. What is the probability that
the page he types for a typing test will have no more than one mistake?
Solution
Let X be the number of mistakes per page. Then,
$$X \sim Po(3) \;\Rightarrow\; p(x) = \frac{e^{-3}\,3^x}{x!}, \quad x = 0, 1, \ldots$$
$$P(\text{no more than 1 mistake}) = P(X \le 1) = p(0) + p(1) = e^{-3}\left[\frac{3^0}{0!} + \frac{3^1}{1!}\right] = e^{-3}[4] = 0.199$$
Example 2.31
If a typist makes, on average, three mistakes per page, what would be the standard
deviation of mistakes per page typed by this typist in the long run?
Solution
$$X \sim Po(3) \quad\Rightarrow\quad \sigma_X = \sqrt{3} = 1.732$$
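The Poisson calculations of Examples 2.30 and 2.31 can be reproduced the same way; the helper name poisson_pmf is illustrative:

```python
# Sketch: Poisson probabilities for Examples 2.30 and 2.31 (lambda = 3 mistakes per page).
import math

def poisson_pmf(x, lam):
    """P(X = x) for X ~ Po(lam)."""
    return math.exp(-lam) * lam**x / math.factorial(x)

lam = 3
p_le1 = poisson_pmf(0, lam) + poisson_pmf(1, lam)   # no more than one mistake
print(round(p_le1, 3))             # 0.199
print(round(math.sqrt(lam), 3))    # 1.732, the standard deviation (sqrt of the mean)
```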
2.5.4.3 The Normal Distribution The normal distribution has been found to be the model
for many random variables found in the natural and man-made world. Many measure-
ments such as amount of rainfall in a place, weight of newborn babies in a hospital,
diameter of bolts, and thickness of sheet steel coming out of a production line are known
to fit the normal distribution. The normal distribution is a continuous distribution.
A random variable X is said to have a normal distribution if its probability distribu-
tion (probability density function) is given by:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty,\ \sigma > 0$$
where the two quantities, μ and σ, are the parameters of the distribution. We use the
notation X ∼ N(μ, σ2) to denote that the random variable X has normal distribution
with parameters μ, and σ2. By specifying values for μ and σ2, the distribution of a
normal random variable can be completely described. The graph of the density func-
tion is as shown in Figure 2.12a. (We usually graph the shape of a pdf to gain a visual
understanding of how it behaves.) The graph of the normal pdf, which is known as the
“normal curve,” has certain special properties:
1. It is symmetric with respect to a vertical line at x = μ.
2. It is asymptotic with respect to the x-axis on both sides of the vertical line.
3. The maximum value of f(x) occurs at x = μ.
4. The two points of inflexion occur at σ distances on each side of μ.
It can be shown using calculus that:
$$\int_x f(x)\,dx = 1 \qquad [\text{Area under the curve} = 1.0]$$
$$\int_x x f(x)\,dx = \mu \qquad [\text{Mean of the distribution} = \mu]$$
$$\int_x (x-\mu)^2 f(x)\,dx = \sigma^2 \qquad [\text{Variance of the distribution} = \sigma^2]$$
Notice that of the two parameters of the normal distribution, one equals its mean and
the other its variance.
We come across many normally distributed random variables in quality engineering.
Variables such as the length of bolts, diameter of bores, strength of wire, and weight of
parcels are examples of measurements that can be expected to follow the normal distribu-
tion. Often, we would know that a random variable (a population) has the normal distri-
bution with a known average and a known variance. We will be interested in finding the
proportion of the population that lies below a value, above a value or in a given interval.
As an example, we may know that the net weight of nails in 20-lb. boxes is nor-
mally distributed with a mean of 20 lbs. and a variance of 9 lbs2. That is, if the random
variable X represents the weights, then X ~ N (20, 9). We may want to find the pro-
portion of the boxes that have a net weight of less than 15 lbs., the lower specification
limit for net weights; that is, we want P(X ≤ 15). This probability is given by the area
below 15 under the curve defined by the function
$$f(x) = \frac{1}{3\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-20}{3}\right)^2}$$
Thus,
$$P(X \le 15) = \int_{-\infty}^{15} \frac{1}{3\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-20}{3}\right)^2} dx \quad \text{(see Figure 2.12b)}$$
It is not easy to find this area using calculus unless numerical methods of integration
(with the help of a computer program) are used. A different approach is taken to find
the required probability using the “standard normal distribution,” which is defined
below.
2.5.4.3.1 The Standard Normal Distribution A random variable that is normally dis-
tributed with μ = 0 and σ2 = 1 is called the “standard normal variable.” The standard
normal variable is denoted by a unique label, Z. Its distribution is called the “standard
normal distribution.” Thus,
$$Z \sim N(0, 1)$$
$$\Phi(z) = \int_{-\infty}^{z} \varphi(t)\,dt$$
The notations ϕ(z) and Φ(z) are specifically assigned to the pdf and the CDF, respec-
tively, of the standard normal distribution. The CDF, Φ(z), which represents the area
under the standard normal curve from −∞ up to any z (see Figure 2.12c), has been
tabulated for many z values, as in Table A.1 in the Appendix. This table is called the
“standard normal table,” or simply the “normal table.”
Figure 2.12 (c) The cumulative probabilities in the standard normal distribution.
There is a relationship that relates any normal distribution to the standard normal
distribution. This relationship can be used to find the areas under any normal distri-
bution by converting the problem into one of the standard normal distribution. First,
we will solve some examples to see how the standard normal table is used to find areas
under the standard normal distribution.
Example 2.32
If Z ~ N(0, 1):
Solution
Sketches, in Figure 2.13a to g, have been made to help identify the areas under the
standard normal curve.
Figure 2.13 (a) Area in the standard normal distribution for Example 2.32a. (b) Area in the standard normal distribution for Example 2.32b.
Figure 2.13 (c) Area in the standard normal distribution for Example 2.32c. (d) Area in the standard normal distribution for Example 2.32d.
Figure 2.13 (e) Area in the standard normal distribution for Example 2.32e. (f) Area in the standard normal distribution for Example 2.32f.
Figure 2.13 (g) Area in the standard normal distribution for Example 2.32g.
The following theorem provides the relationship between any normal distribution
and the standard normal distribution.
Theorem
$$\text{If } X \sim N(\mu, \sigma^2), \text{ then } \frac{X - \mu}{\sigma} \sim N(0, 1)$$
Example 2.33
Let X ~ N(2.0, 0.05²).
Solution
b. $P(X > 2.2) = P\left(Z > \dfrac{2.2 - 2.0}{0.05}\right) = P(Z > 4.0) = \Phi(-4.0) = 0.0$
[For all practical purposes, take P(Z ≤ −3.5) = Φ(−3.5) = 0.]
c. $P(1.9 \le X \le 2.1) = P\left(\dfrac{1.9 - 2.0}{0.05} \le Z \le \dfrac{2.1 - 2.0}{0.05}\right) = \Phi(2) - \Phi(-2) = 0.9772 - 0.0228 = 0.9544$
d. We need to find t such that $P\left(Z \le \dfrac{t - 2.0}{0.05}\right) = 0.05$. From the normal tables, $\dfrac{t - 2.0}{0.05} = -1.645$, so t = 1.918.
e. We need to find s such that $P\left(Z > \dfrac{s - 2.0}{0.05}\right) = 0.05 \;\Rightarrow\; \dfrac{s - 2.0}{0.05} = 1.645 \;\Rightarrow\; s = 2.082$
f. We need to find k such that $P(\mu - k\sigma \le X \le \mu + k\sigma) = 0.9973$
$$\Rightarrow P\left(\frac{\mu - k\sigma - \mu}{\sigma} \le Z \le \frac{\mu + k\sigma - \mu}{\sigma}\right) = 0.9973 \;\Rightarrow\; P(-k \le Z \le k) = 0.9973 \;\Rightarrow\; k = 3$$
The result from Example 2.33f means that for any normal distribution, 99.73% of
the population falls within 3σ distance on either side of the mean. Using a similar
analysis, it can be shown that 95.44% of a normal population lies within 2σ distance
on either side of the mean, and 68.26% within 1σ distance on either side of the mean.
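Where a normal table is not at hand, the standard normal CDF can be written in terms of the error function available in Python's math module. The sketch below defines such a Φ(z) and checks the 1σ, 2σ, and 3σ coverages just quoted:

```python
# Sketch: the standard normal CDF written with the error function, and a check
# of the 1-, 2-, and 3-sigma coverages quoted in the text.
import math

def phi(z):
    """Phi(z) = P(Z <= z) for Z ~ N(0, 1)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for k in (1, 2, 3):
    print(k, round(phi(k) - phi(-k), 4))   # 0.6827, 0.9545, 0.9973
```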
Example 2.34
The diameters of bolts that are mass produced are known to be normally distributed with a mean of 0.25 in. and a standard deviation of 0.01 in. Bolt specifications call for 0.24 ± 0.02 in.
a. What proportion of the bolts will fall outside the specification limits?
b. What proportion will fall outside the limits if the process is centered at the specification mean?
Solution
Let D be the diameter of the bolts. Then, D ~ N(0.25, 0.012).
Note that the variance, not the standard deviation, is written as the parameter
of the distribution in the statement above. Note also the language we use. When
the random variable represents the population of bolt diameters, the probability the
random variable lies below a given value is used as the proportion of the population
that lies below the given value.
a. The lower and upper specification limits are at 0.22 and 0.26, respectively. We need:
$$P(D < 0.22) + P(D > 0.26) = P\left(Z < \frac{0.22 - 0.25}{0.01}\right) + P\left(Z > \frac{0.26 - 0.25}{0.01}\right) = \Phi(-3.0) + [1 - \Phi(1.0)] = 0.0013 + 0.1587 = 0.16$$
That is, about 16% of the bolts will be outside specification. Figure 2.14a shows the process before centering.
Figure 2.14 (a, b) The process making diameters of bolts before and after centering.
b. When the process mean is adjusted to coincide with the specification mean (i.e., when the process is centered):
D ~ N(0.24, 0.01²) [assuming the standard deviation remains the same]
$$P(D < 0.22) + P(D > 0.26) = P(Z < -2.0) + P(Z > 2.0) = 0.0228 + 0.0228 = 0.0456$$
That is, 4.56% will be outside specification. Figure 2.14b shows the process centered with respect to the specifications.
Example 2.34 brings home an important message for a quality engineer. Centering
a process with respect to the specifications will often lead to a considerable reduc-
tion in out-of-spec products. In many situations, centering may require only some
simple adjustment—such as adjusting a tool setting to raise or lower the process aver-
age. Further reduction in out-of-spec diameters may have to come from reducing the
variability.
Example 2.35
A battery manufacturer gives warranty to replace any battery that fails before four
years. The life of the manufacturer’s batteries is known to be normally distributed,
with a mean of 5 years and a standard deviation of 0.5 year.
a. What percentage of the batteries will need replacement within the war-
ranty period?
b. What should be the standard deviation of the battery life if no more than
0.5% of the batteries should require replacement under warranty?
Solution
Let X be the random variable that denotes the life of the batteries in years. Then,
X ∼ N(5, 0.52).
a. The proportion of the batteries failing before four years is calculated as:
$$P(X < 4) = P\left(Z < \frac{4 - 5}{0.5}\right) = P(Z < -2.0) = 0.0228$$
That is, 2.28% of the batteries will have to be replaced during the warranty period.
b. Let σ′ be the new standard deviation. Then, we need to find the value of σ′ such that P(X < 4) = 0.005; i.e.,
$$P\left(Z < \frac{4 - 5}{\sigma'}\right) = 0.005 \;\Rightarrow\; \frac{4 - 5}{\sigma'} = -2.575 \ (\text{from normal tables}) \;\Rightarrow\; \sigma' = \frac{1}{2.575} = 0.3883$$
The standard deviation should be reduced from 0.5 to 0.39; a 23% reduc-
tion in variability is needed.
Example 2.35 is a situation in which changing the center of the process, or increas-
ing the average life of the batteries, cannot be accomplished by a mere change of a
tool setting or some other simple means. Improving battery life may require research
and invention of a new process or new technology. However, improved results on bat-
tery life can be achieved by reducing the variability—in other words, by making the
batteries more uniform—possibly through controlling the process variables at more
consistent levels.
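As a sketch, the same erf-based Φ(z) can be used to reproduce the numbers in Examples 2.34 and 2.35; the value 2.575 is the normal-table value quoted in the text for a 0.5% tail:

```python
# Sketch: reproducing Examples 2.34 and 2.35 with an erf-based Phi(z).
import math

def phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Example 2.34: bolt diameters, specification limits 0.22 and 0.26 in.
for mu in (0.25, 0.24):                    # process mean before and after centering
    sigma = 0.01
    out_of_spec = phi((0.22 - mu) / sigma) + (1 - phi((0.26 - mu) / sigma))
    print(mu, round(out_of_spec, 4))       # about 0.160, then about 0.046

# Example 2.35: battery life N(5, 0.5^2), warranty at 4 years.
print(round(phi((4 - 5) / 0.5), 4))        # 0.0228 fail within warranty
print(round(1 / 2.575, 4))                 # 0.3883, the sigma needed for a 0.5% failure rate
```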
2.5.4.4 Distribution of the Sample Average X The sample average is a random variable.
Suppose, for instance, that we take samples of size 16 repeatedly from a population and calculate the average from each sample; these averages will not all be the same.
Each time a sample is taken, the x obtained from the sample is a value assumed by
the random variable X . If X is a random variable, then it should have a distribution,
a mean, and a variance. The following result gives the distribution of X from samples
taken from a normal population.
Theorem
$$\text{If } X \sim N(\mu, \sigma^2), \text{ then } \bar{X}_n \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
This result says that if samples of size n are taken repeatedly from a population that
is normally distributed and the averages are computed from those samples, then the
averages will be normally distributed, with a mean same as the mean of the parent
population and a variance smaller than the population variance, depending on the
sample size. The variance of the average decreases with increase in sample size.
Example 2.36
Diameters of bolts are known to be normally distributed, with a mean of 2.0 in. and
a standard deviation of 0.15 in. If a sample of nine bolts is drawn, their diameters
measured, and the average calculated, what is the probability that the average will
be less than 2.1 in.?
Solution
Let X denote the diameter of bolts:
$$X \sim N(2.0,\ 0.15^2)$$
Then the sample averages from the samples of size n = 9 will be distributed as:
$$\bar{X}_9 \sim N\!\left(2.0, \frac{0.15^2}{9}\right)$$
That is,
$$\mu_{\bar{X}} = 2.0, \qquad \sigma_{\bar{X}}^2 = \frac{0.15^2}{9} = \left(\frac{0.15}{3}\right)^2 = 0.05^2, \qquad \sigma_{\bar{X}} = 0.05$$
We need P ( X < 2.1). To convert this problem into one of the standard normal dis-
tribution, we have to subtract the mean of the random variable X , µ X from both
sides of the inequality and divide both sides by the standard deviation of the random
variable X , σ X . Therefore,
$$P(\bar{X} \le 2.1) = P\left(Z \le \frac{2.1 - 2.0}{0.05}\right) = P(Z \le 2) = 0.9772$$
That is, 97.72% of the averages will be less than 2.1 in.
2.5.4.5 The Central Limit Theorem The previous theorem gave the distribution of X
when the population is known to be normally distributed. What if the population is
not normal or its distribution is not known? The central limit theorem (CLT) gives
the answer.
Let a population have any distribution with a finite mean and a finite variance. That
is, let X ~ f (x), with mean = μ and variance = σ2. Then,
$$\bar{X}_n \xrightarrow{\ n \to \infty\ } N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
The CLT says that no matter what the population distribution is, the sample averages
tend to be normally distributed if the sample size is large. It is known that this happens
even for sample sizes as small as four or five if the population distribution is not extremely
non-normal. This is the reason statistical methods based on X are known to be robust
with respect to the normality assumption for the population. In other words, if a method
is based on X , then even if an assumption of normality for the population is required, the
conclusions from the method will remain valid even if the assumption of normality for
the population is not quite valid. The confidence intervals and tests of hypotheses about
population means—which we will discuss in the next section—as well as the control
charts for measurements, discussed in Chapters 4 and 5, have this robustness.
Example 2.37
Samples of size four are taken from a population whose distribution is not known
but its mean μ and variance σ2 are known to be 2.0 and 0.0225, respectively. The X
values from the samples are plotted sequentially on a graph. Two limit lines are to
be drawn, with the population mean at the center and the limit lines on either side,
equidistant from the center, so that 99.73% of the plotted X values will fall within
these limits. Where will the limit lines be located?
Solution
By the CLT, the sample averages $\bar{X}$ from samples of size 4 will be approximately normally distributed with mean 2.0 and variance 0.0225/4. That is,
$$\mu_{\bar{X}} = 2.0 \quad\text{and}\quad \sigma_{\bar{X}} = \frac{\sqrt{0.0225}}{\sqrt{4}} = \frac{0.15}{2} = 0.075$$
To include 99.73% of the X values, the limits must be located at 3-sigma distance from
the mean of X , where the “sigma” is σ X = 0.075. (Refer to Example 2.33f.) Therefore,
the limits must be at 2.0 ± 3(0.075)—that is, at 1.775 and 2.225, respectively.
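A small simulation makes the point of Example 2.37 concrete. The population below is taken to be uniform (an assumption for illustration only, since the example leaves the distribution unspecified) with mean 2.0 and variance 0.0225; samples of size four are drawn and the fraction of sample averages falling inside 2.0 ± 3(0.075) is tallied:

```python
# Sketch: illustrating Example 2.37 and the CLT by simulation. The population is
# assumed uniform with mean 2.0 and variance 0.0225; the example does not specify it.
import math
import random

random.seed(1)
mu, var, n = 2.0, 0.0225, 4
w = math.sqrt(3 * var)                       # uniform on [mu - w, mu + w] has variance w^2/3
lcl = mu - 3 * math.sqrt(var / n)            # 1.775
ucl = mu + 3 * math.sqrt(var / n)            # 2.225

trials, inside = 100_000, 0
for _ in range(trials):
    xbar = sum(random.uniform(mu - w, mu + w) for _ in range(n)) / n
    inside += lcl <= xbar <= ucl

print(lcl, ucl, inside / trials)             # roughly 0.997 or more of the averages fall inside
```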
A random variable, as we defined it, assumes for its values the outcomes of a random experiment and is used to represent populations. Then the values it assumes are the
values in the population. So, a probability distribution is a model to describe a popula-
tion represented by a random variable.
The random variables are of two kinds—discrete and continuous. So, we need two
kinds of distributions to model the two kinds of random variables. These functions
have to satisfy certain properties:
For a discrete random variable with pmf p(x):
p(x) = P(X = x),  Mean: $\mu = \sum_x x\,p(x)$,  Variance: $\sigma^2 = \sum_x (x - \mu_X)^2 p(x)$
For a continuous random variable with pdf f(x):
$\int_a^b f(x)\,dx = P(a \le X \le b)$,  Mean: $\mu = \int_x x f(x)\,dx$,  Variance: $\sigma^2 = \int_x (x - \mu_X)^2 f(x)\,dx$
The normal distribution is the model for a continuous random variable X that rep-
resents a measurement such as length, area, or thickness. Its probability density func-
tion is given by:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad -\infty < x < \infty$$
where the parameters μ and σ² are, respectively, the mean and variance of the random variable.
We discussed these distributions and their applications and saw how to use them
to make predictions about random variables. We will see them used in later chapters as the basis for constructing the control charts (the P-chart, the C-chart, and the X̄- and R-charts) for process control, and for designing sampling plans.
2.28 Suppose Y denotes the difference between two numbers when two dice are
thrown. Find its pmf, mean, and variance.
2.29 An urn contains two white balls and two black balls. Suppose X denotes
the number of black balls in a sample of three balls drawn from it without
replacement. Find the pmf, mean, and variance of X.
2.30 A random variable X has the following pdf. Find P(X < 4), as well as the
mean and variance of X.
$$f(x) = \begin{cases} \dfrac{2(1+x)}{27}, & 2 \le x \le 5 \\ 0, & \text{otherwise} \end{cases}$$
2.31 A random variable has the following pdf. Find the value of a so the function
is a valid pdf.
$$f(x) = \begin{cases} ax^3, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$
Also, find F(x), F(1/2), F(3/4), P(1/2 ≤ X ≤ 3/4), and the mean and variance
of X.
2.32 A plane has four engines. Each engine has a probability of failing during a
flight of 0.3. What is the probability that no more than two engines will fail
during a flight? Assume that the engines are independent.
2.33 A consignment of 200 brake cylinders is known to contain 2% defectives. The
customer uses a sampling rule according to which a sample of eight units will be
taken and inspected. If the sample contains no defectives, the consignment will
be accepted; otherwise, the consignment will be rejected. What is the probability
that this consignment of 200 cylinders will be accepted using the sampling rule?
2.34 A chemical plant experiences an average of three accidents per month. What
is the probability that there will be no more than two accidents next month?
2.35 In a foundry, the castings coming out of a mold line have an average of three
gas holes that can be salvaged. A customer will reject a casting if it has more
than six of such holes. If a lot of 200 castings is sent to the customer in a
month, approximately how many of them would you expect to be rejected?
2.36 Let X ∼ N(10, 25). Find P(X ≤ 15), P(X ≥ 12), and P(9 ≤ X ≤ 20).
2.37 Diameters of bolts are normally distributed with μ = 2.02 in. and σ = 0.02 in.
Suppose the specifications for the diameter are at 2.0 ± 0.06 in. Find:
a. The proportion of diameters below the lower specification.
b. The proportion above the upper specification.
c. The proportion within specification.
2.38 Let X ∼ N(5, 4). Find b such that P(X > b) = 0.20.
2.39 The thickness of gear blanks produced by an automatic lathe is known to be
normally distributed, with a mean of 0.5 in. and a standard deviation of 0.05
in. If 10% of the blanks are rejected for being too thin, where is the lower
specification located?
2.40 Let X ∼ N(5, 9). Find the values of a and b such that P(a ≤ X ≤ b) = 0.80 if
the interval (a, b) is symmetrical about the mean.
2.41 The life of a particular type of battery is normally distributed, with a mean of
600 days and a standard deviation of 49 days. What fraction of these batter-
ies will survive beyond 586 days?
2.42 Automatic fillers are used to fill cans of cooking oil, which have to meet a
minimum specification of 10 oz. To ensure that every can meets this mini-
mum, the company has set a target value for the process average at 11 oz. The
standard deviation of the amount of oil in a can is known to be 0.20 oz.
a. At the process average of 11 oz., what percentage of the cans will contain
less than 10 oz. of cooking oil? Assume that the amount of oil in cans is
normally distributed.
b. Assuming that virtually all values of a normal distribution fall within a dis-
tance of 3.5σ from the mean, find the minimum value to which the process
average can be lowered so that virtually no can will have less than 10 oz.
As mentioned earlier, we often have to draw conclusions about the quality of a popula-
tion from the observed quality in a sample. Speaking in the language of statistics, we
know the quality of a population if we know its distribution along with its parameter
values. If the population in question has the normal distribution, which we can expect
with regard to many populations, then the values of the two parameters, the mean μ and
the variance σ2, are the only two quantities we need to have full knowledge about the
entire population. These parameters exist for any population, but their values are never known
exactly. They can, however, be estimated from observations obtained from a sample.
If we can get a large size sample (≥50) from such a population, then, as we men-
tioned in Section 2.4, the X and S2 obtained from the sample can be used as the esti-
mates of μ and σ2, respectively. If, however, only a small size sample is available, then
the sampling error, or the variability in X and S2, inherited from the variability of the
population, will be too large, thus rendering them useless as estimates. For these situ-
ations, statisticians have devised methods for estimating the population parameters.
These are called “inference procedures.”
In this section, we discuss two such procedures for normally distributed populations:
the method of creating “confidence intervals” for the two parameters of the normal dis-
tribution, and the method of “hypothesis testing” to verify if the hypotheses, or claims,
made about the parameters are valid. In addition, we also discuss the methods for verify-
ing the distribution of a given population, especially when that population is expected to
follow the normal distribution. First, a few definitions relating to estimation are necessary.
2.6.1 Definitions
An estimator is a sample statistic used to estimate a population parameter; the value it takes for a given sample is called an estimate. An estimator is said to be unbiased if its expected value equals the parameter it estimates; for example,
$$E(X) = \mu, \qquad E(\bar{X}) = \mu, \qquad E(S^2) = \sigma^2$$
2.6.2 Confidence Intervals
Suppose we want to estimate the mean μ of a normal population using sample average
X as the estimator. A sample from the population is taken, and the value of X is com-
puted. This observed value of X , denoted by x, is a point estimate for μ. Using this
point estimate, an interval (x − k, x + k) is created such that P ( x − k , ≤ µ ≤ x + k ) = 1 − α ,
where (1 − α) is called the confidence coefficient. The interval ( x − k , x + k ) is called a
(1 − α)100% confidence interval (CI) for μ. The value of k is determined as shown
below using the distribution of the estimator X .
2.6.2.1 CI for the μ of a Normal Population When σ Is Known The estimator is X . On the
assumption that the population has N(μ, σ2), as stated earlier in the chapter, the X has
the distribution N(μ, σ2/n), where n is the sample size. Then,
$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$$
from which the following statement can be made:
$$P\left(-z_{\alpha/2} \le \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le z_{\alpha/2}\right) = 1 - \alpha$$
where zα/2 is a number such that P(Z > z α/2) = α/2 (see Figure 2.15).
The terms on the left-hand side of the above equation can be rearranged to give the
statement:
$$P\left(\bar{X} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\right) = 1 - \alpha$$
(In making the rearrangement, we have used the principle that when both sides of
an inequality are multiplied by a constant or when the same quantity is added or
subtracted from both sides of an inequality, the inequality remains unchanged.) This
equation, indeed, provides the CI for μ with (1 − α) as the confidence coefficient.
Therefore, we can make the following statement: A (1 − α)100% CI for μ of a population that is normally distributed with a known standard deviation σ is given by:
$$\left(\bar{x} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}},\ \ \bar{x} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\right)$$
where zα/2 is the number that cuts off α/2 probability at the upper tail of the standard
normal distribution. In this model, we have assumed that the value of the population
standard deviation σ is known. We will build another model for the situation when σ
is not known using the same procedure.
Example 2.38
A random sample of four bottles of fabric softener was taken from a day’s production
and the amount of turbidity in them was measured as: 12.6, 13.4, 12.8, and 13.2
ppm. Find a 99% CI for the mean turbidity of the population of all bottles filled in
this line on that day. It is known that turbidity measurements are normally distrib-
uted with a standard deviation of σ = 0.3.
Figure 2.15 The standard normal distribution showing z_{α/2} cutting off α/2 probability in each tail.
Solution
$\bar{x} = (12.6 + 13.4 + 12.8 + 13.2)/4 = 13.0$, σ = 0.3, n = 4, and for a 99% CI, $z_{0.005} = 2.575$.
$$99\%\ \text{CI:}\ \ 13.0 \pm 2.575\left(\frac{0.3}{\sqrt{4}}\right) = 13.0 \pm 0.39 = (12.61,\ 13.39)$$
2.6.2.2 Interpretation of CI We should note first that a conclusion has been drawn
about the average of the entire day’s production from observations made on four sam-
ple observations. The CI is to be interpreted as follows: If 99% confidence intervals are
set up as above, from samples of size four 100 times, then on 99 out of the 100 times
the true mean that we are seeking will lie inside those intervals, and once, the true
mean will miss the interval.
2.6.2.3 CI for μ When σ Is Not Known In the above model of CI for μ, we assumed that
the population standard deviation σ was known. Suppose the population standard
deviation is not known; then, σ is replaced with S, the sample standard deviation, in
the statistic $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ to obtain a new statistic $\frac{\bar{X} - \mu}{S/\sqrt{n}}$. This new statistic is known to have the t distribution with (n − 1) degrees of freedom. The CI is created using this new
statistic and its distribution, the t distribution.
The t distribution is a symmetrical distribution with a mean of zero and a shape
that resembles the standard normal distribution except for heavier tails. It has one
parameter, called the "degrees of freedom" (df), and the shape of the distribu-
tion and the thickness of the tails depend on the value of the parameter df. When
the df is 30 or larger, the shape approaches that of the standard normal distribution.
Tables, such as Table A.2 in the Appendix, tabulate percentiles of the t distribution
for various degrees of freedom. The tabled values are tα,ν, where α is the probability the
tabled value cuts off at the upper tail of the t distribution with ν degrees of freedom
(see Figure 2.16). The formula for the CI is obtained following the same procedure as
was outlined for the previous model.
Figure 2.16 The t distribution showing t_{α/2,ν} cutting off α/2 probability in the upper tail.
A (1 − α)100% CI for μ when σ is not known is given by:
$$\left(\bar{x} - t_{\alpha/2,\,n-1}\frac{s}{\sqrt{n}},\ \ \bar{x} + t_{\alpha/2,\,n-1}\frac{s}{\sqrt{n}}\right)$$
where $\bar{x}$ is the sample average and s is the sample standard deviation from a sample of
size n, and tα/2,n−1 is such that P(tn−1 > tα/2,n−1) = α/2. That is, tα/2,n−1 is the number that
cuts off α/2 probability at the upper tail of the t distribution with df = n − 1.
Example 2.39
Find a 99% CI for the mean turbidity using the same four observations as in Example 2.38, assuming now that the population standard deviation is not known.
Solution
$\bar{x} = 13.0$ and s = 0.366 (computed from the sample); $t_{0.005,3} = 5.841$ (from the t table).
$$99\%\ \text{confidence interval:}\ \ 13.0 \pm 5.841\left(\frac{0.366}{\sqrt{4}}\right) = [11.93,\ 14.06]$$
Note that the CI in this example is wider compared to that in the σ-known case.
The wider a CI becomes, the less precise it is in estimating the parameter. In this
case, the loss of precision is due to a lack of information on the population standard
deviation.
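The two confidence intervals of Examples 2.38 and 2.39 can be reproduced with a few lines of Python; the multipliers 2.575 (z0.005) and 5.841 (t0.005,3) are the table values used in the text:

```python
# Sketch: the 99% confidence intervals of Examples 2.38 and 2.39. The multipliers
# 2.575 (z_0.005) and 5.841 (t_0.005,3) are the table values quoted in the text.
import math
import statistics

data = [12.6, 13.4, 12.8, 13.2]
n = len(data)
xbar = statistics.mean(data)                 # 13.0

# sigma known (sigma = 0.3):
half_z = 2.575 * 0.3 / math.sqrt(n)
print(round(xbar - half_z, 2), round(xbar + half_z, 2))   # about 12.61 and 13.39

# sigma unknown (use the sample s and the t distribution, df = 3):
s = statistics.stdev(data)                   # about 0.365
half_t = 5.841 * s / math.sqrt(n)
print(round(xbar - half_t, 2), round(xbar + half_t, 2))   # about 11.93 and 14.07
```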
2.6.2.4 CI for σ2 of a Normal Population When setting CI for the variance of the popu-
lation σ², the sample variance $S^2 = \sum(X_i - \bar{X})^2/(n-1)$ is used as the estimator. The fact that the statistic $(n-1)S^2/\sigma^2$ has the χ² (chi-squared) distribution with (n − 1) degrees of freedom is made use of in obtaining the confidence interval for σ².
The χ2 distribution is a positive-valued distribution with a single parameter, df. The
shape of a χ2 distribution depends on the value of the parameter df, and percentiles
of the χ2 distribution for various values of df are available in tables such as Table A.3
in the Appendix. The tabled values are $\chi^2_{\alpha,\nu}$, where α is the probability the tabled
value cuts off at the upper tail of the χ2 distribution with ν degrees of freedom (see
Figure 2.17).
A (1 − α) 100% CI for the variance σ2 of a normal population is given by:
$$\left(\frac{(n-1)s^2}{\chi^2_{\alpha/2,\,n-1}},\ \ \frac{(n-1)s^2}{\chi^2_{1-\alpha/2,\,n-1}}\right)$$
Figure 2.17 The χ² distribution showing χ²_{α/2,ν} cutting off α/2 probability in the upper tail.
where s² is the sample variance, n the sample size, and $\chi^2_{\alpha,\,n-1}$ is such that $P\left(\chi^2_{n-1} > \chi^2_{\alpha,\,n-1}\right) = \alpha$.
Example 2.40
Set up a 99% CI for the standard deviation σ of the turbidity in bottles of cloth soft-
ener if a sample of four bottles gave the following measurements: 12.6, 13.4, 12.8,
and 13.2 ppm, respectively. Assume normality for turbidity in bottles.
Solution
First, we find the CI for σ2, and then obtain the CI for σ by taking the square root
of the two limits for σ2:
α/2 = 0.005, (n − 1) = 3, s² = 0.133 (calculated from the sample)
We have to calculate the limits:
$$\left(\frac{3(0.133)}{\chi^2_{0.005,3}},\ \ \frac{3(0.133)}{\chi^2_{0.995,3}}\right)$$
From the χ² tables, $\chi^2_{0.005,3} = 12.838$ and $\chi^2_{0.995,3} = 0.0717$, so the 99% CI for σ² is
$$\left(\frac{3(0.133)}{12.838},\ \ \frac{3(0.133)}{0.0717}\right) = (0.031,\ 5.57)$$
Taking square roots, the 99% CI for σ is (0.18, 2.36).
We see that the smaller the confidence coefficient, the narrower the CI becomes.
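A corresponding sketch for Example 2.40, with the χ² percentiles 12.838 and 0.0717 taken from the table as in the text:

```python
# Sketch: the 99% CI for sigma of Example 2.40. The chi-squared percentiles
# 12.838 and 0.0717 (df = 3) are the table values quoted in the text.
import math
import statistics

data = [12.6, 13.4, 12.8, 13.2]
n = len(data)
s2 = statistics.variance(data)               # about 0.133

chi2_upper, chi2_lower = 12.838, 0.0717      # chi2_{0.005,3} and chi2_{0.995,3}
var_lo = (n - 1) * s2 / chi2_upper
var_hi = (n - 1) * s2 / chi2_lower
print(round(var_lo, 3), round(var_hi, 2))                         # CI for sigma^2
print(round(math.sqrt(var_lo), 2), round(math.sqrt(var_hi), 2))   # CI for sigma: about (0.18, 2.36)
```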
We have so far seen some examples of using confidence intervals to estimate popu-
lation parameters. The discussion was restricted to models for estimating the mean
and variance of a normal population—the two cases that are encountered most com-
monly in quality engineering work. Other models for estimating parameters of other
distributions follow the same principles used in creating the models above. Models
exist to set up confidence intervals for the fraction defectives in a population, using
the fraction defectives obtained from a sample. Other models exist for the CI for the
difference between two population means when the standard deviations are known,
as well as when standard deviations are not known. These models are used to compare
the means of two populations. When such models are needed, the reader should refer
to the books on probability and statistics listed as references at the end of this chapter.
2.6.3 Hypothesis Testing
Two Types of Errors A statistical test could result in any one of the four possible events
shown in Figure 2.18. Of these four, two of the events lead to errors in conclusion.
These errors are designated as Type I and Type II errors, as shown in Figure 2.18, and
can be summarized as follows:
A Type I error occurs if H0 is declared not true when, in reality, it is true.
A Type II error occurs if H0 is declared true when, in reality, it is not true.
The probability of a Type I error is denoted by α and is called the level of significance.
The probability of a Type II error is denoted by β, and (1 − β) is called the power of the test.
When a test is designed using a certain test statistic, the probability of these errors
occurring can be calculated from the knowledge of the distribution of the test statistic.
More importantly, the test can be designed such that the probability of the Type I
error does not exceed a specified value.
Designing a test involves setting up a critical region (CR) in the distribution of the
test statistic. The CR is the region in the distribution of the test statistic, such that if the
observed value of the test statistic (computed from a sample) falls in this region, it will
lead to rejection of H0 in favor of H1. Identification of the CR is the major part of test
design and we will see below how the CR is identified for different hypotheses scenarios.
The steps involved in hypothesis testing can be summarized as follows:
1. Set up H0 and H1.
2. Choose a level of significance.
3. Choose an appropriate test statistic.
4. Identify the CR.
5. Select a sample from the population and compute the observed value of the
test statistic.
6. If the observed value falls in the CR, reject H0 in favor of H1; otherwise, do
not reject H0.
7. Interpret the results.
Examples are provided below to illustrate the method of using hypothesis testing to
draw inference about population parameters.
2.6.3.1 Test Concerning the Mean µ of a Normal Population When σ Is Known The hypoth-
eses are:
H0: μ = μ0 (hypothesize that the mean equals a number μ0)
H1: μ > μ0 (if the mean is not equal to μ0, it must be greater than μ0)
It should be pointed out that in the above set up, although H0 has only the equality
sign (=), the inequality sign (≤) opposite to that in the H1 is implied because only then
will H0 and H1 be complement to each other.
The test statistic is:
$$\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0, 1)$$
Note that the test statistic chosen relates the sample average X , the estimator, to the
population mean μ through a known distribution, the Z distribution [N(0, 1)].
The alternate hypothesis determines where the CR is located. In this case, the CR is
chosen as the set of all observed values of the test statistic that are greater than zα, where zα
is such that P(Z > zα ) = α. The following is the logic used in choosing this critical region.
If H0 is true, that is, μ = μ0, then the observed value of the test statistic will be a
value from the Z distribution, most probably a value near its mean, zero. If, on the
other hand, H1 is true, then the observed value will be a large value falling far to the
right of zero, on the higher side, as shown in Figure 2.19. We need to draw a line
somewhere to determine how large is too large in order to reject H0 in favor of H1.
Wherever the line is drawn, there will be a probability that H0 is rejected when, it is
true, because the random variable Z can assume very large values (or very small values)
even if its mean is zero. We draw the line at zα so that this probability does not exceed
α, a small number. The zα is called the critical value of the observed statistic, and the
values in the distribution beyond zα constitute the critical region.
The above is an explanation of how the critical region is chosen to determine
whether or not to reject the null hypothesis H0 while limiting the probability of Type I
error to a small value α. The example below shows how hypothesis testing is used for
making decisions in the real world.
Example 2.41
A supplier of rope claims that their new product has an average strength greater
than 10 kg. A sample of 16 rope pieces gave an average of 10.2 kg. If the standard
deviation of the strength is known to be 0.5 kg, test the hypothesis that μ = 10 vs.
μ > 10. Use α = 0.01.
Solution
H 0: µ = 10
H 1: µ > 10
Figure 2.19 The critical region: values of the test statistic beyond the critical value zα.
Figure 2.20 Critical region for testing claim about cotton rope strength.
The hypotheses have been set up in such a way that the claim is included in the alternate hypothesis.
The test statistic is:
$$\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0, 1)$$
The critical region consists of observed values greater than $z_{0.01} = 2.326$. The observed value from the sample is
$$z_{obs} = \frac{10.2 - 10}{0.5/\sqrt{16}} = 1.6$$
(see Figure 2.20). Since zobs = 1.6 does not fall in the critical region, H0 is not rejected; the sample does not provide sufficient evidence to support the claim that the mean strength is greater than 10 kg.
2.6.3.2 Why Place the Claim Made about a Parameter in H1? We made a statement ear-
lier that it is preferable to include the statement we want to verify (e.g., the average
strength of nylon ropes >10 kg) in the alternate hypothesis H1. This is for the reason
that the conclusion drawn as a result of rejecting the null hypothesis is a more depend-
able conclusion than that derived from not rejecting the null. We explain this below.
The hypotheses for the above example, Example 2.41, could have been set up in two
possible ways, as in Sets 1 and 2 below:
Set 1 Set 2
H 0: µ ≤ 10 H 0: µ ≥ 10
H 1: µ > 10 H 1: µ < 10
[Figure: critical regions for Set 1 (beyond zα) and Set 2 (beyond −zα). Under Set 1, the probability of accepting the claim when it is not valid is the Type I error; under Set 2, it is the Type II error.]
There are three possible ways of setting up the alternate hypothesis, and the corresponding critical regions are shown in Figure 2.22. The particular set of hypotheses appropriate for a given situ-
ation depends on where the experimenter believes the mean will be if it is not at the
hypothesized value.
The test statistic will be the same for all three situations, and the location of the
CR in each case will change according to the alternate hypothesis, as shown in
Figure 2.22.
In Case 1, the experimenter believes that if the true mean is not equal to the
hypothesized value μ0, it should be smaller than that value. The choice of H1: μ < μ0
reflects this belief. The H0 will be rejected if the observed value of the test statistic is
too small, and the CR determines how small is too small.
In Case 2, the experimenter has reasons to believe that if the true mean is not equal
to μ0, it should be larger than μ0. The choice of H1: μ > μ0 reflects this belief. The H0
will be rejected if the observed value of the test statistic is too large, and the CR deter-
mines how large a value of test statistic is too large.
In both cases, the risk of rejecting H0 when it is true, α, occurs on the side of the
distribution where the mean is expected to fall, if not at the hypothesized value.
In Case 3, the experimenter has no idea which side of the μ0 the true mean will
be if it is not at μ0. The H1: μ ≠ μ0 reflects this, and the CR determines how far away
on either side from zero the value of the test statistic should be before H0 is rejected.
Here, the risk of α is divided equally on either side of the distribution, because the
mean could fall on either side if not at the hypothesized value. Cases 1 and 2 are called
one-tailed tests, and Case 3 is called a two-tailed test.
Example 2.41 above is an example under Case 2. Examples of Cases 1 and 3 will
be found in the next section.
2.6.3.4 Test Concerning the Mean μ of a Normal Population When σ Is Not Known When σ
is not known, we cannot use it in the test statistic and use the sample standard devia-
tion S in its place. The new statistic with S replacing σ is known to have the t distribu-
tion with (n − 1) degrees of freedom. That is,
$$\frac{\bar{X} - \mu_0}{S/\sqrt{n}} \sim t_{n-1}$$
Figure 2.23 Critical regions for tests for μ when σ is not known.
There are, again, three possible alternate hypotheses. The critical regions correspond-
ing to the three alternate hypotheses are shown in Figure 2.23. An example of the
situation when σ is not known follows.
Example 2.42
The amount of ash in a box of sugar should be less than 2 g. according to a pro-
ducer’s claim. The lab analysis of a sample of five boxes gave the following results:
1.80, 1.92, 1.84, 2.02, and 1.76 g. Does the sample show evidence that the mean
ash content in the boxes is less than 2 g., as claimed by the producer? Use α = 0.05.
Solution
H0: μ = 2   H1: μ < 2 (the claim is in the alternate hypothesis)
x̄ = 1.868, s = 0.104 (computed from the sample observations)
The test statistic is: (X̄ − 2)/(S/√5) ∼ t4
The CR is made up of values of t4 < −t 0.05,4 = −2.132 (from the t table, Table A.2).
That is, reject H0 if the observed value of the test statistic is less than −2.132 (see
Figure 2.24).
The observed value of t4 from the sample is: tobs = (1.868 − 2)/(0.104/√5) = −2.838
Thus, tobs is in CR; therefore, reject H0. The mean ash content is less than 2 g; the
producer’s claim is valid.
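The same conclusion can be checked with statistical software. The sketch below is a Python illustration only (it assumes a recent SciPy version that supports one-sided alternatives in ttest_1samp):

```python
import numpy as np
from scipy import stats

# Example 2.42: ash content, H0: mu = 2 vs. H1: mu < 2, alpha = 0.05
ash = np.array([1.80, 1.92, 1.84, 2.02, 1.76])

t_obs, p_value = stats.ttest_1samp(ash, popmean=2, alternative='less')
t_crit = stats.t.ppf(0.05, df=len(ash) - 1)   # lower-tail critical value

print(f"t_obs = {t_obs:.3f}, t_crit = {t_crit:.3f}, P-value = {p_value:.4f}")
# t_obs is about -2.85 (-2.838 in the text, where s was rounded to 0.104);
# it falls below t_crit = -2.132, so H0 is rejected.
```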
Figure 2.24 Critical region for the test of the mean ash content (critical value −t0.05,4 = −2.132; tobs = −2.838).
Figure 2.25 Critical regions for hypotheses about difference of two means.
2.6.3.5 Test for Difference of Two Means When σs Are Known This test model is useful when two populations have to be compared with regard to their mean values. For
example, the mean life of bulbs from one manufacturer may have to be compared with
the mean life from another manufacturer.
Test statistic: (X̄1 − X̄2)/√(σ1²/n1 + σ2²/n2) ∼ Z
H0: μ1 − μ2 = 0 (no difference between the two population means)
Again, there are three possible alternate hypotheses, and the CR corresponding to
the three cases are shown in Figure 2.25. The following example illustrates the use of
this model.
Example 2.43
Solution
H 0: µ1 − µ 2 = 0
H 1: µ1 − µ 2 ≠ 0
where μ1 and μ 2 are, respectively, the means of male and female earnings.
The two-sided H1 is chosen because the experimenter has no reason to believe the
difference in average is positive or negative.
Test statistic: (X̄1 − X̄2)/√(σ1²/n1 + σ2²/n2)
Zobs = 400/√(1200²/10 + 1800²/8) = 400/740 = 0.541
Zobs is not in the critical region (see Figure 2.26); hence, do not reject H 0.
Evidence from samples does not show a significant difference between the salaries
of male and female workers.
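The arithmetic of this two-sample test can be reproduced with a short script. The sketch below uses only the values that appear in the worked solution (difference of sample means 400, standard deviations 1200 and 1800, sample sizes 10 and 8, α = 0.01); it is an illustration, not a restatement of the full problem:

```python
from math import sqrt
from scipy.stats import norm

diff, sigma1, n1, sigma2, n2, alpha = 400, 1200, 10, 1800, 8, 0.01

z_obs = diff / sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z_crit = norm.ppf(1 - alpha / 2)             # two-tailed critical value

print(f"z_obs = {z_obs:.3f}, critical values = ±{z_crit:.3f}")
# z_obs is about 0.54, well inside (-2.576, 2.576), so H0 is not rejected
```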
We have seen only some basic models in hypothesis testing to get an idea of how
this statistical procedure works and where it can be used. There are models available
that can be used to test hypotheses regarding the difference of two means when popu-
lation standard deviations are not known and sample sizes are small. We then use a
test statistic that has a t distribution. The model to test the hypothesis concerning
population variance uses a test statistic that has a χ2 distribution. The model to test the
hypothesis about the ratio of two variances uses a test statistic that has an F distribu-
tion. All these models assume that the populations concerned are normal. There are
testing procedures that do not require an assumption of normality for the population;
these are known as “distribution-free” or “nonparametric” tests. Any book in statistics
will give further details on these methods.
Figure 2.26 Critical region for testing the equality of the salaries of male and female workers (critical values ±2.575; zobs = 0.541).
Tests exist for verifying the type of distribution a population follows based on the
sample data drawn from it. These tests, known as goodness-of-fit tests, examine how well a set of data fits a hypothesized distribution. They can be used to verify any
distribution—such as exponential, normal, or Weibull. We will, however, describe
below the methods as they are used to verify the normality of populations.
A simple, informal method to verify the normality of a population is to take a large
sample (n ≥ 50) from the population and draw a histogram from the sample data. If the
histogram follows the symmetrical bell shape, then it is a verification that the population
in question follows the normal distribution. This, however, is a very subjective, approxi-
mate method, and it is not recommended when large samples are not available. Two
other, more formal methods, one graphical and the other analytical, are described below.
2.6.4.1 Use of the Normal Probability Plot Normal probability plotting involves plot-
ting the cumulative percentage distribution of the data on specially designed normal
probability paper (NPP). The NPP is designed such that if the data came from a
normal population, the cumulative distribution of the data will plot as a straight line.
Conversely, if the cumulative distribution of the data plots as a straight line on an
NPP, then we conclude that the data set comes from a normal population. The proce-
dure is described below using an example.
Example 2.44
Table 2.3 contains information taken from Table 2.2 on the fill-weights of deodor-
ant in cans. Figure 2.27 shows the cumulative percentage distribution from Table
2.3 plotted on a commercially available NPP (from www.Weibull.com). The cumu-
lative percentages have been plotted against the upper limit of each of the cells.
We see that the plotted points all seem to fall on a straight line, indicating that the
data come from a normal population. If a line is drawn approximately through “all”
the plotted points, the resulting line represents the cumulative distribution of the
Figure 2.27 Example of a normal probability plot using commercially available normal probability paper (from www.Weibull.com). The fitted line gives X16 = 232, X50 = 264, and X84 = 298.
population of fill-weights. From this line, the mean μ and the standard deviation σ
of the population can be estimated as follows:
1. Using the notation Xp for the p-th percentile of the population, the esti-
mate for the mean µ̂ = X 50. This is based on the fact that the mean is the
50th percentile for a normal distribution.
2. The estimate for standard deviation is σˆ = ( X 84 − X 16 )/ 2. This formula
for the standard deviation comes from the fact that there is a distance of
(approximately) 2σ between X16 and X84 in any normal distribution. The
paper used already has markings for X16 and X84.
μ̂ = 264   σ̂ = (298 − 232)/2 = 33
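When normal probability paper is not at hand, the same plot can be produced with software. The sketch below is a minimal Python illustration using SciPy and matplotlib with simulated fill-weights (the actual Table 2.2 data are not reproduced here); any column of measurements can be substituted:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Simulated fill-weights standing in for the Table 2.2 data
weights = np.random.default_rng(1).normal(loc=264, scale=32, size=100)

# probplot orders the data, pairs it with normal quantiles, and fits a line;
# points falling near the line indicate that a normal model is reasonable.
(osm, osr), (slope, intercept, r) = stats.probplot(weights, dist="norm", plot=plt)
print(f"estimated sigma = {slope:.1f}, estimated mu = {intercept:.1f}, r = {r:.3f}")
plt.show()
```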
Figure 2.28 Normal probability plot produced by Minitab software for data in Table 2.1 (Minitab estimates: mean = 264.05, StDev = 31.89).
Table 2.5 Frequency Distribution, Cumulative Frequency Distribution, and Calculation of χ2 Statistic
CELL  CELL LIMITS  MIDPOINT  FREQUENCY (ai)  % FREQUENCY  % CUMULATIVE FREQUENCY  % EXPECTED FREQUENCY (ei)  (ei − ai)²/ei
1   161–180  170.5   1    1    1    0.372   (cells 1–3 grouped: ai = 9, ei = 8.32, contribution 0.056)
2   181–200  190.5   2    2    3    1.85
3   201–220  210.5   6    6    9    6.17
4   221–240  230.5   10   10   19   14.21   1.247
5   241–260  250.5   18   18   37   22.35   0.846
6   261–280  270.5   35   35   72   24.12   4.91
7   281–300  290.5   17   17   89   17.82   0.038
8   301–320  310.5   6    6    95   9.02    (cells 8–10 grouped: ai = 11, ei = 12.9, contribution 0.278)
9   321–340  330.5   4    4    99   3.13
10  341–360  350.5   1    1    100  0.745
Total                100  100        100    7.375
[Note: We have used in the above calculation the half-step correction to properly
distribute the area under the normal curve lying between 240 and 241, and again
between 260 and 261. Such correction improves the probability calculations.]
Some of the cell frequencies had to be grouped to meet the requirement of the
χ2 test that the expected frequency in each cell is at least five. The total of the last
column—7.375, which is the observed value of the χ2 statistic—is compared with the
critical value χ²0.05,3 = 7.815. (The df for the χ2 statistic is three because there are p = 6
cells and q = 2 estimated parameters.) Because the observed value is not greater than
the critical value, H0 is not rejected, and we conclude that the population has the
hypothesized normal distribution.
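The χ2 computation can also be checked with a few lines of code. The sketch below, written in Python with SciPy, uses the grouped observed and expected frequencies from Table 2.5:

```python
import numpy as np
from scipy import stats

# Grouped cells from Table 2.5: cells 1-3 combined, cells 4-7 kept, cells 8-10 combined
observed = np.array([9, 10, 18, 35, 17, 11])
expected = np.array([8.32, 14.21, 22.35, 24.12, 17.82, 12.9])

chi2_obs = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 2 - 1                    # p cells minus q = 2 estimated parameters minus 1
chi2_crit = stats.chi2.ppf(0.95, df)          # critical value at alpha = 0.05

print(f"chi2_obs = {chi2_obs:.3f}, chi2_crit = {chi2_crit:.3f} with {df} df")
# chi2_obs (about 7.37) does not exceed the critical value, so normality is not rejected
```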
2.6.5 The P-Value
If a computer program is used in any of the procedures for testing hypothesis, in addi-
tion to calculating the observed value of the test statistic, the computer program also
gives the P-value corresponding to the observed value. What is the P-value? How is
it to be interpreted?
It is easy to explain the meaning of the P-value using the one-tailed test as an
example. Referring to Figure 2.29a, the P-value is the probability in the distribution
of the test statistic that lies beyond the observed value, away from values favoring H0.
We can see from Figure 2.29a that if the P-value is smaller than the chosen α, it means
that the observed value of the test statistic lies in the critical region, and so H0 should
be rejected. If the P-value is larger than α, the observed value is not in the critical
region and the H0 is not to be rejected.
If the test is a two-tailed test, the probability in the distribution of the test statistic
lying beyond the observed value should be compared with α/2, as shown in Figure
2.29b. Hence, the computer program calculates the P-value as two times the prob-
ability in the distribution of the test statistic lying outside the observed value. The
P-value is then directly compared with α. So, the rules to use with the P-value are
as follows:
If P-value ≤ α, reject H0.
If P-value > α, do not reject H0.
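As an illustration of these rules, the P-values for the earlier examples can be computed directly from the distribution of the test statistic. The short Python sketch below is not part of the text; it simply shows the tail-area calculations:

```python
from scipy.stats import norm

# One-tailed test (Example 2.41): upper-tail area beyond the observed z
p_one_tailed = 1 - norm.cdf(1.6)          # about 0.055 > alpha = 0.01, do not reject H0

# Two-tailed test (Example 2.43): double the tail area beyond |z_obs|
p_two_tailed = 2 * (1 - norm.cdf(0.541))  # about 0.59 > alpha = 0.01, do not reject H0

print(f"{p_one_tailed:.4f}  {p_two_tailed:.4f}")
```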
The inference methods that enable us to draw conclusions about population parameters from sample observations help us "estimate" the population quality from sample quality.
Two methods were discussed: The method of confidence interval (CI) and the
method of hypothesis testing. In the case of CI, we create an interval such that the
probability the parameter that we want to estimate lies in that interval is (1 - α), where
α is a small number. We use the distribution of the estimator to create such an interval.
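For instance, a (1 − α) CI for a mean with known σ follows directly from the normal distribution of the sample mean. The sketch below is a Python illustration with made-up numbers (not taken from the exercises that follow):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical values: sample mean 50, known sigma 4, n = 25, 95% confidence
x_bar, sigma, n, conf = 50.0, 4.0, 25, 0.95

z = norm.ppf(1 - (1 - conf) / 2)        # z_{alpha/2}
half_width = z * sigma / sqrt(n)

print(f"{conf:.0%} CI: ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
# approximately (48.43, 51.57)
```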
In the case of hypothesis testing, we verify if a claim made about a population is
valid using information from the sample. We discussed the method of selecting two
hypotheses H0 and H1, called the null hypothesis and alternate hypothesis, respectively, the choice of a test statistic, and the choice of the critical region in the distribution
of the test statistic to facilitate rejecting or not rejecting the null hypothesis. We also
saw how to interpret the results of the test in terms of the physical aspects of the
experiment.
2.6.6.1 Confidence Intervals
2.43 A random sample of 12 specimens of core sand in a foundry has a mean
tensile strength of X = 180 psi. Construct a 99% CI on the mean tensile
strength of the core sand under study if the standard deviation of the tensile
strength σ is 15 psi. Assume the strength is normally distributed.
2.44 A random sample of nine electric bulbs was chosen from a production pro-
cess and tested until failure. The lives in hours were 1366, 1372, 1430, 1246,
1449, 1268, 1408, 1468, and 1502, respectively. Give a 95% CI for the aver-
age life of the bulbs produced in the process if it is known that the life is
normally distributed with a standard deviation of 100 hours.
2.45 A random sample of 20 cans of hair spray was found to have an average
of X = 1.15 oz. of concentrate. The standard deviation of the concentrate
obtained from the sample was s = 0.25 oz. Find a 99% CI on the mean quan-
tity of concentrate in the cans.
2.46 A random sample of nine electric bulbs was chosen from a production pro-
cess and tested until failure. The lives in hours were 1366, 1372, 1430, 1246,
1449, 1268, 1408, 1468, and 1502, respectively. Give a 90% CI for the mean
life of the bulbs produced in the process if it is known that the life is normally
distributed but the population standard deviation is not known.
2.47 A random sample of 15 bronze rods gave the following measurements of
diameter in millimeters: 6.24, 6.23, 6.20, 6.21, 6.20, 6.28, 6.23, 6.26, 6.24,
6.25, 6.19, 6.25, 6.26, 6.23, and 6.24. Assuming that the shaft diameter is
normally distributed, construct a 99% CI for the mean and variance of the
shaft diameter. What is the 95% CI for the standard deviation?
2.48 The following data come from strength of steel specimens in thousands of
psi: 77.8, 78.7, 58.6, 43.7, 67.6, 46.9, 82.1, 90.8, 67.9, 76.9, 56.6, 74.8, 72.0,
82.9, and 81.9. Find the 95% CI for the mean and standard deviation of
strength of steel in this population if the population is assumed to be normal.
2.6.6.2 Hypothesis Testing
2.49 The yield of a chemical process is being studied. The variance of the yield is
known from previous experience as σ2 = 5 (percentage2). The past six days of
plant operation have resulted in the following yields (in percentages): 92.5,
91.6, 88.75, 90.8, 89.95, and 91.3. Is there reason to believe that the yield is
less than 90%? Use α = 0.05. Assume that the yield is normal.
2.50 The shelf life of a battery is of interest to a manufacturer. A random sample
of 10 batteries gave the following shelf lives in days: 108, 122, 110, 138, 124,
163, 124, 159, 106, and 134. Is there evidence that the mean shelf life is
greater than 125 days? Assume that the shelf life is normal. Use α = 0.05.
2.51 Two machines are used for filling bottles with a type of liquid soap. The filled
volumes can be assumed to be normal, with standard deviations of σ1 = 0.015
and σ2 = 0.018. A random sample of 10 bottles is taken from the output of
each machine, and the following volume checks in milliliters were obtained.
Verify if both machines are filling equal volumes in the bottles. Use α = 0.10.
MACHINE 1 MACHINE 2
16.03 16.01 16.02 16.08
16.04 15.96 15.97 16.04
16.05 15.98 15.96 16.02
16.05 16.02 16.01 16.01
16.02 15.99 15.99 16.11
2.52 A survey of alumni of two engineering programs in a college gave the fol-
lowing data on their salaries after five years. Is there evidence to show that
the average salaries of the graduates from the two programs are different?
Assume that both the salaries are normally distributed and have a standard
deviation of $12,000. Use α = 0.10.
IE GRADUATES ME GRADUATES
66,500 64,600
58,200 58,900
80,400 76,900
48,500 45,800
64,500 70,100
68,300 69,700
77,300 77,900
84,900 52,300
2.6.6.3 Goodness-of-Fit Test
2.53 Using the χ2 goodness-of-fit test, test the data in Exercise 2.1 for normal
distribution. Compare the results from a test for normality of the same data
using computer software.
2.54 Using the χ2 goodness-of-fit test, test the “Before Improvement” data in
Exercise 2.2 to verify if the data come from a normal distribution.
2.7 Mini-Projects
Mini-Project 2.1
Mini-Project 2.2
The data below represent the tensile strength of core sand (sand mixed with binders and
baked, from which foundry cores are made) obtained from two different testing machines
in a foundry. Each test yields three readings from three specimens made from the same
sand. The foundry is currently using the Detroit (D) tester and is planning to purchase the
Thwing-Albert (TA) tester, because the D testers are getting old and cannot be repaired
or replaced. The foundry wants to know if the new TA tester will give the “same” results
as the old tester because, many of the current specifications for the many sands they use
are based on the results of the old D tester. The foundry is concerned if the specifica-
tions would hold good for the new tester or they have to be redone. In the absence of a
definitive answer, there is debate in the foundry as to whether to continue using the old
specification or redo all specifications with the new tester requiring enormous expense in
time and resources. An engineering statistician was brought into the debate.
An experiment was conducted to obtain test results over five types of sand, ran-
domly selected from many sand types, using both testers. The results were recorded as
shown in the table below. It is necessary to determine if the two testers give “equal”
measurements for all sand types in all ranges of strength.
Analyze the data and provide suitable answers to the company. Make any assump-
tions you need, and state them clearly. You can also verify the assumptions you
make. Remember, a good analysis includes use of both graphical and analytical tools.
Remember also that before we conclude that any two populations are equal or not
equal we have to make a significance test about their parameters. Use only the meth-
ods we have discussed in Chapter 2.
D TESTER TA TESTER
SAND
BATCH TRIAL SPECIMEN 1 SPECIMEN 2 SPECIMEN 3 SPECIMEN 1 SPECIMEN 2 SPECIMEN 3
1 1 340 360 375 359 351 345
2 350 335 345 348 329 348
2 1 225 210 250 254 255 247
2 255 245 230 244 253 202
3 1 220 250 245 243 251 235
2 240 225 230 241 251 204
4 1 230 230 255 246 247 237
2 220 235 245 246 248 241
5 1 430 445 450 441 445 441
2 400 435 435 445 442 443
Mini-Project 2.3
Given below in the tables are monthly statistics on maximum and minimum tempera-
tures in Peoria, Illinois, gathered from historical records maintained in the Weather
Underground website (www.wunderground.com). The data represent the monthly
maximum/minimum of the daily maximums/minimums. For example, in January
1950, the maximum among the daily maximums was 68°F and the minimum among
the daily maximums was 46.
Based on these data, is there evidence that global warming is affecting the weather
in this Midwestern city in the United States? You can pick any stream(s) of data
from the tables that you think would help in answering the question. As always,
a good analysis would include the use of both graphical and analytical methods.
Any conclusion of the existence or nonexistence of difference between populations
should come out of a significance test, which will take into account both the aver-
age and variability in the sample data. Use only the methods you have learned from
Chapter 2.
1950 JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
Max Max 68 46 73 80 90 93 91 88 84 86 80 53
Min 46 36 46 55 68 75 72 68 68 64 51 32
Min Max 16 10 19 36 54 60 66 66 59 55 7 12
Min 3 −9 3 21 35 45 52 44 39 34 0 −5
1960 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 59 45 70 82 82 88 91 91 93 80 71 64
Min 46 33 48 62 62 70 71 75 72 59 53 46
Min Max 16 12 10 36 41 66 73 73 60 44 30 5
Min 0 1 −9 21 32 52 52 54 46 21 16 −11
1970 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 57 55 60 86 87 93 97 90 90 80 57 69
Min 35 32 51 68 66 73 75 71 73 62 45 42
Min Max 0 3 30 37 53 60 69 66 61 46 19 24
Min −18 −5 17 25 35 52 51 55 37 30 7 7
1979 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 34 39 70 75 87 91 89 91 88 86 71 59
Min 25 28 57 59 66 70 70 75 68 66 46 45
Min Max 2 0 25 37 57 69 72 64 66 41 27 18
Min −22 −17 6 19 37 48 51 45 41 25 16 −2
1990 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 62 63 80 84 79 90 93 91 93 82 73 59
Min 44 36 60 63 59 73 75 73 72 61 55 35
Min Max 28 19 30 39 50 66 64 71 55 44 39 7
Min 14 6 19 21 37 48 55 51 37 28 19 −2
2000 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 62 54 ** ** 79 90 93 91 93 82 73 59
Min 39 43 ** ** 59 73 75 73 72 61 55 35
Min Max 17 28 ** ** 50 66 64 71 55 44 39 7
Min −2 10 ** ** 37 48 55 51 37 28 19 −2
2010 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Max Max 48 41 79 84 91 91 92 94 88 85 73 46
Min 35 33 51 64 67 75 77 76 68 57 48 25
Min Max 8 20 36 51 50 75 79 75 62 45 36 14
Min −11 0 22 32 35 57 58 54 43 27 20 −1
Mini-Project 2.4
The quality of air in any place in the U.S. is measured by Air Quality Index (AQI),
which indicates how polluted the air is in that place. It is computed from the measure-
ments of five major pollutants: ground-level ozone, particulate matter, carbon mon-
oxide, sulfur dioxide, and nitrogen oxide. Based on the amount of each pollutant in
the air, the AQI assigns a numerical value to air quality, and it can be interpreted as
follows (according to the U.S. Environmental Protection Agency or EPA): 0 to 50
(good); 51 to 100 (moderate); 101 to 150 (unhealthy for sensitive groups); 151 to 200
(unhealthy); 201 to 300 (very unhealthy); 301 to 500 (hazardous). An AQI of 100 is
considered "acceptable" by the EPA even if it poses a moderate health concern for a very
small number of people. When it exceeds 100, the air could cause harm to people’s
health in different degrees, depending on their sensitivity.
Given below are data on air quality of some selected cities in the U.S. recorded
between 1990 and 2000. The data represent the number of days the AQI exceeded 100
in a given city in a given year. Obviously, there is some variability in the readings from
year to year. Is it possible to draw meaningful conclusions from these data? Making
sense out of measurements is the common challenge to the engineering statistician.
You have to propose one or more meaningful hypotheses such as: cities on the
East Coast have cleaner air than those on the West Coast, or cities near the northern
border are more livable than those near the southern border, and so on. Then pick
suitable data from the table below and analyze them to verify if your hypotheses are
valid. State your conclusions in a way that would be meaningful to a couple who want
to retire in an air-friendly city in the U.S. They would like to have some alternatives
to choose from. Use a map of the U.S. if you are not familiar with the locations of the
cities.
CITIES 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000
Atlanta, Georgia 42 23 20 36 15 35 25 31 50 61 26
Baltimore, Maryland 29 50 23 48 41 36 28 30 51 40 16
Boston, Massachusetts 7 13 9 6 10 8 2 8 7 5 1
Chicago, Illinois 4 25 6 3 8 23 7 9 10 14 0
Cleveland, Ohio 10 23 11 17 24 27 18 13 22 21 5
Dallas, Texas 24 2 12 14 27 36 12 20 28 23 20
Denver, Colorado 9 6 11 6 2 3 0 0 7 3 2
Detroit, Michigan 11 27 7 5 11 14 13 11 17 15 3
El Paso, Texas 19 7 10 7 11 8 7 4 6 6 3
Houston, Texas 51 36 32 27 38 65 26 47 38 50 42
Kansas City, Missouri 2 11 1 4 10 23 10 17 15 5 10
Los Angeles, California 173 168 175 134 139 113 94 60 56 27 48
Miami, Florida 1 1 3 6 1 2 1 3 8 5 0
Minneapolis/St. Paul, Minnesota 4 2 1 0 2 5 0 0 1 0 0
New York, NY 36 49 10 19 21 19 15 23 17 24 12
Philadelphia, Pennsylvania 39 49 27 62 37 38 38 38 37 32 18
Phoenix, Arizona 12 11 11 15 10 22 15 12 14 10 10
Pittsburgh, Pennsylvania 19 21 9 13 19 25 11 21 39 23 4
St. Louis, Missouri 23 24 14 9 32 36 20 15 23 29 14
San Diego, California 96 67 66 59 46 48 31 14 33 16 14
San Francisco, California 0 0 0 0 0 2 0 0 0 0 0
Seattle, Washington 9 4 3 0 3 0 6 1 3 1 1
Washington, DC 25 48 14 52 22 32 18 30 47 39 11
Source: U.S. Environmental Protection Agency, Office of Air Quality Planning & Standards.
3
Quality in Design
We begin this chapter with the description of the “product creation cycle” to show
where the topics of discussion in this chapter fit within the activities that take place
in various phases of the product creation cycle. Then we present the quality-related
activities in the design phase, also known as the quality planning phase, of the cycle.
Specifically, we will discuss customer surveys, quality function deployment (QFD),
principles of reliability, design of experiments for product/process parameter selec-
tion, failure mode and effects analysis (FMEA), and principles of choosing tolerances,
which are all part of the effort in designing quality into a product.
The scheme of activities that are performed from the time a product is conceived
to the time the product is made and delivered to the customer is called the “product
creation cycle.” One such scheme, adapted from the “quality planning timing chart”
described in the Reference Manual of Advanced Product Quality Planning and Control
Plan, published jointly by the Big Three Automakers (Chrysler LLC, Ford Motor
Company, and General Motors Corporation 2008a), is presented in Figure 3.1. The
modern approach to making quality products requires that quality issues be addressed
throughout this product creation cycle.
The product creation cycle in Figure 3.1 shows the activities divided into the fol-
lowing six stages:
1. Product planning
2. Product design and development
3. Process design and development
4. Product and process validation
5. Production
6. Feedback, assessment, and corrective action
The quality activities performed in the stages of product planning, product design,
process design, and product and process validation are collectively called “quality plan-
ning activities,” and those performed during the production stage are called “quality
control activities.” The activities relating to quality performed during the planning
stages, along with their objectives and the tools employed, are covered in this chapter.
Figure 3.1 The product creation cycle. (Reprinted from Chrysler LLC, Ford Motor Company, and General Motors
Corporation. 2008a. Advanced Product Quality Planning (APQP) and Control Plan—Reference Manual, 2nd edition.
Southfield, MI: A.I.A.G. With permission.)
The major tools employed during the quality planning stage are:
Customer surveys—used to find the needs of the customers.
Quality function deployment—used for selecting product features that would
respond to customer needs.
Failure mode and effects analysis—used to proof the product and process designs
against possible failures.
Basic principles of reliability—needed to define, specify, measure, and achieve reli-
ability in products.
Design of experiments—used to select product characteristics (or process param-
eters) to obtain desired product (or process) performance.
Tolerancing—used to determine the economic limits of variability for product
characteristics and process parameters.
3.2 Product Planning
This is the first stage of planning for a product when the major features for the product
are determined. If the product is a car, features, such as horsepower, body style, trans-
mission type, safety standards, fuel consumption, and so on, are determined at this
stage. If the product is a lawnmower, such major features as engine horsepower, deck
size, and whether it will be self-propelled or self-starting will be determined. Quality
and reliability goals for the product are also established at this stage.
The quality goals are chosen based on several aspects of quality, such as perfor-
mance, safety, comfort, appearance, and so on, to meet the needs of the customer after
ascertaining what their needs and expectations are in these areas. Reliability goals
are set in terms of length of failure-free operation of the product based on customer
preferences, cost constraints, and prevailing competition.
The quality and reliability goals selected at this stage drive the specifics of design
activities during the next stage, in which product design details are worked out and
process design is undertaken to proceed concurrently with product design. Detailed
drawings for parts and subassemblies are prepared along with bills of materials. A pre-
liminary process flow chart is prepared, including the choice of materials and machin-
ery for making the product. Preparation of the flow chart, among other things, will
help to determine if existing technology will be adequate or if new technology is
needed. The bills of materials will help in deciding what parts will be produced and
what parts will be procured.
Quality planning activities are generally performed by a cross-functional team
called the “quality planning team.” This team is comprised of representatives from
various functional areas, such as marketing, product engineering, process engineer-
ing, material control, purchasing, production, quality, and customer service. Supplier
and customer representatives are included when appropriate. Quality planning begins
with finding the needs of the customer.
Finding customer needs is often referred to as “listening to the voice of the customer.”
The customers’ voice can be heard in several ways:
1. Surveying of past and potential customers.
2. Listening to focus groups of customers, such as chain stores, fleet operators, or
seniors.
3. Collecting information from the history of complaints and warranty services.
4. Learning from the experiences of cross-functional team members.
Deciding on which approach should be used in a given situation depends on the
nature of the product, the amount of historical information already available, and
the type of customer being served. For example, industrial customers may have to
be approached differently from the general public, and customers for cars must be
approached differently from customers for toys. The customer survey is the most com-
monly used method and is often combined with other methods, such as interviewing
a focus group. Thus, the customer survey is an important tool in assessing the needs
of the customer.
3.2.1.1 Customer Survey A typical customer survey attempts to establish the custom-
ers’ needs and the level of importance that customers attach to the different needs.
Many customers may also be lead users for a product or service category, and often
express latent, hitherto unarticulated needs. Additionally, lead users typically also
invent their own solutions to a problem with a product or service, and may be able to
provide valuable insight to the quality team. When appropriate, information is also
elicited on how much the customers favor a competitor’s product and why. The survey
could be conducted by phone interview, personal interview, or mail; each method has
its advantages and disadvantages. For example, direct contact of customers, either by
phone or in person, could generate information that the design team might not have
even thought of asking (Gryna 1999). Mailed surveys will mostly produce answers
to prepared questions and may be cheaper to administer. In either case, the quality
planning team should have a prepared survey instrument or questionnaire. Often, the
survey tools used for measuring the customers’ satisfaction with an existing product
can be used for projecting what the customers would want in a new design or a new
product. A brief discussion on what makes a good survey instrument and how to
design one is attempted here.
Designing a survey starts with identifying the attributes the customers might look
for in the product. The customers are asked to express the level of their desire for the
chosen attributes, usually on a scale of one to five. Tables 3.1 and 3.2 show examples
of customer surveys; one is for a tangible product and the other is for an intangible
service.
Designing a customer survey is a science in itself. There are good references (Hayes
1998; Churchill 1999) that provide guidance on preparing a survey instrument. The
following discussion covers some of the important fundamentals of creating a cus-
tomer survey.
The list of attributes on which the customer’s rating is requested has to be drawn
by people with a good understanding of the product being planned and the customer
being served. The items in this list must be relevant, concise, and unambiguous. Each
item should contain only one thought and be easy to understand. For new products or
customers, the list should have room for addition of items by the customer. Usually,
a five-point (Likert-type) scale is chosen for the customer to express the strength of
their requirement of a particular item; the Likert-type format is known to give more
reliable results than a true-false response (Hayes 1998). The questionnaire should have
a brief introduction, as shown in Tables 3.1 and 3.2, to let the customer know the
objective of the questionnaire and how the questionnaire is to be completed.
The survey instrument must be reliable. A survey is reliable when the survey results
truly reflect the preferences of the customer as to their choice of attributes and how
much they desire those attributes.
One way to measure this reliability is to give the survey to the same set of sam-
ple customers twice, with an intervening delay, and evaluate the correlation between
the two sets of responses. A high correlation between the two sets of responses will
indicate high reliability of the questionnaire. The two sets of data presented in the
Table 3.1 Customer Requirement Survey for a New Book in Quality Engineering
A NEW BOOK IS PLANNED IN QUALITY ENGINEERING TO BE PUBLISHED BY AN INTERNATIONAL PUBLISHER. THE BOOK WILL
PRESENT THE STATISTICAL AND MANAGERIAL TOOLS OF QUALITY IN AN INTEGRATIVE MANNER. WE WANT TO FIND OUT FROM
YOU (A POSSIBLE CUSTOMER) THE TOPICS THAT YOU WOULD LIKE COVERED IN THE BOOK AND THEIR LEVEL OF
IMPORTANCE TO YOU. SOME OF THE TOPICS HAVE BEEN ANTICIPATED AND LISTED BELOW, BUT THERE IS ROOM FOR
ADDITIONS. PLEASE RATE THE TOPICS ON A SCALE OF 1 (NOT NEEDED) TO 5 (HIGHLY DESIRABLE) TO EXPRESS YOUR
PRIORITY. PLEASE ALSO RATE A BOOK THAT YOU MAY BE CURRENTLY USING IN TERMS OF HOW THE REQUIREMENTS ARE
MET BY THAT BOOK. PLEASE IDENTIFY (IF YOU WOULD) THE BOOK YOU ARE CURRENTLY USING.
INFORMATION ABOUT YOURSELF (MARK THE MOST SUITABLE ONE): (1) I AM A PROFESSOR TEACHING QUALITY SUBJECTS
_____ (2) I AM A PROFESSIONAL ENGAGED IN QUALITY WORK _____(3) I AM A STUDENT INTERESTED IN QUALITY TOPICS.
STATISTICAL METHODS
1. Rigor in mathematics 1 2 3 4 5 1 2 3 4 5
2. Fundamentals of prob. & stat. 1 2 3 4 5 1 2 3 4 5
3. Statistical process control methods 1 2 3 4 5 1 2 3 4 5
4. Acceptance sampling methods 1 2 3 4 5 1 2 3 4 5
5. Design of experiments 1 2 3 4 5 1 2 3 4 5
6. Regression analysis 1 2 3 4 5 1 2 3 4 5
7. Reliability principles 1 2 3 4 5 1 2 3 4 5
_____________________ 1 2 3 4 5 1 2 3 4 5
MANAGEMENT TOPICS
1. Supply chain management 1 2 3 4 5 1 2 3 4 5
2. Team approach 1 2 3 4 5 1 2 3 4 5
3. Strategic planning 1 2 3 4 5 1 2 3 4 5
4. Principles of management 1 2 3 4 5 1 2 3 4 5
5. Organizational theory 1 2 3 4 5 1 2 3 4 5
6. QFD 1 2 3 4 5 1 2 3 4 5
_____________________ 1 2 3 4 5 1 2 3 4 5
QUALITY MANAGEMENT SYSTEMS
1. ISO (QS) 9000 standards 1 2 3 4 5 1 2 3 4 5
2. Baldrige Award criteria 1 2 3 4 5 1 2 3 4 5
3. Six Sigma system 1 2 3 4 5 1 2 3 4 5
_____________________ 1 2 3 4 5 1 2 3 4 5
QUALITY IMPROVEMENT TOOLS
1. PDCA/Breakthrough methods 1 2 3 4 5 1 2 3 4 5
2. Magnificent seven 1 2 3 4 5 1 2 3 4 5
3. Benchmarking 1 2 3 4 5 1 2 3 4 5
4. Use of computer software 1 2 3 4 5 1 2 3 4 5
5. FMEA 1 2 3 4 5 1 2 3 4 5
_____________________ 1 2 3 4 5 1 2 3 4 5
Table 3.2 Customer Satisfaction Survey for a Bank (SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree)
SD D N A SA
1. I waited a short period of time before getting help. 1 2 3 4 5
2. The teller completed my transactions in a short time. 1 2 3 4 5
3. The financial consultant was available to schedule me at a good time. 1 2 3 4 5
4. The teller greeted me in a pleasant way. 1 2 3 4 5
5. The teller listened carefully to me when I was requesting a transaction. 1 2 3 4 5
6. The bank lobby was quiet and comfortable. 1 2 3 4 5
7. The teller did not pay attention to what I told him/her. 1 2 3 4 5
8. The waiting line was too long when I arrived. 1 2 3 4 5
9. The teller took his/her own time to complete my transaction. 1 2 3 4 5
10. The financial consultant was not available at a time convenient to me. 1 2 3 4 5
11. The teller was very personable. 1 2 3 4 5
12. The lobby was noisy and cold. 1 2 3 4 5
13. Additional comments.
following two tables show the difference between survey results that are well corre-
lated and those that are not well correlated. The correlation coefficient, a measure of
relationship between variables, which can be obtained using statistical software, will
help in determining the strength of correlation.
(Recall from Chapter 2: the correlation coefficient lies between −1 and +1 and a
value close to −1 or +1 indicates the two variables are highly correlated, and a value
close to 0 indicates that they are not. A P-value less than 0.05 indicates the correlation
is significant, otherwise not.)
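The test-retest check described above can be carried out with statistical software. The sketch below is a hypothetical Python illustration (the scores are invented, not survey data from the text), using SciPy's Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings (1-to-5 scale) from the same ten customers on two
# administrations of the same survey item
first  = np.array([5, 4, 4, 3, 5, 2, 4, 3, 5, 4])
second = np.array([5, 4, 5, 3, 4, 2, 4, 3, 5, 5])

r, p = pearsonr(first, second)
print(f"r = {r:.2f}, P-value = {p:.4f}")
# An r near +1 with a P-value below 0.05 would indicate that the two
# administrations agree, i.e., the instrument appears reliable.
```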
Another way of evaluating this reliability is to include in the questionnaire two
questions for each attribute, worded differently. The correlation between the responses
for the same attribute is then evaluated. A high correlation would indicate high reli-
ability of the questionnaire. The example questionnaire from the bank in Table 3.2
contains questions of this kind. For example, items 1 and 8 in this questionnaire are
about the same attribute but worded differently.
A survey result becomes more reliable with improvements in the clarity and rel-
evance of the questions. It also becomes more reliable with an increase in the number
of questions (Hayes 1998), which, of course, has to be balanced with relevance so that
the whole survey instrument remains concise.
Once the survey instrument is prepared, the plan for administering the survey
should be made. Because of the cost and time constraints, surveying the entire popu-
lation is not possible except in small populations, and statistical sampling techniques
are required. Two commonly used sampling techniques are:
1. Simple random sampling; and
2. Stratified random sampling.
In simple random sampling, the sample is chosen from the entire population such
that each customer in the population has an equal chance of being included in the
sample. In stratified random sampling, the population is first divided into several
strata, based on some rational criteria—such as sex, age, income, or education—and
simple random sampling is done within each stratum. Stratified sampling provides
better precision (i.e., less variability) in the estimates for a given sample size compared
to the simple random sampling. It also provides estimates of customer preferences for
each of the strata, which may provide additional useful information for some product
designers.
The size of the sample is determined by the confidence level needed so that the
error in estimates does not exceed a chosen value. If (1 − α)100% level of confidence
is required that the error in the results is not more than ± e, then the sample size n is
given by the formula:
n = (zα/2 s / e)²
where zα/2 is the value that cuts off α/2 probability on the upper tail in the stan-
dard normal distribution and s is the estimate for standard deviation of the preference
scores selected by the customer. The s can be borrowed from similar surveys in the
past or can be estimated from a pilot survey done using the same questionnaire in the
same population. It is the standard deviation calculated out of all scores received for
all questions from all responses.
Example 3.1
Suppose we want to determine the sample size for a survey that requires the expres-
sion of customer preference on the one-to-five scale. Suppose also that the confi-
dence level needed is 95% that the error in the estimates does not exceed ±0.1, and
the estimate for standard deviation of the scores from previous similar surveys is 0.5.
Find the sample size needed.
Solution
Because (1 − α) = 0.95, α = 0.05, and Z α/2 = Z0.025 = 1.96 (from normal tables),
n = (z0.025 × s / e)² = (1.96 × 0.5 / 0.1)² ≅ 96
This means that at least 96 customers must be surveyed in order to have 95%
confidence that the error in the estimate does not exceed ±0.1.
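The sample-size formula is simple enough to compute by hand, but a small function keeps the rounding explicit. The sketch below, in Python with SciPy, repeats the calculation of Example 3.1 (rounding up gives 97; the text rounds 96.04 down to 96):

```python
from math import ceil
from scipy.stats import norm

def survey_sample_size(conf, e, s):
    """Sample size so the estimate is within +/- e at the stated confidence level."""
    z = norm.ppf(1 - (1 - conf) / 2)     # z_{alpha/2}
    return ceil((z * s / e) ** 2)        # round up to a whole number of customers

print(survey_sample_size(conf=0.95, e=0.1, s=0.5))   # 97 (96.04 before rounding up)
```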
The above discussion on the basics of conducting a customer survey was included
to make the reader aware of the important issues involved in drawing up a customer
survey. This may even be adequate to make some simple surveys. For more complete
information on sample surveys, the reader is referred to the references cited earlier.
The next important quality-related function in product planning is translating the
customers’ voice into design parameters or design features of the product. A formal
tool used in this translation is called the “quality function deployment.”
Quality function deployment is a method used to closely tie the design features of a
product with the expressed preferences and needs of the customers. This method,
which originated in Japan, also helps in prioritizing those features and choosing
the most important ones for special attention further down the design process. The
major component of the QFD method is a matrix created with the customers’ prefer-
ences in the rows and the design features selected to meet those preferences in the
columns (see Figure 3.2). The intersecting cells between the columns and rows show
the strength of the relationships between the design feature and the customer prefer-
ence. In other words, the cell at the intersection of a column and a row shows how
well the chosen feature of the product will meet the corresponding customer prefer-
ence in the row that it has been chosen to satisfy. These relationships are determined
by the collective judgment of the product planning team and are recorded in the
matrix using notations that express the strength of those relationships.
Figure 3.2 House of quality for the planned book, showing the customer groups (students, professionals, publisher) and their requirements (rational chapters, good examples/exercises, good references, open-ended projects, reader friendly, enhances knowledge, good computer use, not expensive, usable tools) with importance weights; the design features (statistics fundamentals, statistical methods, management topics, formula derivations, number of chapters/pages, number of illustrations per chapter, mini-projects, latest research); the relationship strengths (very strong = 5, strong = 3, weak = 1, negative impact); and ratings of Competitors A through D.
Some design
features may satisfy more than one customer preference, while other design features
may involve significant design tradeoffs—they may work for a customer preference
and against another customer preference. Identifying the strength of the relationships
is an important step in prioritizing the design features in terms of their importance
in satisfying customer need. Remember, the major objective of the QFD method is
to make sure there is a design feature to satisfy each customer preference and then to
select the most important of these features so that they are adequately addressed in
the design process.
Three different notations are generally used to indicate the strength of relationship
between a customer preference and a design feature: “very strong,” “strong,” and “weak.”
A very strong relationship for example means that the feature in question almost fully
satisfies the corresponding customer preference. A dark circle is used to indicate this
“very strong” relationship. An empty circle is used to indicate a “strong” relationship
and an empty triangle is used to indicate a “weak” relationship (see Figure 3.2).
The process of prioritizing the design features based on their contribution to satis-
fying customer preferences is done using some simple arithmetic calculations. By this
process, a design feature that has strong relationships with several customer prefer-
ences will come out ahead of a feature that has only weak relationships with a few
customer preferences.
First, numerical values are assigned to the customer preferences to signify how
strongly the customer prefers one requirement over the other. These preference num-
bers are obtained from those expressed by the customers in the customer survey. They
are indicated in the column immediately next to the customer requirements in Figure
3.2. These numbers are multiplied by the numerical equivalent of the notations entered
in the cells that express the strength of relationships between customer preferences and
design features. The total of these resulting products for any design feature represents
how important that feature is in terms of satisfying the requirements of the customer.
These totals, obtained column-wise, are entered at the bottom of each column, thus
assigning a score for each design feature. Those with the highest scores are the most
important features from the customer’s point of view. These features are further stud-
ied with regard to the advantage they may provide over one or more competitors. The
design features that are important from the view of satisfying customer preferences,
and those that provide certain advantages over competitors, are identified and given
special treatment in the new design. This procedure is explained in the example below.
The matrix of customer preference versus design feature is topped by a triangular
matrix, which shows how the design features are related among one another. Knowledge
of these relationships helps in alerting the designer to changes that may occur in other
design features while making changes to one of them. The QFD method also provides
for studying how a competitor’s product fares against the expressed preferences of the
customers. This provision—which helps in evaluating competitors and, thus, enables
a comparison of the features of the new product with those of the competitors—is
known as “benchmarking.” This enables the identification of the strengths and weak-
nesses of the competitor’s product. It also helps in building upon their strengths or
winning a competitive edge by satisfying a need the competitors have not addressed,
as shown in the example below. Different users of the QFD method also add many
other details to the main function in order to suit their individual products.
The matrices, when put together, look so much like the picture of a house with a
roof, windows, and doors, that the assembly of the matrices is called the “house of
quality” (HOQ ). The procedure of using the QFD method and creating a HOQ is
explained in Example 3.2.
Example 3.2
Solution
The first task for a design team is to identify who the customers are. In this case, the
authors are the “planning team,” and anyone who will be impacted by the design
is a customer. Thus, the professors, students, professionals, and the publisher are
the customers, and they all have their own preferences. A list of these preferences
is first generated based on the experience of the authors and enquiry among their
colleagues and students. The customers and their needs are shown on the left-hand
side of the HOQ (Figure 3.2). Notice that the items in this list are in the language
of the customers. This column represents the voice of the customer (VOC).
the measure of the design feature will provide increased satisfaction of the particular
customer preference. Some design features may help in meeting a preference of one
customer but may work against a preference of another customer. Where a design
feature affects a customer requirement inversely, such an effect is indicated by a down-
arrow (↓) next to the strength notation.
3.2.2.2 Prioritizing Design Features For each design feature, the product of the
numerical equivalent of the strength relationship and the importance-weight of the
corresponding customer preference is obtained and added column-wise, and the total
is placed under each column in the row at the bottom of the relationship matrix. (The
relationships with down-arrows also make a positive contribution in this step when
we are determining the importance of a design feature. Their negative significance is
taken into account later, when we determine target values for the design features of
the new design.) The numbers in this row at the bottom of the central matrix represent
the importance of the design features in meeting the customer requirements. For this
example, the following numerical equivalents for the strength relationships have been
used: very strong = 5, strong = 3, and weak = 1. This is the usual scale employed, but
other scales, such as (9, 3, 1) instead of (5, 3, 1), are also sometimes used.
The numbers obtained for each of the design features are then “normalized” using
the formula yj = 100 (xj/Σxj), where yj is the normalized score and xj is the raw score
for the j-th design feature. These normalized scores represent the relative importance
of a given design feature among all design features. These relative importance scores
(called the “normalized contributions”) are used to prioritize the design features for
further deployment. Usually, three or four features with top-ranking normalized
scores will be chosen as the most important design features. For the example, the top-
ranking features are identified with a (#) mark below the normalized scores, with the
top-most feature being identified with a (##) mark.
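The column-wise weighting and normalization can be done in a spreadsheet or with a few lines of code. The sketch below is a small, hypothetical Python illustration of the arithmetic (the matrix and weights are invented, not those of Figure 3.2):

```python
import numpy as np

# Relationship matrix: rows = customer requirements, columns = design features,
# entries on the 5/3/1 scale (0 = no relationship)
relationship = np.array([
    [5, 3, 0],
    [3, 5, 1],
    [0, 1, 5],
])
importance = np.array([3, 2, 1])          # importance-weights of the requirements

raw = importance @ relationship           # column-wise weighted totals x_j
normalized = 100 * raw / raw.sum()        # normalized contributions y_j = 100 x_j / sum(x_j)

print(raw, normalized.round(1))
# The features with the largest normalized scores are carried forward for deployment.
```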
3.2.2.3 Choosing a Competitor as Benchmark Also shown on the right-hand side of the
HOQ is the assessment on how well competitor products fare with respect to the
established customer preferences. For this example, four books—A, B, C, and D—
are identified as competitors for the new design. These books are evaluated by the
planning team and assigned numbers on a scale of 1 to 5 to represent their ability to
meet the established customer preferences. The products of these numbers and the
importance-weights of customer requirements are added and the total is shown at
the bottom of the column for each competitor. These numbers represent how well a
competitor book satisfies the customer preferences. The competitor with the largest of
these numbers is the best in class, and it is chosen as the benchmark. For this example,
the benchmark is Competitor A.
3.2.2.4 Targets For the design features that have been prioritized as the most
important, targets are selected for the new product based on a comparison with
the benchmark. The benchmark is first evaluated by the planning team, and scored
on a scale of 1 to 10 to reflect how well it has handled the important design fea-
tures. These scores are shown in a row below the row containing the normalized
contributions of design features. These numerical scores for the benchmark provide
the basis for selecting the target for the new product. The new product will then
have the important design features at targets chosen based on a comparison with the
benchmark.
For this book example, the design features “statistics fundamentals,” “statistical
methods,” “management topics,” and “formula derivations” are identified as the most
important. The targets for these features in the new book will be chosen by taking
into account the numerical scores the benchmark secured for these features. We notice from the scores assigned for the different design features that this benchmark, the best-in-class competitor, is weak in one important design feature, "management topics."
This should be a signal to the design team that they can exploit this weakness in the
competitor. The “management topic” was identified as an important feature based on
the expressed preferences by the customer, but the competitor does not satisfy this
requirement well. This offers an opportunity to the design team to take advantage of
the weakness in the competitor’s design and make it a strong point in the design of
the new product.
The targets for each design feature of the new book are chosen by keeping in mind
how the design features satisfy the customer requirements as well as how these fea-
tures interact among themselves. These target values are shown in the row below
the row displaying the feature scores of the benchmark, Competitor A. These target
numbers are relative numbers, related to the scores of the corresponding features of
Competitor A. For example, the number of statistical methods covered in the new
book will be about seven-tenths of those covered in Competitor A. The target for the
statistical methods is made smaller than the competitor’s in order to balance out the
increase in the target for the number of management topics so that the total size of
the new book will still be comparable to that of the benchmark.
The above example illustrates the important principles involved in using the QFD
methodology for identifying the needs of the customer and designing a product to sat-
isfy those needs while being competitive in the market. A simple example of designing
a book was used to describe the QFD methodology. The reader can imagine the level
of details needed for a product like a refrigerator or a car. Several good books (e.g.,
Cohen 1995; Akao 1990) are available for further study.
At the planning stage, the selection of major design features of a product includes
the selection of quality and reliability goals. The customers are asked for their quality
and reliability preferences for the particular grade level of the product. Their needs
expressed in this regard through past complaints are also gathered. Suitable design
features are then incorporated to respond to these needs.
An important requirement expressed by customers for many products is reliability, which in the language of the customer is the ability of the product to run without failure for a long time.
3.2.3 Reliability Fundamentals
“Reliability” refers to the ability of a product to perform without failure over a given
period of time. Thus, reliability is a function of time. It is related to the length of time
a product performs before a failure occurs. This length of time to failure is referred to
as “life.” This life is a random variable in the sense that in a given population, although
the units may all have been built by the same process, the life of one unit for the first
failure, for example, will be different from another’s. Even for the same unit, the time
to first failure will be different from that of the second failure, and so on. Those life
values can be viewed as values of a random variable usually denoted as T.
In the above definition of “life” of a product, the term “failure” is an important
component. In fact, the life is defined with respect to a particular failure under con-
sideration. The life for a product may be one value for a cosmetic failure, another for a
minor failure and yet another value for a major failure. So, we should first define the
failure that is of interest and then define the reliability with respect to that failure.
The variability in the life-variable can be described by a frequency distribution,
and this frequency distribution can be obtained from data collected on lives of sample
units if the product already exists. The frequency distribution of a future product can
be projected based on similar past models. An example of a frequency distribution is
shown in Figure 3.3. This frequency distribution can also be represented by a math-
ematical function, which we call the probability density function (pdf ) of the random
variable T. (We will assume that T, which generally represents life in hours, days, or
months, is a continuous variable.)
Figure 3.3 Probability density function f(t) of the life variable T: (a) the reliability R(t) shown as the area under f(t) beyond time t; (b) the area f(t)Δt representing failure in a small interval Δt.
The frequency distribution of the life variable is the
basic information necessary to assess the reliability of a product. The cumulative dis-
tribution function of the life variable is called its “life distribution.” The reliability of the product at a given time t, denoted by R(t), is defined as the probability that the product survives beyond t:
R(t) = P(T > t)
This probability can be seen in Figure 3.3a where the curve represents the probabil-
ity distribution of the life of the product. This probability is the area under the curve beyond t. It can be seen that this probability also represents the proportion of the population that survives beyond time t. If f(t) is the pdf of T, the functional form of the curve, then
R(t) = ∫_t^∞ f(x) dx
Also, it can be seen that R(t) = 1 − F(t), where F(t) is the cumulative distribution
function (CDF) of T.
Example 3.3
If T represents the life of certain brake cylinders in hours, and its pdf is given by f(t) = 0.001e^(−0.001t), t ≥ 0, what is the proportion of cylinders that will fail by 1000 hrs.? What is the reliability of the cylinders at 1000 hrs.? At 2000 hrs.?
Solution:
T: Life of cylinders in hrs.
f(t) = 0.001e^(−0.001t), t ≥ 0
F(1000) = P(T ≤ 1000) = ∫_0^1000 0.001e^(−0.001t) dt = 1 − e^(−0.001×1000) = 1 − e^(−1) = 0.632
That is, 63.2% of the cylinders will fail by 1000 hrs. The reliability at 1000 hrs. is R(1000) = e^(−1) = 0.368, and at 2000 hrs. it is R(2000) = e^(−2) = 0.135.
The failure rate, or hazard rate, at age t measures the propensity of the surviving units to fail shortly after t. Of the units that survive beyond time t, the proportion that fail in the next small interval of length Δt is approximately
f(t) × Δt / R(t)
Expressed as a rate per unit of time, this proportion becomes
f(t) × Δt / (R(t) × Δt)
We want the rate at the instant immediately following time t—that is, we are look-
ing to find the rate when Δt → 0. That is, we want:
lim (Δt → 0) [f(t) × Δt] / [R(t) × Δt] = f(t) / R(t)
Therefore,
h(t) = f(t) / R(t)
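To make these formulas concrete, here is a minimal Python sketch (our own illustration; the function names are not from the text) that evaluates f(t), R(t), and h(t) for the brake-cylinder life of Example 3.3, where λ = 0.001 per hour.

import math

LAMBDA = 0.001   # failure rate per hour for the brake cylinders of Example 3.3

def pdf(t):
    """f(t) = lambda * exp(-lambda * t), the exponential pdf of the life T."""
    return LAMBDA * math.exp(-LAMBDA * t)

def reliability(t):
    """R(t) = P(T > t), the proportion of the population surviving beyond t."""
    return math.exp(-LAMBDA * t)

def hazard(t):
    """h(t) = f(t) / R(t); constant and equal to lambda for the exponential model."""
    return pdf(t) / reliability(t)

print(1 - reliability(1000))   # proportion failing by 1000 hrs, ~0.632
print(reliability(1000))       # reliability at 1000 hrs, ~0.368
print(reliability(2000))       # reliability at 2000 hrs, ~0.135
print(hazard(500))             # hazard rate at any age, = 0.001 per hour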
Example 3.4
Table 3.3 shows the data on the life of 1000 compressors giving the number that
failed in each time interval. Draw the frequency distribution of the failure times,
reliability, and failure rate curves as functions of time.
Solution
The calculations for frequency distribution, reliability, and failure rate are shown
in Table 3.3. In this table, Columns 1 and 2 have the data on the life of compres-
sors, Column 3 gives the frequency distribution of failures in each interval, which
is simply the proportion of the total of 1000 that failed in each interval. Column 4
gives the cumulative proportion failing before the end of each interval. Column 5
gives the reliability of the compressor at the end of each interval, which is equal to
(1 – cumulative proportion failures). This is also the proportion surviving at the end
of the interval. Column 6 gives the failure rate at the instant following each interval.
We may recall that the failure rates in Column 6 represent the proportion of those that survive past the interval that fail in the “instant” (month) immediately following the interval, expressed as a rate per month. To illustrate the computation of failure
rate, the failure rate of the compressors that are 20 months old is calculated as below:
Table 3.3 Life Distribution, Reliability, and Failure Rate Calculations for Compressors
Columns: (1) Interval (month); (2) No. of failures in interval; (3) Frequency of failures; (4) Cumulative frequency of failures; (5) Reliability at end of interval; (6) Failure rate at end of interval
t = 0 0 0.000 0.000 1.000 0.0347
0 < t ≤ 10 347 0.347 0.347 0.653 0.0093
10 < t ≤ 20 61 0.061 0.408 0.592 0.0117
20 < t ≤ 30 69 0.069 0.477 0.523 0.0166
30 < t ≤ 40 87 0.087 0.564 0.436 0.023
40 < t ≤ 50 101 0.101 0.665 0.335 0.031
50 < t ≤ 60 103 0.103 0.768 0.232 0.044
60 < t ≤ 70 101 0.101 0.865 0.135 0.074
70 < t ≤ 80 97 0.097 0.966 0.034 0.100
80 < t ≤ 90 34 0.034 1.000 0.00 −
Total 1000 1.000
Figure 3.4 Example of failure distribution, reliability, and failure rate functions.
Failure rate at 20 months = (0.069/10) / 0.592 = 0.0117 per month. That is, of the compressors that survive past 20 months, 1.17% of them will fail within a month following the 20th month. This
failure rate, as one can see, increases to 3.1% for 50-month-old compressors and
increases further to about 10% for the 80-month-old compressors. Note also that these compressors had a larger failure rate prior to 20 months. When they were new, the failure rate was nearly 3.5%. The failure rate bottomed out at
10 months and started increasing thereafter. The graphs of the frequency distribu-
tion of failure times, reliability, and failure rate for the compressors, calculated in
Table 3.3, are shown in Figure 3.4.
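The columns of Table 3.3 can be generated directly from the raw failure counts. The following Python sketch is our own illustration (the variable names are not from the text); it assumes 10-month intervals and reproduces the cumulative frequency, reliability, and failure-rate columns for the compressor data.

# Number of compressors (out of 1000) failing in each 10-month interval (Table 3.3)
failures = [347, 61, 69, 87, 101, 103, 101, 97, 34]
n_total = sum(failures)     # all 1000 units fail within 90 months
width = 10                  # interval width in months

cum_failed = 0
print("t (months)   cum. freq.   reliability   failure rate/month")
for i in range(len(failures) + 1):
    survivors = n_total - cum_failed
    reliability = survivors / n_total
    if i < len(failures):
        # rate at age t: per-month proportion of the current survivors that fail
        # in the interval that follows (e.g., 0.0117 at t = 20 months)
        rate = (failures[i] / width) / survivors
        rate_txt = f"{rate:.4f}"
    else:
        rate_txt = "-"
    print(f"{i * width:>4}         {cum_failed / n_total:.3f}        {reliability:.3f}         {rate_txt}")
    if i < len(failures):
        cum_failed += failures[i]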
3.2.3.3 The Bathtub Curve Failure rate curves have been used to study the life behav-
ior of many different types of equipment. Figures 3.5a to d show a few different types
of failure rate curves experienced by different types of equipment.
(Figure 3.5: (a) essentially constant failure rate, typical of electronic hardware and mature electromechanical equipment; (b) decreasing failure rate, typical of computer software and structural parts; (c) increasing failure rate, typical of parts subject to degradation such as rubber parts; (d) the bathtub curve, with infant mortality (A), chance failures (B), and wear-out (C) periods.)
There are products whose failure rate increases with time (e.g., rubber or plastic products like belts and hoses), decreases with time (e.g., structural parts or housing components), or remains constant over time (e.g., electronic components or glass parts). The one failure rate
curve that seems to describe the failure rate behavior of a variety of complex equip-
ment is in Figure 3.5d. Because of its shape, it is called the “bathtub curve.” This curve
shows that the failure rate for this type of equipment changes differently at different
periods of the product’s life. Because of its wide applicability, this type of failure rate
curve needs to be studied further.
Based on the nature of change in the failure rate, the life of such equipment can be
divided into three major periods.
(Figure 3.5e: the overlapping distributions of load, with mean µload and standard deviation σload, and of strength, with mean µstrength and standard deviation σstrength.)
A design engineer would want the failure rate that remains “constant” over
this period to be as small as possible. This can be accomplished by minimizing
the chance for accidents. The load-strength analysis would help. The “load” on
a product or part is a random variable as is the “strength” of the product or part.
Each of these random variables can be represented by a probability distribution, the normal distribution being a plausible choice. The sketch in Figure 3.5e
shows the relationship between these random variables when they are repre-
sented by their distributions. Obviously, the mean strength should be larger
than the mean load. Yet, because of the variability in both load and strength,
there is a region where the two distributions intersect, and this region repre-
sents the collection of events where a load could exceed the strength causing
an accident. Increasing the average of the strength or decreasing the average of
load will, of course, minimize this chance for accident. However, if the vari-
ability in these distributions is reduced as shown by the distributions in dotted
lines, the potential for accidents can be reduced without incurring the cost of
increasing the strength. This is another example where reducing variability
reduces waste and improves reliability and quality delivered to the customer.
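As a rough numerical illustration of the load-strength argument (the numbers below are assumed for illustration and are not from the text), the following Python sketch computes P(Load > Strength) for independent normal load and strength, and shows how halving the standard deviations reduces the chance of an overload without increasing the mean strength.

from statistics import NormalDist

def p_load_exceeds_strength(mu_load, sd_load, mu_strength, sd_strength):
    """P(Load > Strength) for independent, normally distributed load and strength.
    Strength - Load ~ N(mu_strength - mu_load, sd_load**2 + sd_strength**2)."""
    diff_mean = mu_strength - mu_load
    diff_sd = (sd_load ** 2 + sd_strength ** 2) ** 0.5
    return NormalDist(diff_mean, diff_sd).cdf(0.0)   # probability the difference is negative

# Illustrative numbers only (same arbitrary units for load and strength)
print(p_load_exceeds_strength(50, 8, 70, 8))   # ~0.04 with the wider distributions
print(p_load_exceeds_strength(50, 4, 70, 4))   # ~0.0002 after halving the variability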
Period C - Wear out
Period C is a time of increasing failure rate because of parts starting to wear
out. Failures occur in this period as a result of fatigue, aging, or embrittle-
ment. Parts such as seals, bushings, gaskets, and hoses, which have increasing failure rates of their own, in turn cause failures of the assembly. This region is known as the “wear-out” period. Some parts wear out sooner than others, and identifying them through testing or from field-failure data helps in finding better replacements. Moreover, knowledge of when
wear out begins helps in planning replacements and overhauls so that wear out
can be delayed and the useful life extended.
The above discussion brings out the importance of studying the failure rate behav-
ior of a part or product so that failure modes can be identified and proper solutions can
be implemented to enhance product reliability.
3.2.3.4 Distribution of Product Life The above discussion of the failure rate behavior of
products and the phenomenon of the bathtub curve reminds us that the distribution
of the life variable could change over the life of a product. At least three distinctly
different periods can be identified for most electromechanical equipment, in which
the life distribution follows different characteristics. So, we would want to know the
distribution pattern of life in the different periods of a product’s life in order to be
able to understand and predict their behavior. The distribution characteristics can be
studied by gathering data on the failure times and then checking which distribution
fits the data best. Some of the candidate distributions are: the exponential, Weibull,
log-normal, and gamma distributions. The properties of these distributions, and how
their fit to a given set of data can be determined, are discussed in books on reliability—
such as Ireson and Coombs (1988), Krishnamoorthi (1992), and Tobias and Trindade
(2012)—which are cited at the end of this chapter. Of all the distributions employed
to model product life, however, the exponential distribution is the most common one,
as many parts and products are known to follow the exponential law during a major
part of their lives. The exponential distribution is to the life variable what the normal
distribution is to other measurable quality characteristics of products. A brief study of
the exponential distribution follows.
3.2.3.5 The Exponential Distribution If T represents a product life, the function form,
or the pdf, of the exponential distribution is given by:
f(t) = λe^(−λt), t ≥ 0
The exponential random variable takes only non-negative values and the distribu-
tion has one parameter, λ. We write T ~ Ex(λ) to indicate that a random variable T
has exponential distribution with parameter λ. The distributions shown in Figure 3.3
do, in fact, represent the shape of an exponential distribution.
If T ~ Ex(λ), then, it can be shown:
CDF: F(t) = P(T ≤ t) = ∫_0^t λe^(−λx) dx = 1 − e^(−λt), t ≥ 0
R(t) = 1 − F(t) = e^(−λt), t ≥ 0
h(t) = f(t)/R(t) = λe^(−λt)/e^(−λt) = λ, a constant, independent of t
The mean of the distribution is:
µ_T = ∫_0^∞ t λe^(−λt) dt = 1/λ
The standard deviation of the exponential distribution is also 1/λ; thus, the mean and the standard deviation of an exponentially distributed random variable are equal. We can also see that a product with
exponential failure times has a constant failure rate equal to the value of the parameter
of the distribution, λ. This failure rate can be estimated from the failure data of sample
units, using the following formula:
λ̂ = Number of failures / Total number of hours of running time
3.2.3.6 Mean Time to Failure The mean of the distribution, or the average life of
all units in the population, is called the “mean time to failure” (MTTF). The term
MTTF is used for products that have only one life (i.e., those that are not repairable).
For products that are repairable, the term “mean time between failures” (MTBF) is
used to denote the average time between repairs. The MTTF (or the MTBF) has a
special significance when the life distribution is exponential. Then, as shown above, it
is equal to the reciprocal of the failure rate λ, the single parameter of the distribution.
This means that knowledge of the MTTF alone provides information about the life of
the entire population. The evaluation and prediction of all measures relating to reli-
ability can be made once the MTTF is known.
It should be pointed out, however, that the MTTF does not have the same signifi-
cance when the life distribution is not exponential. For example, if the Weibull distri-
bution (another popular model for life variables) is appropriate to model the life of a
product, then knowledge of the MTTF is not adequate to define the life distribution.
Instead, the two parameters of the Weibull distribution must be estimated.
Example 3.5
A production machine has been in operation for 2200 days. If it broke down three
times during this period, what is the estimate for the failure rate and the MTBF of
the machine? Assume that the time between failures is exponential.
Solution
λ̂ = 3/2200 = 0.00136 failures/day
MTBF = 1/λ̂ = 2200/3 ≈ 733 days
Example 3.6
Twelve fan belts were put on test, of which four failed at 124, 298, 867, and 1112
hours, respectively. The remaining belts had not failed by 1200 hours, when the test
was stopped. What is the estimate for the failure rate and the MTTF if the failure
time can be assumed to be exponential?
Solution
This is an example of censored data, in which the test was stopped before all the test
units failed. The estimate for λ is calculated taking into account the fact that those
units that did not fail, called “run-outs,” survived until the test was ended. So, the
fact that the eight units that did not fail each ran for 1200 hrs. is counted in calcu-
lating the total hours of running time. Thus,
λ̂ = 4/(124 + 298 + 867 + 1112 + 8 × 1200) = 4/12,001 = 0.00033 failures/hour
MTTF = 1/λ̂ ≈ 3,000 hours
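The estimate of Example 3.6 is easy to compute in a few lines. The sketch below is our own illustration of the formula (the function name is hypothetical): it takes the observed failure times and the run-out times and returns λ̂ and the corresponding MTTF.

def exponential_failure_rate(failure_times, runout_times):
    """lambda-hat = (number of failures) / (total running time), with censored
    (run-out) units contributing their running time but no failure."""
    total_time = sum(failure_times) + sum(runout_times)
    rate = len(failure_times) / total_time
    return rate, 1.0 / rate          # (failure rate per hour, MTTF in hours)

# Example 3.6: four belts failed; eight were still running when the test stopped at 1200 hrs
rate, mttf = exponential_failure_rate([124, 298, 867, 1112], [1200] * 8)
print(rate)    # ~0.00033 failures per hour
print(mttf)    # ~3,000 hours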
Books on reliability discuss a few other variations of this formula for estimating λ under other real-world circumstances, but we limit our discussion here to this basic case. The next example is about predicting reliability measures when the distribution
of life is known as a mathematical model.
Example 3.7
The life of some engine seals is known to have the exponential distribution, with MTTF = 12,000 hours.
a. What is the reliability of the seals at 2,000 hours?
b. What proportion of the seals will fail by 8,000 hours?
Solution
Let T represent the life of the seals, then,
f(t) = λe^(−λt), t ≥ 0
Since MTTF = 12,000 hours, λ = 1/12,000, and
f(t) = (1/12,000) e^(−t/12,000), t ≥ 0
F(t) = ∫_0^t f(x) dx = 1 − e^(−λt) = 1 − e^(−t/12,000), t ≥ 0
R(t) = 1 − F(t) = e^(−t/12,000), t ≥ 0.
a. R(2,000) = e^(−2,000/12,000) = e^(−0.167) = 0.846
b. F(8,000) = 1 − e^(−8,000/12,000) = 1 − e^(−0.667) = 0.487
From the above discussion, we saw that reliability is a function of time. The reliabil-
ity of a product for a given time is quantified as the probability that the product will
survive beyond that given time. Reliability can also be interpreted as the proportion
of the population that will survive beyond the given time. The failure rate expressed
as a function of age represents the susceptibility of a product to failure after a given
age, and it provides another measure of reliability. The reliability of a product can be
evaluated if its life distribution is known. The life distribution can be obtained from
empirical data on the failure times of sample units. It can also be modeled using a
distribution function that is chosen to “fit” the historical failure data of the product.
Probability distributions (such as the exponential, Weibull, log-normal, and gamma)
are commonly used to model life variables.
The most commonly used model for describing life variables, however, is the expo-
nential distribution. It has one parameter, λ, called the “failure rate.” This failure rate
is a constant and is independent of age. If a product life is exponential, then its reli-
ability can be measured using its failure rate or its reciprocal MTTF. The failure rate,
or MTTF, can then be used for setting reliability goals and monitoring reliability
achievements. We have also seen how knowledge about the behavior of the failure
rate over time, expressed as a bathtub curve, can be used to understand—and possibly
enhance—a product’s reliability.
The reliability of current designs can be estimated either from field failure data or
from laboratory tests. From these, the gap between the required and actual reliabil-
ity can be obtained for each component, and from these estimates will emerge a few
critical components that must have their reliability improved in order to attain the
system reliability goals. Reliability improvement can be accomplished by studying the
failure rate behavior and failure mechanisms of parts and subassemblies. If failures
occur in early life, process controls should be implemented or improved. Variability
in process parameters must be reduced and poor workmanship must be avoided. If
the failures result from wear out, it may be because seals, belts, and hoses have failure
rates that increase with time. Better material, better tolerances, and improved main-
tenance will delay the wear-out failures. If the failure rate is high during the useful
life, load-strength studies (also see Chapter 4 in O’Connor 1985) will help in identify-
ing opportunities to minimize “accidental” failures. Also, designs can be made more
robust; that is, less susceptible to failure due to changes in environment over which
the user has no control. This is done by optimal choice of product parameters and their
tolerances. The issues relating to the choice of product parameters and their tolerances
are discussed next as part of the product design.
From the above discussion, it should be clear that any significant improvement in reliability can come only from design changes, although process control and better inspection can help prevent products susceptible to early failure from reaching the customer.
3.3 Product Design
Product design is done in two stages: first, the overall parameters are chosen; and sec-
ond, the details of the parameters are worked out, engineering drawings and specifica-
tions are created, and prototype testing is done for validating the design. As indicated
in Figure 3.1, process design is undertaken even as the product design progresses.
Trial runs are made for validating both the product and the process design. The major
quality-related activities at this stage are:
1. Parameter design;
2. Tolerance design;
3. Failure mode and effects analysis;
4. Design for manufacturability study; and
5. Design reviews.
The objectives, procedures, and outcomes of each of these activities are explained
below.
3.3.1 Parameter Design
Parameter design in the context of product design refers to selecting the product
parameters, or those critical characteristics of the product that determine its quality
and performance; in other words, its ability to meet the needs of the customer and
provide satisfaction.
At the end of the QFD exercise, the major design features of the product and their
target values would have been decided. For example, if the product is a lawnmower,
the planning team would have chosen the performance target as: mowing an average
yard of about 10,000 sq. ft. in less than one hour. They must then decide the product
parameters, such as blade size, blade angle, engine horsepower, speed of rotation, deck
height, chute angle, and so on, in order to accomplish the target performance. Most
product designers would have initial values for these parameters based on experience
with previous models. The question to be answered is whether these initial values are
good enough to meet the new target or if they need to be changed. Often, the answer
has to be found through experimentation—that is, by trying different values for the
product parameters, measuring corresponding performances, and choosing the set of
parameters that give the desired performance. Thus, in the lawnmower example, an
experiment has to be conducted with the objective of finding the best set of values
for the product parameters that will enable the cutting of a 10,000-sq. ft. yard in one
hour or less.
A vast body of knowledge exists on how to perform experiments efficiently so
that the required information about the product performance, vis-à-vis the product
parameters, is obtained with the minimum amount of experimental work. This branch
of statistics, referred to as the “design of experiments” (D.O.E.), was invented by Sir Ronald Fisher, the English statistician, who in the early 1920s was researching the
selection of the best levels of inputs, such as fertilizer, seed variety, amount of mois-
ture, and so on, to maximize the yield from agricultural fields. The designed experi-
ments were subsequently used profitably in industrial environments to optimize the
selection of product parameters during product design, and process parameters during
process design. Japanese engineer Dr. Genichi Taguchi propagated the philosophy
that experiments must be used for selecting the product and process variables in such a
way that the performance of the product or process will be “robust.” By this, he meant
that the selection of parameters should be such that the performance of the product
will not be affected by various noise or environmental conditions to which the product
or process may be subjected.
The basics of designed experiments are discussed below. The objective here is to
impress upon readers the need for experimentation when choosing product and pro-
cess parameters, and alert them to the availability of different experimental designs to
suit different occasions. Details on how experiments should be conducted, and their
results analyzed, are provided for some popular designs. In the end, it is hoped that
readers will be able to appreciate the value of designed experiments in the context of
product or process design, perform some basic experiments, and analyze the data from
them to make practical conclusions. Readers will also be better prepared to explore
more advanced designs like Taguchi designs when the need for them arises. The dis-
cussion below relates to two simple, but important, designs that are used in industrial experimentation.
3.3.2 Design of Experiments
An experiment is designed to study the effect of some input variables, called “factors,”
on a “response,” which may be the performance of a product or output of a process.
The factors can be set at different “levels,” and the product performance or the process
output could change depending on the levels at which the different factors are held.
The design of the experiment involves choosing the relevant factors, selecting the
appropriate levels for them, and determining the combinations of the factor levels,
called the “treatment-combinations,” at which the trials will be conducted. The design
also determines the number of times the trials will be repeated with each treatment
combination in order to obtain a measure of the random variability present in the
results. In addition, the design will specify the sequence in which the trials should be
run, and is usually accompanied by a procedure for analyzing the data from the trials
and drawing conclusions.
Sometimes, an experimenter is concerned with only one factor and wants to deter-
mine the best level of that factor to achieve the desired level of a response. For exam-
ple, a process engineer may be interested to know the temperature to which a steel
casting should be heated to obtain the best results on stress relieving. In such a case,
an experiment would be conducted by running trials at different levels of that one fac-
tor. Such an experiment is known as a “one-factor experiment.” More often, though,
we will be dealing with situations where several factors are influencing a response,
which is a quality characteristic of a product or the output of a process, and we have to
find out how the different factors, individually and jointly, affect the response. Then,
we would need the tools to perform multifactor experiments.
There are many multifactor experimental designs to suit the varying situations in
which experiments have to be run. We will discuss below one type of design called the
“2ᵏ factorial” design, in which k factors, each with two levels, are studied to learn their effect on a response. These designs are very useful in the selection of product and process parameters and are considered to be “workhorse” designs in industrial experimentation. We will discuss the 2² and 2³ designs in this chapter, which, being simple designs, are useful in explaining the concepts and terminology of experimental design. The more general 2ᵏ design is discussed in Chapter 5.
Example 3.8
Consider the case of the lawnmower mentioned earlier. The response is the time
needed to cut a yard of 10,000 sq. ft. The possible factors would be blade diameter,
blade angle, rotation speed, engine horsepower, deck height, chute size, chute loca-
tion, along with a few others. Suppose that for a new design of the lawnmower,
only changes to two factors—the blade angle and the deck height—are being con-
sidered for achieving the desired performance. The two levels for the factor “blade
angle” are 12° and 16°, and the two levels for the factor “deck height” are 5 in. and
7 in. The best combination of blade angle and deck height for achieving the desired
performance—minimal time to cut the average yard—is to be determined.
Solution
All the possible treatment combinations for the factorial experiment are represented graphically in Figure 3.6. When two factors each have two levels, and each level of one factor is combined with each level of the other factor, there are four possible treatment combinations. This experimental set-up, or design, is called a 2² factorial design.
The 2² factorial design shown in Figure 3.6 can also be represented in a table, as
shown in Table 3.4. In this table, the treatment-combinations that are numbered as 1,
2, 3, and 4 in the graph are listed in that order in the leftmost column. And, we use
the (−) sign to denote the lower level of a factor and the (+) sign to denote the higher
level of the factor. If we read across a row, for a given treatment-combination, we can
read the level at which each factor is kept. When the signs are filled in under each of the factor columns, the design of the experiment is complete.
(Figure 3.6: the 2² design square, with treatment combinations 1, 2, 3, and 4 at its corners; Factor A, blade angle: low (−) 12°, high (+) 16°; Factor B, deck height: low (−) 5 in., high (+) 7 in.)
The trials are replicated so that a measure of the experimental noise is obtained; the averages of the replicated observations at the treatment-combinations will contain less variability from such noise, compared to the single readings from individual trials.
3.3.2.3 Experimental Results from a 2² Design Suppose the eight trials of the above
experiment are run in a completely randomized manner and the results from the tri-
als are as shown in Table 3.4. The results are presented in the graph in Figure 3.7,
with the respective corners of the square representing the treatment-combinations. It
is easy to see that Treatment Combination 2 produces the best result, requiring the least amount of time to cut the given-size yard.
Figure 3.7 Results from a 2² experiment (see Table 3.5 for treatment combination codes): the average responses are (1) = 73, a = 48, b = 60, and ab = 83 minutes at the four corners of the design square.
Looking at the difference between
the two replicates at each treatment combination in Table 3.4, not much variability
is seen between the replicates, indicating that the experimental error, or unexplained
variability, is almost not there. This means that there is not much noise; therefore, the
signal is clear, and it is easy to see the best treatment combination.
The results from many real experiments, however, do not come out this clear. There
may be much difference in the results from replicates of the same treatment-combination,
and the results for the various treatment combinations may not be far enough apart to give a clear-cut choice. The results from the trials of one treatment-combination may even overlap with those from another. When the differences among the results from the treatment-combinations are not obvious, either because the variability within the treatment-combinations is too large or because the differences between the treatment-combinations are not large enough, we would want to know whether there are true differences due to the factors beyond the experimental variability in the observations. In such a case, we would
have to calculate the effect of the individual factors and the effect of interaction among
the factors, and use a statistical technique to determine if these effects are significant.
3.3.2.4 Calculating the Factor Effects The data from the experiment under discussion
are rearranged in Table 3.5 to facilitate the calculation of the effects. This table has a
new column, with the heading “interaction,” added to those in Table 3.4. The original
columns with (−) and (+) signs are named “design columns,” and the new column is
called the “calculation column.” The treatment-combinations are identified by new
codes: “(1)” for the treatment-combination in which both factors are at low level, “a”
for the treatment-combination in which Factor A is at high level and Factor B at
low level, “b” for the treatment-combination in which Factor B is at high level and
Factor A at low level, and “ab” for the treatment-combination in which both fac-
tors are at high level. These codes represent the average response from the respec-
tive treatment-combinations and are used in the formulas that are derived below for
calculating effects. The graphic representation of the design in Figure 3.7 includes
these new notations to help readers follow the development of formulas for computing
the effects. (Please note that the codes we use here represent the averages from the
treatment-combinations. Some authors use the codes to represent the totals from the
treatment-combinations, so the formulas given here may look different from theirs.)
3.3.2.5 Main Effects Factors A and B are called the main factors in order to differ-
entiate them from interactions that also arise as outcomes of experiments. The effect
caused by a main factor, called a “main effect,” is calculated by subtracting the average
response at the two treatment-combinations where the factor is at the lower level from
the average response at the treatment-combinations where the factor is at the higher
level. For example, the average of the responses of treatment combinations where
Factor A is at the higher level is (refer to Figure 3.7):
(ab + a)/2
and the average of the responses of treatment combinations where Factor A is at the
lower level is:
[(1) + b]/2
The difference between these two averages gives the effect of Factor A and is
denoted as A. So,
A = (ab + a)/2 − [(1) + b]/2 = [−(1) + a − b + ab]/2
Similarly,
B = (b + ab)/2 − [(1) + a]/2 = [−(1) − a + b + ab]/2
For the example, substituting the responses from Figure 3.7:
A = (−73 + 48 − 60 + 83)/2 = −1
B = (−73 − 48 + 60 + 83)/2 = 11
These effects can be interpreted as follows: if the blade angle (Factor A) is changed
from its lower level of 12° to the higher level of 16°, then the mowing time decreases
by 1 minute, and if the deck height (Factor B) is changed from 5 in. to 7 in., then the
mowing time increases by 11 minutes.
3.3.2.6 Interaction Effects Interaction between two factors exists if the effect of the
two factors acting together is much more, or much less, than the sum of the effects
caused by the individual factors acting alone. It is necessary to detect the existence
of interaction between factors, because when significant interaction exists, the main
effects calculations are rendered suspect. The interaction effect between two factors
also has practical meaning and helps in understanding how the factors work together.
The interaction effect between Factors A and B in the two-factor experiment is cal-
culated as follows: Take the average of the responses from the treatment combinations
where both factors are at the high and both are at the low level; this is the average of
the responses at the two ends of the leading diagonal of the square in Figure 3.7. Then
take the average of the responses from the treatment combinations where one factor is
at the high level and the other is at the low level; this is the average of the responses at
the two ends of the other diagonal in Figure 3.7. Subtract the latter from the former;
the difference is the interaction effect caused by increasing A and B simultaneously,
denoted as the AB interaction.
For the example, to get the AB interaction, find the average of the treatment com-
binations where both factors are at the high and both factors are at the low level:
[(1) + ab]/2
Then find the average of the treatment combinations where one factor is at the high
and one factor is at the low level:
(a + b)/2
AB = [(1) + ab]/2 − (a + b)/2 = [(1) − a − b + ab]/2
For the example, AB = (73 − 48 − 60 + 83)/2 = 24 minutes.
This means that increasing both the blade angle and the deck height together
increases the mowing time much more than the sum of the effects from increasing
them individually. Considerable interaction between Factors A and B exists in this case.
The above formula for the interaction effect will give a value of zero if the average
(or total) of the responses at the end of the leading diagonal is equal to the average
(or total) of the response at the end of the lagging diagonal in Figure 3.7. When this
happens, we say the joint effect is “additive,” which means that the change in the
response from increasing the levels of the two factors simultaneously is just the sum
of the changes resulting from changing the factors individually. Thus, we can see that
there is no interaction when the joint effect is just additive and interaction exists when
the joint effect is not additive.
3.3.2.7 A Shortcut for Calculating Effects The product of the column of treatment com-
bination codes and the column of signs under any factor is called a “contrast” of that
factor. For example, the product of the column of codes and column of signs under
Factor A = [−(1) + a − b + ab] is called Contrast A. The effect of a factor can be obtained
from its contrast using the general rule for calculating an effect:
Effect = Contrast / 2^(k−1)
where k is the number of factors in the experiment.
3.3.2.8 Determining the Significance of Effects Next, we have to determine if the effects
are significant. The analysis of variance (ANOVA) method is normally used to answer
this question. The ANOVA method is explained in Chapter 5; here, we will use a
quick and simple method (adapted from Hogg and Ledolter 1987) for evaluating
the significance of factors using confidence intervals. The method consists of first
obtaining an estimate for the experimental error in individual observations, and then
an estimate for the standard error of the factor effects. (The standard deviation of
an average is called the “standard error,” and because a factor effect is an average, its
standard deviation is called its standard error.) Then, approximate 95% confidence
intervals are established for each of the main and interaction effects using the esti-
mated standard error. If any of the confidence intervals includes zero, that is, zero
lies in between the limits of the C.I., then the corresponding effect is not significant.
If, on the other hand, any confidence interval does not include zero, then the cor-
responding effect is significant.
To explain the method of obtaining an estimate for the experimental error, we lay
out the results of the experiment as shown in Table 3.6. In this table, the term yijk
represents the observation from the k-th replicate of the treatment combination, with
Factor A at the i-th level and Factor B at the j-th level.
We assume there are n observations (from n replicates) in each of the treatment
combinations located in each cell. The quantity ȳ_ij. represents the average from each cell, ȳ_i.. the average of each row, ȳ_.j. the average of each column, and ȳ_... the average of all observations.
Take, for example, the cell (1,1), where both factors are at the lower level, Level 1. The quantity
Σ (y_11k − ȳ_11.)² / (n − 1), with the sum taken over the n replicates k = 1, …, n,
obtained from observations of this cell represents the variability among the n obser-
vations, all made at the same treatment combination represented by the cell (1, 1). If
there is no experimental error, then all the values in this cell would be the same, and
the value of the above quantity would be zero. If its value is not zero, then it represents
an estimate for the error variance from this cell. There are four such estimates for
experimental error in this 2² experiment, and if they are pooled together, the quantity
S² = [1/(4(n − 1))] Σ Σ Σ (y_ijk − ȳ_ij.)²  (the triple sum running over i = 1, 2; j = 1, 2; and k = 1, …, n)
gives a better estimate for the variance of the experimental error. The general formula
for estimating the error variance with k factors, i.e., in a 2ᵏ experiment, is
s² = [1/(2ᵏ(n − 1))] Σ Σ Σ (y_ijk − ȳ_ij.)²
where the triple sum extends over all 2ᵏ treatment combinations and the n replicates within each.
s² = (1/4)[(72 − 73)² + (74 − 73)² + (47 − 48)² + (49 − 48)² + (59 − 60)² + (61 − 60)² + 0 + 0] = 1.5
s.e. = √(1.5/2) = 0.87
An approximate 95% confidence interval for an effect is given by Effect ± 2(s.e.), which here is Effect ± 1.74. For the example, the intervals are: A: −1 ± 1.74, or (−2.74, 0.74); B: 11 ± 1.74, or (9.26, 12.74); AB: 24 ± 1.74, or (22.26, 25.74).
From these confidence intervals, we can see that there is no significant effect due to
Factor A because 0 lies in that interval. The effects of Factor B and the AB interaction
are significant because 0 does not lie in those intervals. The existence of interaction
can also be detected in the graph of the response drawn against factor levels, as shown
in Figure 3.8.
(Figure 3.8: the response plotted against the levels of Factor A, with one line for Factor B at Level 1, falling from 73 to 48, and another for Factor B at Level 2, rising from 60 to 83.)
The graph shows that the response changes in one way (decreases)
between the two levels of Factor A when Factor B is at Level 1 but changes in the
opposite way (increases) when Factor B is at Level 2. If there is no interaction, the
lines will be parallel and will not cross each other.
When such interaction exists, the calculation of main effects becomes meaningless,
as the averaging done to obtain the effect of a main factor hides the fact that the response
behaves differently at the different levels of the other factor. In such situations, we simply
look at the responses at the treatment combinations at which the trials were run and then
choose the treatment combination that produces the best response. In this example, the
combination with Factor A at the higher level and Factor B at the lower level produces
the least time for mowing and, therefore, provides the best combination of the parameters
considered. This method of selecting the best combination of levels from those at which
trials were conducted is referred to as “running with the winner,” meaning that no further
analysis of the data is made to investigate responses at other possible treatment combina-
tions in the factor-level space. The method of exploring the treatment combination space
for the best response using a model built out of the results of the experiment, known as
“response surface methodology,” is explained later in another section of this chapter.
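For readers who prefer to verify the calculations of this section numerically, the following Python sketch is a minimal illustration of the 2² analysis; the replicate values are those implied by the error calculation in the text (72/74, 47/49, 59/61, and 83/83), and the function and variable names are ours. It computes the two main effects, the AB interaction, the pooled error variance, the standard error of an effect, and approximate 95% confidence intervals of the form effect ± 2 standard errors.

from statistics import mean

# Replicated mowing times (minutes) at the four treatment combinations of the 2^2 design
data = {"(1)": [72, 74], "a": [47, 49], "b": [59, 61], "ab": [83, 83]}
avg = {k: mean(v) for k, v in data.items()}

# Effects computed from the treatment-combination averages
A  = (-avg["(1)"] + avg["a"] - avg["b"] + avg["ab"]) / 2
B  = (-avg["(1)"] - avg["a"] + avg["b"] + avg["ab"]) / 2
AB = ( avg["(1)"] - avg["a"] - avg["b"] + avg["ab"]) / 2

# Pooled estimate of the error variance from within-cell variability
n = 2                                   # replicates per cell
s2 = sum(sum((y - avg[k]) ** 2 for y in v) for k, v in data.items()) / (4 * (n - 1))
se = (s2 / 2) ** 0.5                    # standard error of an effect, sqrt(s2 / (2^(k-2) * n))

for name, eff in [("A", A), ("B", B), ("AB", AB)]:
    lo, hi = eff - 2 * se, eff + 2 * se
    verdict = "significant" if lo > 0 or hi < 0 else "not significant"
    print(f"{name}: effect = {eff:5.1f},  95% CI ~ ({lo:.2f}, {hi:.2f})  -> {verdict}")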
3.3.2.9 The 2³ Design The 2³ design will be used when there are three factors affecting a response and each factor is studied at two levels. We will illustrate this design using
another lawnmower example.
Example 3.9
In this example, we are experimenting with another lawnmower model and are
considering one additional factor, chute location, along with blade angle and deck
height. Chute location is a different kind of factor in that it is not measurable, as
the other two factors are. The chute location is called a “qualitative factor,” whereas the
other two are termed “quantitative factors.” There are two possible locations for
the chute: one location is (arbitrarily) identified as the “lower level” and the other
as the “higher level.” The response, again, is the time in minutes required to cut a
yard of 10,000 sq. ft. area. There are eight possible treatment combinations for the 2³ design, and they are shown graphically in Figure 3.9 and in Table 3.7.
(Figure 3.9: the 2³ design cube, with the eight treatment combinations (1), a, b, ab, c, ac, bc, and abc at its corners.)
We will need
to find the main and interaction effects and check if they are significant.
Solution
First the treatment combinations are arranged in the standard order, as in Table 3.7.
The standard order for the 2³ design is created as an extension of the standard order for the 2² design, as shown below:
Standard order for the 2² design:
(1) a b ab
Standard order for the 2³ design:
(1) a b ab c ac bc abc
The order for the 2² design is first written down and is then multiplied by the code for the third factor. The result is appended to the original order of the 2². The above rule can be extended to any number of factors. For example, if we want the standard order for the 2⁴ design, the order would be:
(1) a b ab c ac bc abc d ad bd abd cd acd bcd abcd
The order for the 2³ design is multiplied by the code for the fourth factor, and the result is appended to the order for the 2³ design.
Once the standard order for the 2³ design is determined, the design is obtained by placing the (−) and (+) signs alternately in the column for Factor A, placing (−, −) and (+, +) signs alternately in the column for Factor B, and placing (−, −, −, −) and (+, +, +, +) signs in the column for Factor C. This completes the design of the experi-
ment. Because the columns under Factors A, B, and C are used in creating the design
in this fashion, they are called the “design columns.” The signs under the interaction
columns are obtained as the product of the corresponding individual factor columns.
For example, the column of signs for the AB interaction is the product of the columns
of signs under Factor A and Factor B. The column of signs for the ABC interaction is
the product of columns AB and C. The interaction columns are grouped together and
referred to as “calculation columns” in Table 3.7.
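The design and calculation columns described above can be generated mechanically. The Python sketch below (our own illustration, not code from the text) builds the standard-order sign table for a 2³ design, with the interaction columns obtained as products of the factor signs.

from itertools import product

# Build the standard-order sign table for a 2^3 factorial: Factor A's sign
# alternates fastest, then B, then C (yielding (1), a, b, ab, c, ac, bc, abc).
rows = []
for c, b, a in product([-1, 1], repeat=3):   # the last unpacked value varies fastest
    rows.append({"A": a, "B": b, "C": c})

def code(row):
    """Treatment-combination code: a lower-case letter for each factor at its high level."""
    letters = "".join(f.lower() for f in "ABC" if row[f] == 1)
    return letters if letters else "(1)"

print("run    A   B   C  AB  AC  BC ABC")
for row in rows:
    a, b, c = row["A"], row["B"], row["C"]
    signs = [a, b, c, a * b, a * c, b * c, a * b * c]   # interaction signs are products
    print(f"{code(row):>4} " + " ".join(f"{s:+3d}" for s in signs))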
With the data arranged as in Table 3.7, the factor effects and interaction effects can
be calculated using the following formula: Effect = Contrast/2^(k−1), which for the three factors here is Contrast/4.
A = (−80 + 74 − 77 + 70 − 84 + 81 − 87 + 74)/4 = −7.25
B = (−80 − 74 + 77 + 70 − 84 − 81 + 87 + 74)/4 = −2.75
C = (−80 − 74 − 77 − 70 + 84 + 81 + 87 + 74)/4 = 6.25
AB = (+80 − 74 − 77 + 70 + 84 − 81 − 87 + 74)/4 = −2.75
AC = (+80 − 74 + 77 − 70 − 84 + 81 − 87 + 74)/4 = −0.75
BC = (+80 + 74 − 77 − 70 − 84 − 81 + 87 + 74)/4 = 0.75
ABC = (−80 + 74 + 77 − 70 + 84 − 81 − 87 + 74)/4 = −2.25
These calculations can be conveniently made on Table 3.7 and the results recorded
in the two bottom rows, as shown in the table. To check if the effects are significant,
we first estimate the experimental error by pooling the variances from within each
treatment combination, using the formula:
S² = [1/(2ᵏ(n − 1))] Σ Σ Σ (y_ijk − ȳ_ij.)², which gives:
S² = (1/8){(78 − 80)² + (82 − 80)² + … + (72 − 74)² + (76 − 74)²} = 5.75
with a standard error of an effect:
s.e. = √(S²/(2^(k−2) n)) = √(5.75/4) = 1.2
The approximate 95% confidence interval for each effect is then Effect ± 2(1.2), or Effect ± 2.4, giving: A: (−9.65, −4.85); B: (−5.15, −0.35); C: (3.85, 8.65); AB: (−5.15, −0.35); AC: (−3.15, 1.65); BC: (−1.65, 3.15); ABC: (−4.65, 0.15).
3.3.2.10 Interpretation of the Results Looking at the confidence intervals, we see that
all three main effects, A, B, and C, are significant because there is no 0 included in
any of those intervals. The AB interaction is also significant. No other interaction is
significant. The responses from the trials are presented on the design cube in Figure
3.10. If not for the significant AB interaction, we could have pursued the analysis
further to locate the optimal combination of factor levels that gives the best product performance.
(Figure 3.10: responses from the 2³ experiment displayed on the design cube: (1) = 80, a = 74, b = 77, ab = 70, c = 84, ac = 81, bc = 87, abc = 74; A: blade angle, B: deck height, C: chute location.)
Because of the significant AB interaction, we have to limit our search for
the best treatment combination to those at which trials were performed.
From the graph in Figure 3.10, we see that the lower-level chute location has uni-
formly smaller mowing time. This can be seen at the four corners of the side of the
cube nearest to the viewer, with (1), a, b, and ab treatment combinations at the cor-
ners. On this side of the cube, we can see that an increase in levels of Factors A and B
decreases the mowing time, and together they reduce the mowing time even further.
All this points to the corner with the Treatment Combination ab, where the mowing
time is 70 minutes. Thus, the best choice of parameters would be as follows: blade angle high, deck height high, and chute location low.
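The seven effects just calculated can also be obtained programmatically from the corner averages and the sign rule. The sketch below is our own illustration; each effect is its contrast divided by 2^(k−1) = 4.

# Average responses (minutes) at the eight treatment combinations of the 2^3 design
response = {"(1)": 80, "a": 74, "b": 77, "ab": 70, "c": 84, "ac": 81, "bc": 87, "abc": 74}

def effect(name):
    """Effect = contrast / 2^(k-1). The sign of a treatment combination in the contrast
    is the product, over the factors named in the effect, of +1 (factor at high level,
    i.e., its letter appears in the code) or -1 (factor at low level)."""
    contrast = 0
    for combo, y in response.items():
        sign = 1
        for factor in name.lower():
            sign *= 1 if factor in combo else -1   # "(1)" contains no factor letters
        contrast += sign * y
    return contrast / 4                            # 2^(k-1) = 4 for k = 3 factors

for name in ["A", "B", "C", "AB", "AC", "BC", "ABC"]:
    print(name, effect(name))
# Prints A = -7.25, B = -2.75, C = 6.25, AB = -2.75, AC = -0.75, BC = 0.75, ABC = -2.25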
3.3.2.11 Model Building One of the important aspects of post-experiment data analy-
sis is model building and exploring the treatment combination space for the optimal
treatment combination that produces performances even better than those obtained
in the experimental trials. A mathematical model is postulated from the test results
to represent the response as a function of the factor levels. This model is then used
to predict the response at any treatment combination—including those at which the
experiment was not run. Using these predicted values, the treatment combination that
produces the best response can be located. This line of investigation, searching on the
response surface for the combination of treatment levels that produces the best pos-
sible response, is called the “response surface methodology.” The reader is referred to
books on the design of experiments, such as Box et al. (1978) for more details on this
method.
A model for the above three-factor experiment would be:
y = β0 + β1x1 + β2x2 + β12x1x2 + β3x3 + β13x1x3 + β23x2x3 + β123x1x2x3 + ε
which proposes that a value of the response y from the experiment is made up of an
overall mean β0 modified by effects from Factors 1, 2, and 3 (for Factors A, B, and C,
respectively) and their interactions, and an error term ε. For 23 designs, the estimate for
β0 is the overall average of the responses from all eight treatment combinations, and esti-
mates for β1, β2,… are equal to half the effects of factors A, B, ... (Montgomery 2001).
Thus, for the experiment with the lawnmower, the model would be:
ŷ = 78.375 − 3.625x1 − 1.375x2 − 1.375x1x2 + 3.125x3 − 0.375x1x3 + 0.375x2x3 − 1.125x1x2x3
The above model can also be used to verify if the assumptions regarding the error
in the responses—that they are independent and follow N(0, σ2)—are true. For this,
the error term for a treatment combination is estimated as the difference between the
actual value obtained from the experiment for the treatment combination and the
predicted value from the above model. This difference, called the “residual,” is calcu-
lated for all observed values of the response and is then analyzed for assumptions of
normality and independence. The residuals are also plotted against different factor
levels and studied to see if they remain constant over the levels of the factors, for all
factors.
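As a sketch of how such a model can be used (our own illustration; the coefficient values follow the rule stated above, the intercept being the grand average and each coefficient half the corresponding effect), the code below predicts the response at coded factor levels and computes residuals at the eight corners of the design.

# Coefficients for the lawnmower 2^3 experiment: beta0 = grand average of the
# eight corner averages; the other betas are half the corresponding effects.
beta = {"0": 78.375, "1": -3.625, "2": -1.375, "3": 3.125,
        "12": -1.375, "13": -0.375, "23": 0.375, "123": -1.125}

def predict(x1, x2, x3):
    """Predicted mowing time at coded factor levels (-1 or +1, or anywhere in between)."""
    return (beta["0"] + beta["1"] * x1 + beta["2"] * x2 + beta["3"] * x3
            + beta["12"] * x1 * x2 + beta["13"] * x1 * x3 + beta["23"] * x2 * x3
            + beta["123"] * x1 * x2 * x3)

# Observed corner averages in standard order with their coded levels (x1, x2, x3)
observed = {(-1, -1, -1): 80, (1, -1, -1): 74, (-1, 1, -1): 77, (1, 1, -1): 70,
            (-1, -1, 1): 84, (1, -1, 1): 81, (-1, 1, 1): 87, (1, 1, 1): 74}

for levels, y in observed.items():
    residual = y - predict(*levels)   # ~0 here because the full model fits the averages
                                      # exactly; with individual replicate observations the
                                      # residuals would show the within-cell deviations
    print(levels, round(predict(*levels), 3), round(residual, 3))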
Such studies using a model to search for the global optimum treatment combina-
tion are possible only if there is no significant interaction among the factors. If there is any
significant interaction effect, the choice of the best combination should be limited to
those at which the trials were conducted.
We have discussed above an important design (the 2³ factorial) that is a very useful
design in industrial experimentation. If only three parameters are considered by the
experimenter as being important in determining the level of a quality characteristic,
this design can be used to determine the optimum levels at which these parameters
must be set in order to obtain the desired level of the quality characteristic. At least
two replicates are needed if an estimate of the experimental error is to be obtained
in order to determine the significance of the effects. When the number of factors is
more than three—say, for example, it is four—the number of treatment combinations
becomes 16. If two replicates are needed, the number of trials needed is 32, which
is too large considering the time and other resources needed to complete the trials.
If the experiment is to be conducted as the production is going on, it will be consid-
ered a serious interruption to production activity. Statisticians have therefore created
methods to estimate the experimental error even with one replicate at each treatment-
combination. These are discussed in Chapter 5.
When the number of factors becomes even larger—for instance, seven—the num-
ber of treatment combinations becomes 2⁷ = 128, which is prohibitively high in terms
of practical considerations. Then, a fraction—say, one-half or one-fourth—of the tri-
als needed for a full factorial is run, with the trials being chosen judiciously to gain
all useful information while sacrificing some information that may not have much
practical importance. These are called “fractional factorial designs.”
The 2ᵏ designs and the fractional designs derived from them are known as “screen-
ing designs” and are used to find out which subset of all factors in a given context are
important in terms of their influence on the response. Another experiment is then
run with the chosen subset, usually consisting of three or four factors, to identify the
best combination of levels of the chosen subset of factors. Some additional discussion
of the designs, including the fractional factorials, can be found in Chapter 5, and the
reader is referred to references given at the end of this chapter for further details on
such experimental design.
3.3.2.12 Taguchi Designs Taguchi designs, which have become very popular among
engineers, are of the fractional factorial type, in which the designs are provided as
orthogonal arrays, which are in fact the design columns (the columns of signs for the factors) that we saw in Table 3.7. Whereas traditional designs handle noise by random-
izing the experimental sequence and expecting that their effect is thus neutralized,
Taguchi designs seek to deliberately vary the noise factors and study their effect on
the response. More specifically, the best treatment combination is chosen as the one at
which the effect of the noise factors is minimum. This is done by using what is called
the “signal-to-noise” ratio, which roughly equals the ratio of an effect to its standard
error. The treatment combination that produces the largest value for the signal-to-
noise ratio will be the best treatment combination. As mentioned earlier, fractional
factorials save on experimental work but result in the loss of some information. When
choosing fractional factorials, one must be careful not to lose important information
for the sake of economy of experimental work; otherwise the results may provide
inadequate or even false information. At this point, we wish to make it clear that the
Taguchi designs are advanced designs that should be used only by those who under-
stand the fundamentals of designed experiments and are aware of the advantages and
shortcomings of using these designs.
3.3.3 Tolerance Design
3.3.3.1 Traditional Approaches Traditionally, tolerancing has been done based on the
experience of designers regarding what has worked for them satisfactorily. Experienced
designers have documented the tolerances that have worked well for different manu-
facturing processes. These selections have also been considerably influenced by what the manufacturing processes in use have been able to achieve.
These tolerances will be adequate and appropriate to use for a new product if the
production machinery used for producing the characteristic has the same capability
as those assumed in obtaining the standard tolerances. Here, “capability” refers to the
variability around a target with which a machine or a process is capable of producing
the characteristic. This capability is measured using the standard deviation (σ) of the
characteristic produced by the machine or process. In general, these tolerances are
calculated as T ± 3σ, where T is the target and the 3σ spread is chosen on the assump-
tion that the characteristic follows the normal distribution and most variability occurs
within 3σ distance on either side of the target.
The subject of process or machine capability is discussed in more detail in Chapter 4,
in which the measures for quantifying process capability, such as Cp and Cpk, are
defined. These indices illustrate how well a machine or process is capable of holding
the process variability within a required set of limits. For now, we only note that the
tolerance selected for a characteristic should be related to the capability of the machine
or process by which the characteristic is produced. For design engineers to assign
rational, feasible tolerances, they should have information on the capabilities of the
machines and processes that are used to create the characteristics. When the capabil-
ity of a process to produce a characteristic is known through the standard deviation—
say, σ0 —of the characteristic, then a reasonable tolerance for the characteristic would
be T ± 3σ0, where T is the nominal or target.
When design engineers choose tolerances without knowing whether the available processes can hold them, the production people become frustrated, and the tolerances lose their meaning and are likely to be violated or ignored. If tolerances tighter than what the current processes can hold are needed, process improvements must be made to reduce the output variability before the tighter tolerances are demanded. This requires a collaborative effort by design and manufacturing engineers.
Figure 3.11 Traditional and Taguchi’s views of loss from a characteristic not on target.
3.3.3.3 Assembly Tolerances Many situations arise in industrial settings, where toler-
ance for an assembly has to be calculated from the tolerances of components. For
example, the track for a bulldozer is made up of 50 links. The tolerances of the links
are known, and we may want to know the tolerance that the assembled track will
meet. Another example would be finding the tolerance on the clearance between a
bore and a shaft given the individual tolerances on the bore and shaft diameters. In
situations like these, when we need the tolerance on the sum (or difference) of a certain number of characteristics, we need to use the root sum of squares (RSS) formula.
3.3.3.4 The RSS Formula Suppose L1 and L2 are the lengths of two components (see
Figure 3.12a) and L = L1 + L2. Let t1 and t 2 be the tolerances on L1 and L2, respectively.
(Figure 3.12: (a) two components of lengths L1 and L2 joined end to end to give a total length L = L1 + L2; (b) an arrangement in which L = L1 − L2.)
Then the tolerance on L is given by tL = √(t1² + t2²). We need the assumption that L1 and L2 are independent. Similarly, in Figure 3.12b, L = L1 − L2, and if t1 and t2 are the tolerances on L1 and L2, respectively, then, assuming independence between L1 and L2, the tolerance on L is also given by tL = √(t1² + t2²). Notice that the “+” sign appears inside the radical of both formulas, irrespective of whether L is the sum or the difference of L1 and L2.
This is based on the following results for linear combinations of normally distributed
random variables. Suppose that X1 ~ N(µ1, σ1²) and X2 ~ N(µ2, σ2²), and that X1 and X2 are independent. It can then be shown (Hines and Montgomery 1990) that:
(X1 + X2) ~ N(µ1 + µ2, σ1² + σ2²)
(X1 − X2) ~ N(µ1 − µ2, σ1² + σ2²)
Note that the variances add up for both the sum and the difference of the random vari-
ables. Hence, the standard deviation of the sum (or the difference) of the random variables is:
σ_(X1 ± X2) = √(σ1² + σ2²)
Tolerance is usually taken as ±3σ, or some multiple of the standard deviation, for
both the components and the assembly, thus:
t_(X1 ± X2) = √(t1² + t2²)
The more general RSS formula can be stated as follows:
Suppose that L1, L2, …, Ln are lengths of n independent components, and their assembly length is given by L = ±L1 ± L2 ± … ± Ln. If t1, t2, …, tn are the tolerances on the individual lengths, then the tolerance on the total length tL is given by:
tL = √(t1² + t2² + … + tn²)
This rule can also be used for other measurements such as weights, heights, and
thicknesses.
(Figure 3.12c: the assembly of Example 3.11, in which part D measures 4.5 ± 0.02 in.)
Example 3.10
A track for a tractor is made up of 120 links. Each link is produced with a nominal
length of 12 in. and a tolerance of t = ±0.125 in.
a. If the links meet their tolerance, what tolerance will the tracks meet?
Assume that the links are independent and normal.
b. If the specification for the total length of track is 1440 ± 1 in., what propor-
tion of the track lengths will be out of specification? Assume that the link
lengths are independent and normal, and meet their tolerance. The toler-
ance is chosen as three times the standard deviation.
c. If the tolerance on the total track length should not be more than ±1 in.,
what should be the maximum tolerance on the links?
Solution
a. If t is the tolerance on the total track, then t = √(120 × 0.125²) = 1.37 in. Hence, the tracks will meet the specification 1440 ± 1.37 in.
b. Taking the tolerance as three times the standard deviation, the standard deviation of the track length is 1.37/3 = 0.456 in. The proportion of tracks outside the specification 1440 ± 1 in. is 1 − P(−1/0.456 ≤ Z ≤ 1/0.456) = 2[1 − Φ(2.19)] = 0.0285, or about 2.9%.
c. For the track tolerance to be no more than ±1 in., 1 = √(120 × t²), so
t = 1/√120 = 0.091 in.
Therefore, the specification for links should be 12 ± 0.091 in.
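The RSS computations of Example 3.10 can be scripted in a few lines. The sketch below is our own illustration (the helper function name is not from the text); it stacks the 120 link tolerances, estimates the proportion of tracks outside a ±1 in. specification by taking the tolerance as 3σ, and finds the link tolerance needed to hold the track to ±1 in.

from statistics import NormalDist

def rss_tolerance(tolerances):
    """Root-sum-of-squares tolerance of an assembly of independent components."""
    return sum(t ** 2 for t in tolerances) ** 0.5

track_tol = rss_tolerance([0.125] * 120)        # (a) natural tolerance of the 120-link track
print(round(track_tol, 3))                       # ~1.37 in.

sigma_track = track_tol / 3                      # tolerance taken as 3 sigma
spec = 1.0                                       # (b) specification of +/- 1 in. on the track
p_out = 2 * (1 - NormalDist(0, sigma_track).cdf(spec))
print(round(p_out, 3))                           # ~0.028, consistent with the 2.9% in part (b)

link_tol = (spec ** 2 / 120) ** 0.5              # (c) link tolerance needed for +/- 1 in.
print(round(link_tol, 3))                        # ~0.091 in.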
Example 3.11
An assembly (as shown in Figure 3.12c) calls for a tolerance on the “gap.” Assume
that the parts A, B, C, and D are manufactured independently and have the nomi-
nal and tolerance in inches, as shown.
Solution
a. The nominal value of the gap is 0.25 in.
b. σA = σB = σC = 0.01/3 = 0.0033
σD = 0.02/3 = 0.0066
Therefore, σgap = √(3 × 0.0033² + 0.0066²) = 0.0087.
The specification limits for the gap are given as 0.25 ± 0.02. The proportion out of
specification is:
1 − P[0.23 < Gap < 0.27], which can be found as follows.
Since the average of the gap is 0.25 and σgap = 0.0087,
Proportion out of spec = 1 − P(−0.02/0.0087 ≤ Z ≤ 0.02/0.0087) = 1 − [Φ(2.3) − Φ(−2.3)] = 1 − 0.9786 = 0.0214
Therefore, 2.14% of the assemblies will have gaps that are either too small or too
large.
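Calculations like the one above can be checked quickly with the standard normal distribution function. The following is a minimal Python sketch using scipy.stats.norm and the rounded standard deviations from Example 3.11; the result differs slightly from 0.0214 because the hand calculation rounds z to 2.3.

from scipy.stats import norm

# Standard deviations implied by the part tolerances (tolerance = 3 sigma)
sigma_gap = (3 * 0.0033 ** 2 + 0.0066 ** 2) ** 0.5   # ~0.0087

nominal, tol = 0.25, 0.02          # gap specification: 0.25 +/- 0.02
z = tol / sigma_gap                # ~2.29 standard deviations to each limit

p_out = 1 - (norm.cdf(z) - norm.cdf(-z))
print(f"sigma_gap = {sigma_gap:.4f}, proportion out of spec = {p_out:.4f}")
# ~0.022, in agreement with the 2.14% obtained by hand, within rounding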
Failure mode and effects analysis (FMEA) is a technique used to investigate all
possible weaknesses in a product design or process design and prioritize the weak-
nesses in terms of their potential to cause product failure or process failure. This
prioritization enables taking corrective action on the most critical weaknesses in
order to reduce the overall likelihood of failure of the product/process. When this
technique is used at the stage of evaluating the design of a product, it is called the
“design FMEA.” When it is used during the design of a process, it is called the
“process FMEA.”
The FMEA is illustrated using an example, where it is used for evaluating a prod-
uct design. The FMEA is typically done by a multifunctional quality planning team.
The design engineer who is responsible for the product design will have the major
responsibility for gathering the team and conducting the design FMEA. The FMEA study should start with a block diagram of the product in question that shows the functional relationship of the parts and assembly.

Figure 3.13 Example of a design FMEA study. (Reprinted from Chrysler LLC, Ford Motor Company, and General Motors Corporation. 2008b. Potential Failure Mode and Effects Analysis (FMEA)—Reference Manual. 4th edition. Southfield, MI: A.I.A.G. With permission.)
Figure 3.13 shows a design FMEA for the front doors of an automobile. The analy-
sis relates to the design features to prevent failure of the door panel from rust and
corrosion. Referring to the identifying letters shown in the figure, items A through
G relate to information on the product name, designer name, date, reference number,
and so on, which are meant for documenting traceability. Items a1 and a2 (shown at
bottom of columns) give a full description of the part and its functional requirement.
Item b is a potential failure mode, one of several modes to be considered.
Item c is the description of the possible effects of the failure mode in using the prod-
uct. Item d is the “severity” column, where the severity of the effect that the failure
mode will cause to the end user, next component, or subsystem, is recorded. Severity
is estimated on a scale of 1 to 10, with 10 being the greatest severity. If the failure
of the part due to the mode in question will result in unsafe vehicle operation—thus
causing injury to the customer—then the severity rating will be 10. If the failure will
have no effect on the proper functioning of the assembly and is not likely to cause any
inconvenience to the customer, then the rating will be 1. In this example, a severity
rating of 5 had been given, possibly because the rust may result in the poor function-
ing of locks and other hardware in the door. Item e is for a special classification by
automotive manufacturers to identify those characteristics that may require special
process control in subsequent processing of the part. Such characteristics may relate to
emission control, safety, or other government ordinances.
Item f lists the possible causes that are responsible for the failure-mode in question.
Item g is the “occurrence” column, where the frequency of occurrence of the failure
due to the different causes is rated on a scale of 1 to 10. The occurrence of a cause
relates to the number of failures that may occur because of the particular cause during
the design life of the equipment (e.g., 100,000 miles for a car). One possible scale: rate
the occurrence as 10 if failure occurs in one in every two vehicles from this cause and
rate the occurrence as 1 if the occurrence is less than one in one million vehicles. The
rating for occurrence can be educated estimates based on data obtained from previous
model experience or warranty information. The rating standard must have the agreement of the planning team and should be consistent.
Item h shows the current control activities pertaining to the cause of the failure-mode
that are already incorporated in the design. The information on current activities is
taken into account while rating the failure mode in the next column, item i, on “detectability.” If the current control activity is certain to detect the failure due to a particular
mode, then the rating will be 1. If the current activities will not—or cannot—detect
the failure, then the detectability rating will be 10. Item j, the “risk priority number”
(RPN), is the product of ratings for severity (S), occurrence (O), and detectability (D).
RPN = S × O × D
The RPN will range from 1 to 1000 and is used to prioritize the various modes of failure and, within each mode, the various causes of failure. The larger the RPN for a particular cause or mode, the greater the risk that failure will occur due to that cause or mode. Organizations set threshold limits for the RPN and take action when the RPN exceeds the limit. Whenever an RPN greater than the threshold number is encountered, suitable action is taken to reduce the RPN below the threshold value. Items k through m show what action is planned and who has the responsibility to see that the action is completed. Teams also set schedules by which the actions must be completed. Item n is the new RPN for the failure mode and respective causes after the recommended actions have been completed.
The above example shows the analysis for only one failure mode, along with the causes contributing to that failure mode. For any product or component, however, there will be a number of failure modes; each one must be analyzed as described above, and the high-RPN modes must be addressed with a suitable response.
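The RPN arithmetic itself is trivial; the value of the exercise lies in applying it consistently across every mode and cause. As an illustration only, the short Python sketch below (with made-up severity, occurrence, and detection ratings and an assumed action threshold of 100) ranks a list of causes by RPN and flags those requiring action.

from dataclasses import dataclass

@dataclass
class FailureCause:
    description: str
    severity: int    # 1 (no effect) to 10 (most severe)
    occurrence: int  # 1 (very rare) to 10 (very frequent)
    detection: int   # 1 (certain to detect) to 10 (cannot detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical ratings, loosely patterned after the door-corrosion example
causes = [
    FailureCause("Insufficient wax thickness specified", 6, 3, 7),
    FailureCause("Inappropriate wax formulation specified", 6, 2, 5),
    FailureCause("Corner design prevents spray from reaching all areas", 6, 5, 7),
]

THRESHOLD = 100  # assumed organizational threshold for taking action
for cause in sorted(causes, key=lambda c: c.rpn, reverse=True):
    flag = "action required" if cause.rpn > THRESHOLD else "ok"
    print(f"{cause.description:55s} RPN = {cause.rpn:4d}  ({flag})")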
The FMEA technique helps in identifying possible failure modes and the possible causes of those failure modes, and enables prioritizing them on a rational basis so that those needing attention can be selected. It provides a format for a design engineer
to think together with a team, proactively anticipate possible failures, and implement
improvements to the design. Thus, the FMEA method:
1. Facilitates concurrent design of products, because it brings several functions
together;
2. Contributes to a better quality product;
3. Keeps the customer in focus during the design activities;
4. Reduces future redesigns; and
5. Provides documentation of the improvement activities.
3.3.5 Concurrent Engineering
Concurrent engineering (CE) is the design approach that calls for the design activity
to include several related functions, such as manufacturing engineering, production,
quality, marketing, customer service, customers, and suppliers, in order to obtain their
inputs early in the design process and incorporate them into the product design. This
is in contrast to older models followed in design work, termed “sequential engineer-
ing” (SE), in which the designers worked in isolation, prepared designs, and passed
on the blueprints to manufacturing engineers “over the wall,” who in turn would
make the process designs sitting in their cells and then pass them on over-the-wall
to the production people. If any drawbacks were discovered by the production people
regarding manufacturability or any other aspect of the design, the documents would
travel the same course backward, over the walls, to the designers, who would make
the appropriate corrections and then send the designs back to production over the
same course. Such back-and-forth transmission of documents caused delays, made
the changes costly, extended the time for completion of the designs, and delayed the
production and delivery of the product to the market.
Under the CE model, representatives from all the functions related to design work
are assembled in a team, the members of which review the designs from their indi-
vidual perspectives even as the design is being prepared, and provide timely feedback.
The designers would incorporate necessary changes based on the feedback before
finalizing the drawings. The manufacturing engineers, for example, would start on
process designs, using the preliminary information provided to them; and while the
product design is still under preparation, they would suggest changes to the product
design if they encountered any difficulties while making the process design. Thus, the
product design and process design can proceed concurrently, with information being
shared mutually. This type of design activity—which contributes to the simultaneous
progression of product design, process design, quality planning, and other related
activities—is called CE. This approach has produced some dramatic results (Clausing
1994; Syan and Menon 1994; Skalak 2002), including:
1. Improved product quality;
2. Reduced product cost;
3. Reduced time to market;
4. Reduction in number of redesigns;
5. Reduced cost of design;
6. Improved customer satisfaction;
7. Increased profitability; and
8. Improved team spirit among employees.
Of all the benefits, the most cited is the reduced time to market, which is the time from the definition of the product to the delivery of the first unit. Studies (Skalak
2002) have shown that the manufacturers of products, from telephones to airplanes,
have recorded 30% to 80% reductions in time to market through the use of CE prin-
ciples. The oft-quoted example of success in the use of CE concepts is the story of the design of the Taurus/Sable models made by Ford in the early 1980s. Using the new approach, the Ford design team achieved not only a reduction in development time, fewer engineering changes, a reduction in time to market, and an improvement in product quality, but also a car that was declared one of the world’s 10 best cars for three years in a row, between 1986 and 1988 (Ziemke and Spann 1993).
CE is also a very important component of the Toyota Production System, also called the Lean Production Method. How Toyota improved the quality of their cars, reduced the time to market, and reduced the total hours for design using CE methods in the 1980s is legendary (Womack, Jones, and Roos 2007).
The CE methodology makes use of several tools, such as QFD, design and process
FMEA, designed experiments, reliability analysis, design for manufacturability and
assembly, and design reviews, along with computer-aided tools for design and integra-
tion. An extensive computer-based communication network for exchanging data, text,
and drawings among the members of the design team is an important component of
CE work.
Many of the tools mentioned above are covered in this chapter as well as in other chap-
ters of this book. Design for manufacturability and design reviews are explained below.
3.3.5.2 Design Reviews Design reviews are regularly scheduled meetings of a review
team organized by design engineers. These meetings include representatives from
manufacturing, quality, materials, suppliers, and customers, to review designs and
monitor progress. The reviews are made to verify:
1. Functional requirements to meet customer needs;
2. Quality and reliability goals;
3. Design FMEA study results;
3.4 Process Design
Manufacturing engineers design the processes to convert the raw material into the
finished product. As mentioned earlier, they begin their work even when the product
design is still progressing, and they complete it after the product design is completed.
Among the outputs of the process design are several quality-related outcomes:
1. Process flow chart
2. Process parameter selection
3. Floor plan layout
4. Process FMEA
5. Process control plan
6. Process instructions
7. Packaging standards
8. Preliminary process capability studies
9. Product and process validation
The process flow chart (PFC) is a schematic representation of the operations, or activi-
ties, starting from raw material and leading to the production of the final product.
The PFC is an important planning document used by manufacturing engineers in
selecting appropriate machinery, methods, and measurement tools. It provides a per-
spective on the flow of activities in a production process and, thus, facilitates planning
for quality-related activities as well. Therefore, it forms the central tool in making the
quality plan for a product. Making the PFC is also referred to as “process mapping” in the context of value-stream analysis in Lean Manufacturing.
The PFC is an important communication tool among the process designers, includ-
ing the quality planners, and a convention exists for the use of symbols to denote the
various activities involved in a process:
The quality planning team describes the checks and control activities to be performed
during production at suitable points in the process flow chart. The square inspection
boxes mark the major control activities to be performed. It could be a quality charac-
teristic of the product to be checked at certain critical points, or it could be a critical
process variable that influences an important product characteristic that must be con-
trolled. Determining when to measure a product characteristic, or a process variable, is
an important decision in planning for quality. One rule is to verify the condition of the
product before any critical or expensive operation is to be performed. Another rule is to
verify the product condition after a critical characteristic has been created. Of course,
there should be final inspection at the end. The process FMEA, which is discussed
later in this chapter, will also disclose the critical product or process variables that must
be checked, and indicate where in the process flow they must be checked.
Figure 3.14 is an example of a PFC for making molds for castings from sand, resin,
and catalyst. Notice how the inspection and control to be performed during the pro-
cess are indicated on the chart.
It must be pointed out that there are many versions of the symbols and language used for making the PFC, depending on the consultants offering the training and the software they recommend. We have used the basic language for creating the PFC, which underlies all the higher-level procedures.
Figure 3.14 Process flow chart for making molds for castings from sand, resin, and catalyst. (Legend: V = visual inspection; M = measurement; C = control chart; K# = key process variable.)

Selecting the levels at which the process variables are to be maintained during the production of a product is perhaps the most important part of process design in terms of assuring the quality of the final product. In practice, this selection is often made based on previous experience with the same or a similar process, which might have been arrived at through a formal, or—most probably—informal, seat-of-the-pants experimentation carried out by individuals sometime during the history of the process. The production of unacceptable goods, causing chronic waste in production shops or rejection and return by customers, is often the result of process variables selected in an ad hoc manner, without experimentation or with unplanned experiments that do not generate the best combination of levels for process variables. Experimentation using designed experiments is the best course to determine the best combination of levels of process variables.

A case study presented below shows, in some detail, how an experiment is conducted on a production process, and gives an idea of the logistical problems involved in executing a designed experiment on a running process and of how they are handled. Although this case study relates to an experiment performed to improve an existing process, many of the steps are also applicable to designing a new process.
The improvement team decided in a brainstorming session that the following three process variables would be the factors in the experiment to solve the blister problem and improve the existing process. The team also determined the levels for the factors as shown:
FACTOR                       LOW LEVEL    HIGH LEVEL
A: Drying oven temp.         350°F        400°F
B: Sand for port-cores       Silica       Lake
C: Flow-off vents            Yes          No
A plan was needed to make sure that the molds received the appropriate treatments at the proper workstations. The plan shown in Figure 3.15 and Table 3.8 helped in planning the logistics of the experiment.
The plan makes use of the serial numbers assigned to the molds, which are writ-
ten outside and etched inside the molds in order to provide traceability between
a mold and its casting. The mold numbers are shown in square brackets in Figure 3.15. In this plan, we should note that each level of a factor is combined
with each level of every other factor.
A total of 160 molds were needed for the experiment, and they were divided into groups of 20 and assigned to the different treatment combinations, as shown in Figure 3.15.

Figure 3.15 Plan for assigning the 160 molds, by mold number, to the eight treatment combinations of oven temperature, sand, and flow-off vents (20 molds per combination).

Copies of Table 3.8, which was prepared from Figure 3.15, were handed to the workstation operators in a meeting organized to explain the details of the experiment and the importance of making sure the right treatment was given to the molds when they arrived at each workstation. When all
the molds were prepared according to the plan, iron was poured into them with
the utmost consistency regarding iron temperature and other pouring practices.
(When the process output is tested against certain factors, it is absolutely essential
that other factors that might affect the output are controlled at constant levels as
far as is practical.) When the iron cooled and the castings were cleaned the next
day, the castings were cut at appropriate places and were evaluated for blisters, and
the numbers of defective castings in each of the treatment combinations were tal-
lied. The proportions of defectives at each treatment combination were computed
and the results presented in a design cube, as shown in Figure 3.16.
Figure 3.16 shows that a treatment combination exists that produced 0%
defectives, whereas all others at which trials were conducted produced some
defectives. The “current practice,” which is represented by the treatment combi-
nation where all factors are at the low level, produced a 39% defect rate. Although
the quality engineers were hesitant to conclude that the treatment combination
showing 0% defectives had indeed produced no defectives, because of the pos-
sible variability in the outcome, the production people wanted to run the process
immediately at the newly discovered treatment combination having the potential
to produce “perfect” quality. A trial run was ordered for one full day at the “best”
treatment combination. When the trial run confirmed the experimental results,
the process designer changed the process specs for regular production. The blis-
ter problem disappeared.
Figure 3.16 Design cube showing the proportion of defectives at each of the eight treatment combinations of oven temperature (350°F, 400°F), sand (silica, lake), and flow-off vents (yes, no). The current-practice corner shows 39% defectives; one corner shows 0%.
In the case study above, no further analysis was made after finding the treat-
ment combination that provided the best possible result that could be expected. If
such clear-cut results are not forthcoming and the results must be analyzed for the
effect of individual factors or their interactions, then calculation of the effects can be
made, as was done in the example of the lawnmower design earlier in this chapter.
Verifying the significance of an effect, however, has to be done keeping in mind that
the response here is an attribute, not a measurement, and that there are no replicates.
The best approach would be to assume that the proportions of defectives at the treatment combinations are approximately normally distributed, on the assumption that the sample sizes are large, and then make a normal probability plot of the effects, as discussed in Chapter 2. The significant effects will show as outliers and can be identified as such.
Adjustments to the process design can then be made accordingly to obtain the desired
results.
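As a sketch of how that analysis might be coded, the Python fragment below computes the main and interaction effects of an unreplicated 2^3 experiment from the eight observed proportions of defectives (the values used here are made up, listed in standard order) and draws a normal probability plot of the seven effects; effects that fall well off the straight line would be judged significant.

import itertools
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical proportions of defectives for the eight runs of a 2^3 design,
# in standard (Yates) order: (1), a, b, ab, c, ac, bc, abc
y = np.array([0.39, 0.31, 0.27, 0.18, 0.09, 0.15, 0.05, 0.00])

# Build the -1/+1 columns for factors A, B, C in standard order (A varies fastest)
runs = np.array(list(itertools.product([-1, 1], repeat=3)))[:, ::-1]
A, B, C = runs.T

effects = {}
for name, col in [("A", A), ("B", B), ("C", C), ("AB", A * B),
                  ("AC", A * C), ("BC", B * C), ("ABC", A * B * C)]:
    effects[name] = float(np.sum(col * y) / 4)   # effect = contrast / 2^(k-1)

print(effects)

# Normal probability plot of the estimated effects
vals = np.array(sorted(effects.values()))
quantiles = stats.norm.ppf((np.arange(1, len(vals) + 1) - 0.5) / len(vals))
plt.scatter(quantiles, vals)
plt.xlabel("Standard normal quantile")
plt.ylabel("Estimated effect")
plt.title("Normal probability plot of effects (unreplicated 2^3 design)")
plt.show()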
The purpose of presenting the above case study is, firstly, to emphasize the need
for designing processes using statistically designed experiments; and secondly, to note
that the responsibility of a quality engineer in running designed experiments does
not end with drawing up the design on paper and then handing it to the production
personnel, giving them the responsibility to run the experiment. There may be occasions when the production personnel can conduct the experiment themselves because the process configuration is simple and changing the factor levels means only changing a few dial settings.
For example, if the experiment involves changing the temperature, pressure, and
amount of water added to a chemical reactor and then measuring an output at the
end of the reaction period all at one location, this experiment can be entrusted to
production operators. On the other hand, if the process configuration is such as in
the case study above, where treatments to the experimental units are given at different
places by different people, several opportunities exist for making mistakes because of
logistics and lack of communication. The quality engineer/team should participate in
the execution of such an experiment and make sure that participants both understand
the objectives of the experiment and follow the instructions meticulously. Then, if
mistakes occur, timely detection will prevent large-scale rejection of data or, worse,
making erroneous conclusions.
The facilities engineers prepare the floor plan layout to facilitate the smooth flow of
material and subassemblies toward the final assembly. The quality planning team has
to verify that sufficient room has been assigned for inspection stations, sample col-
lection and transmittal, control chart location, computer terminals for data gather-
ing and display, and suitable fixtures for displaying work instructions and inspection
standards. Some inspection and instrument calibration may have to be done under
controlled environmental conditions. The team should make sure that the plant layout
makes suitable space allocation for these requirements.
3.4.4 Process FMEA
The process FMEA is performed to assess the effect of possible process failure modes
on the quality of the product. How the FMEA is made has been explained earlier
in this chapter while discussing its use in product design. A process FMEA must be
performed for any new process, or for a current process scheduled to produce a new
product, in order to make certain that possible failures are considered and suitable
remedial actions are implemented. The process FMEA is also a good tool to use while
investigating the root causes for defectives produced in an existing process.
An example of a process FMEA is shown in Figure 3.17. The items identified by
letters in the figure have the same meaning as those for the design FMEA, except that
the modes and effects of failure now belong to a process. This process FMEA relates
to one failure mode—the lack of sufficient wax coating on the inside panel of a car
door, causing rusting of the door and poor functioning of the door hardware. Note
that FMEA analysis has been used to discover the causes of the failure mode and
prescribe solutions to rectify the cause, which, when implemented, result in smaller
values for the RPN.
The process control plan is a working document prepared using the information
from the PFC, FMEA, design reviews, and other tools employed for the process
design. The information from these documents is transferred in a manner that can
be easily used by the operating personnel. The process control plan is, in essence, the
consolidation of all process design information that can be handed off to the operating
personnel for use. Thus, it becomes the source of information for process operators for
creating all the product characteristics and keeping them in control. The process con-
trol plan is also a “living document” in the sense that it is continuously updated (by the
authorized process engineers) based on the changing requirements of the customer,
experience gained with the processes, or solutions discovered in resolving previous
problems.
Figure 3.17 Example of a process FMEA. (Reprinted from Chrysler LLC, Ford Motor Company, and General Motors Corporation. 2008b. Potential Failure Mode and Effects Analysis (FMEA)—Reference Manual. 4th edition. Southfield, MI: A.I.A.G. With permission.)
3.4.6.1 Process Instructions Process instructions are documents describing how each
operation in the process should be performed. Process instructions and control plans
complement each other in defining how operations are to be performed. The process
instructions specify the optimal sequence in which motions should be made in order
to accomplish a job. The control plans give the quality-critical information. The pro-
cess instructions are also useful when training new operators on the job. These have
to be prepared out of information obtained from PFCs, control plans, and the recom-
mendations of equipment manufacturers. The experience gained in past operations by
the operators must be used in finalizing the instructions.
3.4.6.4 Product and Process Validation This is the step before the product is launched
into production, when the process is tried out to check if it is capable of producing the
product according to design and in a manner to meet the customer’s needs. The process
will be tested at production-quantity levels, using production workers and production
tooling, in the normal production environment. The following are the quality-related
outcomes expected from such a trial run:
Process capability results
Measurement system (instrument capability) evaluation
Product/process approval
Feedback, assessment, and corrective action
3.4.6.5 Process Capability Results These are measured during the validation run and
are compared with expected or targeted benchmarks. Changes may have to be made
to the process if any shortfall occurs in the expected process capabilities or prod-
uct quality. Changes to process variables or additional process controls may have to
be implemented to achieve the target capabilities specified by customers. For mea-
surements, the capability is generally specified using Cp and Cpk. Customers usually demand Cp and Cpk values of 1.33 or greater. For attributes, capabilities are specified in defects per thousand (dpt) or defects per million (dpm). The target capability varies with the type of product, the type of attribute, and the needs of subsequent processes or of use by the customer. The most demanding requirement, applied when the use of the product or part calls for it, is 6σ capability (equivalent to Cpk = 2.0 for a centered process).
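For reference, the indices mentioned here are simple functions of the specification limits and the process mean and standard deviation. The Python sketch below (the specification and process values are assumed, for illustration only) computes Cp, Cpk, and the defects per million expected from a normally distributed, in-control process.

from scipy.stats import norm

def cp_cpk(lsl, usl, mean, sigma):
    """Capability indices for a normally distributed characteristic."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

def dpm(lsl, usl, mean, sigma):
    """Expected defects per million, assuming a normal distribution."""
    p_out = norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)
    return 1e6 * p_out

# Assumed example: specification 10.0 +/- 0.3, process centered at 10.0 with sigma = 0.05
cp, cpk = cp_cpk(9.7, 10.3, 10.0, 0.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")          # 2.00 and 2.00: a centered 6-sigma process
print(f"dpm = {dpm(9.7, 10.3, 10.0, 0.05):.4f}")  # ~0.002 dpm for the centered process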
3.4.6.8 Feedback, Assessment, and Corrective Action This is the stage when the effec-
tiveness of all the quality planning work in previous stages is verified. The objective
is to gather all the information together and assess the capabilities of the processes
for producing the product characteristics to meet customer satisfaction. The capabili-
ties of both measurements, as well as attributes, must be evaluated periodically, and
opportunities to make improvements, to reduce variability further, must be explored.
Additional effort must be made to satisfy the customer through help in using the
product in the customer’s environment. Customer feedback should be obtained in
order to review and revise the original design to improve their satisfaction even more.
The customer’s requirements on delivery and service must be sought and fulfilled.
Whenever possible, the customer’s participation in the design and planning activities
should also be encouraged.
3.5 Exercise
3.5.1 Practice Problems
3.1 A customer survey was administered to the same group of customers on two occasions. The average scores received for the questions in the two surveys are given below. Is the questionnaire reliable?

Question no.     1    2    3    4    5    6    7    8    9    10   11
First survey     4.2  2.7  3.9  2.8  4.8  5.0  1.8  3.1  4.1  2.2  3.7
Second survey    3.8  3.6  2.9  1.7  4.1  4.6  2.6  3.8  4.8  3.1  3.3
3.2 A customer survey contains two sets of questions, each set containing ques-
tions similar to those in the other set but worded differently. The average
scores received for the questions, with similar questions paired together, are
given below. Is the questionnaire reliable?
Similar Q #      1    2    3    4    5    6    7    8    9    10
Set 1            4.2  3.8  3.2  4.9  4.0  2.8  1.8  2.2  4.3  3.9
Set 2            2.7  4.5  1.6  3.2  1.6  4.3  2.0  3.5  4.0  4.1
3.3 Determine the sample size for a customer survey if the allowable error in
the estimates should not be more than ±0.3 and the desired confidence
level is 95%. The standard deviation of the individual responses is known
to be 0.7.
3.4 A survey is to be conducted among the alumni of an educational institu-
tion to obtain impressions of their experience when they were in school.
A questionnaire has been prepared that asks for their response against
questions on a scale of one to five. If the standard deviation of the scores
from similar past surveys is 0.7, and the present results are desired within
an error of ±0.2 with a 95% confidence level, how many alumni should be
surveyed?
3.5 The following data represent the life of electric bulbs in months. Check what
distribution the data follow by preparing a histogram and checking its shape.
Determine the MTTF. (Roughly, the MTTF for a population that has a normal distribution is given by X₅₀, and that for an exponential distribution is given by X₆₃.₂.)
3.6 Calculate the percentage frequency of failures, reliability, and failure rate for
the following set of data on failure of a production line. Draw the graphs of
these functions in the manner of Figure 3.4.
3.7 The following data represent the failure times in years of a sample of 10 bat-
teries of a certain make. The numbers shown with a superscript (+) are not
failure times; they show that those batteries were still working by the time
the experiment was concluded. Calculate the MTTF and failure rate of these
batteries. Assume that the battery life follows an exponential distribution.
1.0 2.3 4.4 5.4 6.1 11.8 12.9 13.7 13.7+ 13.7+
and choose the combination of tool angle and speed that produces the least
amount of vibration. Calculate the main and interaction effects and determine
if they are significant.
                          SPEED
                          1 (2600)      2 (3000)
Tool angle   1 (15°)      18.2, 14.4    14.5, 15.1
             2 (20°)      24.0, 22.5    41.0, 36.3
                     MOISTURE
                     3%         6%
Clay     1%          48, 44     56, 50
         3%          42, 46     52, 49
3.12 The following data come from an experiment to find the optimal combination
of process parameters to obtain the best yield from a process. The parameters
were Factor A, Factor B, and Factor C. The response, obtained in two rep-
licates, was the number of pieces produced per run. Calculate the main and
interaction effects, and estimate the standard error of the effects. Create the
confidence intervals, and comment on the results.
Treatment combination   (1)   A     B     AB    C     AC    BC    ABC
Replicate 1             226   322   350   553   442   406   608   399
Replicate 2             318   432   346   475   457   375   504   415
3.14 Twenty-pound boxes of nails are packaged in cartons, with 12 boxes per car-
ton. Each 20-lb. box has a tolerance limit of ±0.5 lb. What is the tolerance
on the carton weight? Assume that the box weights are independent and are
normal.
3.15 For a shaft-bearing assembly, the shaft diameters were found to have an aver-
age of 2.1 in., with a standard deviation of 0.12 in. The bearing bore diameters
were found to have an average of 2.2 in., with a standard deviation of 0.07 in.
What proportion of the assemblies will have a clearance of less than 0.01 in.?
Assume that the shaft and bearing dimensions are independent and normal.
3.16 The total weight of a cereal box includes the weight of the cereal and the
weight of the cardboard box. The total weight of the filled boxes is known to
meet the specification 20 ± 1.0 oz. If the empty boxes meet the specification
4 ± 0.5 oz, does the net weight of the cereal meet the tolerance 16 ± 0.8 oz?
3.17 Hundred-pound bags of alcohol (the solid, industrial variety) are stacked 25
per skid for shipping. The skids are weighed before shipping to check that
the bag weights meet the customer specifications. The customer specifications
call for 100 ± 4 lb per bag, so the material handler at the stacking operation
uses 2500 ± 100 lb as the specification for the skid weight (less the weight of
the wooden skid). The customer complains of receiving bags that are under-
weight. What would be the correct specification for the skid weight, assuming
that the bag weights are independent? (The company uses this procedure of
checking the skid weights as a quick way of checking the bag weights.)
3.18 The amount of time a maintenance person is out on a service call, including
travel, is 120 ± 25 minutes. The travel time between jobs is 60 ± 20 minutes.
What is the average and the “tolerance” of the service time? Assume that the
travel time and the service times are normal and independent.
3.5.2 Mini-Projects
Mini-Project 3.1 Create a customer survey instrument to find out from the group of
IE (ME/EE/CE or any other) majors how satisfied they are with the services they
receive in an IE (ME/EE/CE or any other) department in both academic and non-
academic areas. Alternatively, create a customer survey instrument to find out from
users of a university library their level of satisfaction with the several services provided
by the library.
Mini-Project 3.2 Based on the survey instrument created in Mini-Project 3.1, con-
struct a HOQ , and choose the design features that you will use to enhance the
services provided by the IE (ME/EE/CE or any other) department or the library.
Assume some reasonable data for responses.
Mini-Project 3.3 This project is meant to verify the theoretical principles of the root-
sum-of-squares formula used for obtaining assembly tolerance from parts tolerance.
The theoretical results in the simplest form can be expressed as follows:
If X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂, σ₂²),
then (X₁ + X₂) ~ N(μ₁ + μ₂, σ₁² + σ₂²),
and (X₁ − X₂) ~ N(μ₁ − μ₂, σ₁² + σ₂²).
In other words, the sum of the two random variables has a mean equal to the sum of the two means; the standard deviation of the sum equals √(σ₁² + σ₂²), not (σ₁ + σ₂). Similarly, the difference of the two random variables has a mean equal to the difference of the means; the standard deviation of the difference equals √(σ₁² + σ₂²), not √(σ₁² − σ₂²), nor (σ₁ − σ₂).
Using Minitab, generate 50 observations from two normal distributions, one with
mean 30 and standard deviation 3, and the other with mean 40 and standard devia-
tion 4, and store the data in two columns. Add the numbers in each row to obtain a
column of 50 observations from the random variable, which is the sum of the above
two random variables. Draw the histogram of all three sets of data, and calculate their
means and standard deviations. What is the relationship between the mean of the two
original distributions and that of the distribution of the “sum”? What is the relation-
ship between the standard deviation of the two original distributions and the standard
deviation of the “sum”?
Repeat the above experiment, except now calculate the difference of the two origi-
nal observations to obtain the column of differences. Draw its histogram, and calcu-
late the average and standard deviation of the “difference” and compare them with
those of the original distributions.
You could also choose any mean and any standard deviation for the original two
distributions instead of what is suggested.
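If Minitab is not available, the same experiment can be run with a short script. The sketch below uses Python and NumPy with the means and standard deviations suggested above; the random seed is arbitrary, and histograms can be drawn with any plotting package.

import numpy as np

rng = np.random.default_rng(1)

x1 = rng.normal(loc=30, scale=3, size=50)   # first distribution
x2 = rng.normal(loc=40, scale=4, size=50)   # second distribution

for name, data in [("X1", x1), ("X2", x2), ("X1 + X2", x1 + x2), ("X1 - X2", x1 - x2)]:
    print(f"{name:8s} mean = {data.mean():7.2f}   std dev = {data.std(ddof=1):5.2f}")

# Theory: the mean of the sum should be near 70, the mean of the difference near -10,
# and both standard deviations near sqrt(3**2 + 4**2) = 5.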
References
Akao, Y., ed. 1990. Quality Function Deployment: Integrating Customer Requirements into Product
Design. Cambridge, MA: Productivity Press.
Box, G. E. P., W. G. Hunter, and J. S. Hunter. 1978. Statistics for Experimenters. New York:
John Wiley.
Chrysler LLC, Ford Motor Company, and General Motors Corporation. 2008a. Advanced
Product Quality Planning (APQP) and Control Plan—Reference Manual. 2nd ed. Southfield,
MI: A.I.A.G.
Chrysler LLC, Ford Motor Company, and General Motors Corporation. 2008b. Potential
Failure Mode and Effects Analysis (FMEA)—Reference Manual. 4th ed. Southfield, MI:
A.I.A.G.
Churchill, G. A., Jr. 1999. Marketing Research: Methodological Foundations. 7th ed. Chicago, IL:
Dryden Press.
Clausing, D. 1994. Total Quality Development. New York: ASME Press.
Cohen, L. 1995. Quality Function Deployment. Reading, MA: Addison-Wesley Longman.
DeVor, R. E., T. H. Chang, and J. W. Sutherland. 1992. Statistical Quality Design and Control—
Contemporary Concepts and Methods. New York: Macmillan.
Gryna, F. M. 1999. “Market Research and Marketing.” Section 18 in Juran’s Quality Handbook.
5th ed. Edited by J. M. Juran and A. B. Godfrey. New York: McGraw Hill.
Hayes, B. E. 1998. Measuring Customer Satisfaction. 2nd ed. Milwaukee, WI: ASQ Quality
Press.
Hicks, C. R., and K. V. Turner. 1999. Fundamental Concepts in the Design of Experiments. 5th ed.
New York: Oxford University Press.
Hines, W. W., and D. C. Montgomery. 1990. Probability and Statistics in Engineering and
Management Science. 3rd ed. New York: John Wiley.
Hogg, R. V., and J. Ledolter. 1987. Engineering Statistics. New York: Macmillan.
Ireson, W. G., and C. F. Coombs, Jr., eds. 1988. Handbook of Reliability Engineering and
Management. New York: McGraw-Hill.
Krishnamoorthi, K. S. 1992. Reliability Methods for Engineers. Milwaukee, WI: ASQ Quality
Press.
Montgomery, D. C. 2013. Introduction to Statistical Quality Control. 7th ed. New York: John
Wiley.
O’Connor, P. 1985. Practical Reliability Engineering. 2nd ed. New York: John Wiley.
Shingley, J. E., and C. R. Mischke. 1986. Standard Handbook of Machine Design. New York:
McGraw-Hill.
Skalak, S. C. 2002. Implementing Concurrent Engineering in Small Companies. New York: Marcel
Dekker.
Syan, C. S., and U. Menon, eds. 1994. Concurrent Engineering—Concepts, Implementation, and
Practice. London: Chapman & Hall.
Syan, C. S., and K. G. Swift. 1994. “Design for Manufacture.” In Concurrent Engineering—
Concepts, Implementation, and Practice. Edited by C. S. Syan and U. Menon. London:
Chapman & Hall.
Tobias, P. A., and D. C. Trindade. 2012. Applied Reliability. 3rd ed. Boca Raton, FL: CRC
Press.
Womack, J. P., D.T. Jones, and D. Roos. 2007. The Machine that Changed the World. New York:
Free Press.
Ziemke, M. C., and M. S. Spann. 1993. “Concurrent Engineering’s Roots in the World War II
Era.” In Concurrent Engineering—Contemporary Issues and Modern Design Tools. Edited by
H. R. Parsaei and W. G. Sullivan. London: Chapman & Hall.
4
Quality in Production—Process Control I
In the last chapter, we discussed the quality methods employed in the product plan-
ning, product design, and process design stages of the product creation cycle. In this
chapter, we discuss the quality methods used during the production stage of the cycle.
The major tools used in the production stage are the control charts, of which several
different types exist to handle the different types of product characteristics and pro-
cess variables encountered in real-world processes. The three major types of charts
needed to monitor the three major types of variables are discussed here. The measures
for evaluating process capability, the ability of a process to produce products within
customer’s specifications, are covered in this chapter, as are the methods for assessing
the capability of measuring instruments.
The topics in process control that are considered essential for the day-to-day pro-
cess control work in a production or service environment are included in this chapter.
Some advanced topics in process control, such as the charts needed for special situ-
ations, derivation of the formulas for limits, evaluating the performance of control
charts and 6-sigma as a measure of process capability, are covered in the next chapter.
4.1 Process Control
Process control is the most important part of the quality effort during manufacture. The
objective is to proactively monitor and control processes so that they produce products
with desirable characteristics consistently, thus preventing production of defective units.
First, a definition of a “process”: any productive work can be viewed as a process
consisting of machinery, manpower, methods, and measuring schemes, as shown in
Figure 4.1. The environment sometimes plays a significant part in the production
activity and can influence the quality of the output; therefore, it should also be con-
sidered while studying the process.
A process receives inputs—mainly material, parts, or subassemblies in a produc-
tion process (information, documentation, and requests from people could be consid-
ered analogous inputs in a service system)—and then processes them in a sequence
of operations using machinery, tools and manpower, and delivers a “product,” which
could be a product or a service. Each operation in the process has to be performed
under certain selected conditions, such as speed, feed, temperature, pressure, or
qualification or training level of personnel. These are called the “process parameters,”
Figure 4.1 A process: inputs (material) are converted into outputs (product) by machinery, manpower, methods, and measurement, operating within an environment.
Dr. Walter Shewhart, working in the early 1920s at the Bell Telephone Laboratories in Princeton, New Jersey, proposed the procedures that later came to be known as the
control charts. According to him (Shewhart 1931), the variability in a process parameter, or product characteristic, can arise from two sources:
1. From a “stable system of chance causes,” meaning the aggregate of small,
unavoidable variability arising from natural differences in material, manpower,
machinery, instruments, and the environment, that should be expected.
2. From “assignable causes,” meaning the causes arising from a specific occur-
rence, such as a broken tool, pressure surge, or temperature drop that might
occur unexpectedly.
These two sources of variability were later renamed by Dr. Deming as “common
cause” variability and “special cause” variability, respectively. The former is an integral
part of the process and cannot be eliminated at reasonable cost. Therefore, it must be
accepted as part of the process. The latter produces disturbances to the process and
usually increases the variability beyond acceptable levels, and so must be discovered
and eliminated. Dr. Shewhart proposed the control chart as a means of differentiating
between the condition of a process when it is subject only to the natural, chance causes
(common causes), and when it is affected by one or more assignable causes (special
causes).
A control chart typically has a centerline (CL) and two control limits—an upper con-
trol limit (UCL) and a lower control limit (LCL), such as those shown in Figure 4.2.
These limits represent the limits for chance-cause variability and are computed using
data drawn from the process. (We will see later how these limits are calculated so that
they represent the limits for the chance-cause variability.) Samples are taken from the
process at regular intervals, and a measure that reflects the quality of the product, or
process, is computed out of the sample observations. The values of the measure are
plotted on a chart, on which the limit lines have been drawn to a suitable scale. If the
values from all the samples taken during a period of time lie within the limits, the
process is said to be “in control” during that time period. However, if the value from
any of the sample plots lie outside either one of the limits, the process is said to be “not
in control” during that time period. When the process is “not in control,” one or more
assignable causes, or special causes, must have occurred, and action needs to be taken
to discover and eliminate them.
The measure to be computed from sample observations depends on what we desire
to control in the process. Suppose that the average value of a product characteristic is
to be controlled; the average of that characteristic from the sample observations will
be then plotted. If the variability in the characteristic is to be controlled, then a measure of variability, such as the range or the standard deviation, computed from the sample observations will be plotted.

Figure 4.2 A typical control chart, with a centerline (CL), an upper control limit (UCL), and a lower control limit (LCL).
As stated in Chapter 2, the data from sample observations are classified into either of
two categories:
1. Measurement data
2. Attribute data
Measurement data result from measurements taken on a continuous scale, such
as height, weight, volume, and so on, and result in observations such as 142.7 in.,
1.28 kg, and 68 cu. ft. Attribute data come from inspections performed based on attri-
butes such as taste, feel, and color, in which a product is classified into categories such
as good/bad, tight/loose, or flat/not flat. Attribute data can also come from gaging.
Attribute data are usually in counts, proportions, or percentages, such as three defects
per printed circuit, two of eight too small, or 10% too tight. The reader will recognize
that the measurement data are observations made on continuous random variables
and the attribute data are observations made on discrete random variables. It is neces-
sary to recognize the type of data one has to deal with in a given situation in order
to determine the type of control chart to use. The limits for the charts are computed
based on the distribution that the data came from, and the distributions for the two
types of data are different.
The measurement control charts are designed to control measurements, with the
assumption that those measurements follow the normal distribution. When a mea-
surement is to be controlled, both its average and its variability must be controlled. A
measurement will fall outside its specification limits if its average moves to a location
different from the target, its variability increases beyond the specified allowable range,
or both (see Figure 4.3).
Figure 4.3 Process conditions when process mean and/or standard deviation change.
For normal populations, the sample average and sample measures of variability, such as the sample standard deviation and the sample range, are statistically independent—a result proved in books on mathematical statistics. This means that controlling the
sample average will only help in controlling process average and controlling the mea-
sure of sample variability will only help in controlling the process variability. Hence,
two control charts are needed: one that uses the sample average to control the process
average, and another that uses a sample measure of variability to control the process
variability.
The most popular combination used to control a measurement is that of X -chart and
R-chart, the former to control the process mean and the latter to control the process
variability. Both are made from the same samples and are generally used together.
Samples, usually of size four or five, are taken from the process at regular intervals,
and the sample average X and the sample range R are computed for each sample.
These values are then plotted on graphs where the limits for each chart have been
drawn. The limits are calculated using the following formulas:
UCL(X̄) = X̿ + A₂R̄    CL(X̄) = X̿    LCL(X̄) = X̿ − A₂R̄
UCL(R) = D₄R̄    CL(R) = R̄    LCL(R) = D₃R̄

In the above formulas, X̿ and R̄ are the averages of at least 25 sample averages and sample ranges, respectively, which have been obtained from the process being controlled. A₂, D₃, and D₄ are factors chosen from standard tables such as that in Table 4.2,
based on sample size. (Table 4.2 is an abridged version of Table A.4 in the Appendix.)
Table 4.2 Factors for Calculating Limits for Variable Control Charts
n A A2 A3 B3 B4 c4 D1 D2 D3 D4 d2 d3
2 2.121 1.880 2.659 0 3.267 0.798 0 3.686 0 3.267 1.128 0.853
3 1.732 1.023 1.954 0 2.568 0.886 0 4.358 0 2.574 1.693 0.888
4 1.500 0.729 1.628 0 2.266 0.921 0 4.698 0 2.282 2.059 0.880
5 1.342 0.577 1.427 0 2.089 0.940 0 4.918 0 2.114 2.326 0.864
6 1.225 0.483 1.287 0.030 1.970 0.952 0 5.078 0 2.004 2.534 0.848
7 1.134 0.419 1.182 0.118 1.882 0.959 0.205 5.204 0.076 1.924 2.704 0.833
8 1.061 0.373 1.099 0.185 1.815 0.965 0.387 5.306 0.136 1.864 2.847 0.820
9 1.000 0.337 1.032 0.239 1.761 0.969 0.546 5.393 0.184 1.816 2.970 0.808
10 0.949 0.308 0.975 0.284 1.716 0.973 0.687 5.469 0.223 1.777 3.078 0.797
The above formulas give “3-sigma” limits for the statistics X and R, in the sense
that the limits are located at a distance three standard deviations of the respective sta-
tistics from their averages. A detailed discussion on how these formulas are derived to
provide the limits of natural variability for the statistic plotted is given in Chapter 5.
The example below shows how the formulas are used to calculate the limits and how
the charts are used in process control.
Example 4.1

The data shown in Figure 4.4 were collected from a bag-filling operation; 24 samples of five bags each were taken at approximately one-hour intervals, and the quantity filled in each bag was recorded. Determine whether the filling process is in control.

Solution
We use the X - and R-charts to control the average and variability of the filling
operation. The data are shown in Figure 4.4 on a standard control chart form. In
this example, 24 samples of five bags each were collected at approximately one-hour
intervals. The X and R values were calculated for each sample and then plotted
using a suitable scale. The control limits were calculated as follows:
From calculations made from the data, X̿ = 21.37 and R̄ = 3.02. From Table 4.2, for n = 5, A₂ = 0.577, D₄ = 2.114, and D₃ = 0. Therefore,

UCL(X̄) = 21.37 + 0.577 × 3.02 = 23.11, LCL(X̄) = 21.37 − 0.577 × 3.02 = 19.63
UCL(R) = 2.114 × 3.02 = 6.38, LCL(R) = 0 × 3.02 = 0

The limits calculated as above are drawn on the control chart form in Figure 4.4.
Because one of the plotted R values is outside the limit in the R-chart, the process
must be declared “not in control.”
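The limit calculations in this example are easy to script. The following Python sketch uses the sample averages and ranges recorded in Figure 4.4 together with the constants for n = 5 from Table 4.2, and flags any sample whose range falls outside the R-chart limits.

# Constants from Table 4.2 for samples of size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

# Sample averages and ranges for the 24 samples of Figure 4.4
xbars = [22.9, 22.0, 21.4, 22.0, 21.5, 21.9, 20.8, 20.0, 21.5, 21.6, 20.2, 20.5,
         20.6, 21.7, 21.1, 21.8, 21.8, 20.8, 21.9, 21.2, 20.7, 21.3, 22.6, 21.3]
ranges = [2.0, 2.5, 3.0, 2.0, 3.0, 3.5, 3.5, 2.5, 2.5, 4.5, 1.5, 2.0, 1.5, 4.0,
          3.0, 3.0, 4.5, 2.5, 4.0, 2.5, 3.5, 7.0, 1.5, 3.0]

xbarbar = sum(xbars) / len(xbars)    # grand average, ~21.37
rbar = sum(ranges) / len(ranges)     # average range, ~3.02

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # ~23.11 and ~19.63
ucl_r, lcl_r = D4 * rbar, D3 * rbar                       # ~6.38 and 0

print(f"X-bar chart: CL = {xbarbar:.2f}, UCL = {ucl_x:.2f}, LCL = {lcl_x:.2f}")
print(f"R chart:     CL = {rbar:.2f}, UCL = {ucl_r:.2f}, LCL = {lcl_r:.2f}")

for i, r in enumerate(ranges, start=1):
    if r > ucl_r or r < lcl_r:
        print(f"Sample {i}: R = {r} is outside the R-chart limits")   # sample 22, R = 7.0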
If an R value falls outside its limits, it means that the process variability has not remained consistent. If an X̄ value falls outside its limits, it indicates that the process mean has not remained consistent.
If we relate what happens in the process to what the control chart shows, it is often
possible to identify the assignable cause (special cause) that generates an out-of-limit
data point on the chart. In situations where process interruptions, such as a broken
tool or falling air pressure, are likely to occur, the time of occurrence of the events can
be related to the time of indication of signals on the control chart. For this reason, it
is advisable to maintain a log of events that happen in the process, such as change of
operator, change of tool, change of raw material, and so on. This will facilitate the dis-
covery of the assignable causes when they are indicated on the charts. Control charts
can tell a lot more about process changes if we know how to read them correctly.
Figure 4.4 Data and control charts (X̄-chart and R-chart) for Example 4.1, plotted on a standard control chart form. The sample averages and ranges for the 24 samples are:

Sample average X̄: 22.9, 22.0, 21.4, 22.0, 21.5, 21.9, 20.8, 20.0, 21.5, 21.6, 20.2, 20.5, 20.6, 21.7, 21.1, 21.8, 21.8, 20.8, 21.9, 21.2, 20.7, 21.3, 22.6, 21.3
Sample range R: 2.0, 2.5, 3.0, 2.0, 3.0, 3.5, 3.5, 2.5, 2.5, 4.5, 1.5, 2.0, 1.5, 4.0, 3.0, 3.0, 4.5, 2.5, 4.0, 2.5, 3.5, 7.0, 1.5, 3.0

The limits drawn on the charts are UCL(X̄) = 23.11, X̿ = 21.37, LCL(X̄) = 19.63; UCL(R) = 6.38, R̄ = 3.02, LCL(R) = 0.
If all X and R values are within limits on both charts, the limits computed from
such an in-control process can be used for future control. If not, those X and/or R
values that are outside the limits can be removed from the data after making sure that
the causes responsible for those values have been eliminated. New limits can then
be calculated from the remainder of the data. Such recalculation of limits from the
“remaining” data saves time and money, and it is an accepted procedure for calculating and setting limits for future control.
When making these recalculations with the remaining data, it is advisable to fix the R-chart first, because the calculation of limits for the X-chart requires a value of R; a “good” R from an in-control R-chart can then be used to calculate the limits for the X-chart. If, instead, the X-chart is fixed first and an R value is later found outside the limits of the R-chart, the X-chart will have to be fixed again after the R-chart is fixed. Fixing the R-chart first avoids this going back and forth.
In the above example, the one R value outside the upper limit is removed, assuming
that the reason for the value being outside the limit was found and rectified. (In this
case, the level controller of the silo feeding the chemical was faulty. It was replaced.)
We then obtain a new R of 2.85 from the remaining 23 observations of R. This results in new limits for both charts:

UCL(X) = 21.37 + 0.577(2.85) = 23.01
LCL(X) = 21.37 − 0.577(2.85) = 19.73
UCL(R) = 2.114(2.85) = 6.02
LCL(R) = 0

These revised limits are drawn in Figure 4.5.
Figure 4.5 X - and R-charts with revised limits calculated using “remaining” data for Example 4.1.
The removal of data points in this manner assumes that the causes responsible for the out-of-limit values have been found and eliminated, and that those causes will not reoccur. The objective of using
a control chart is to bring a process to the in-control condition. Merely taking out data
points to make the chart look to be in-control does not serve any purpose. Only when
the process has been improved following the signal from the chart, searching for the
underlying assignable cause and rectifying it, should the data that signaled out-of-control
condition be removed. The guiding principle here is never to remove a data point from the
chart unless the cause that put the process outside the limits is discovered and eliminated.
If an R value is outside its limits and its causes are eliminated, should the cor-
responding X value also be eliminated from the data along with the R value? The
answer is yes; it makes sense to eliminate the entire sample from the data because it
came from the process when it was not in-control.
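The revision step can be sketched along the same lines; this is only an illustration under the assumption that the flagged subgroups have already been traced to assignable causes that were eliminated (the function name and argument layout are ours). For Example 4.1, dropping subgroup 22 gives R = 2.85 and the revised limits quoted above.

# Revised 3-sigma limits after removing subgroups traced to assignable causes.
# Both the X-bar and the R value of a removed subgroup are discarded.
def revise_limits(xbars, ranges, removed, A2=0.577, D4=2.114, D3=0.0):
    """Drop the flagged subgroups (0-based indices) and recompute the limits."""
    kept_x = [x for i, x in enumerate(xbars) if i not in removed]
    kept_r = [r for i, r in enumerate(ranges) if i not in removed]
    xbar_bar = sum(kept_x) / len(kept_x)
    r_bar = sum(kept_r) / len(kept_r)
    return ((xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar),
            (D3 * r_bar, r_bar, D4 * r_bar))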
How many of the original samples can be thrown out while still leaving enough samples for calculating the limits? Here, the issue is partly practical and partly
statistical. Dr. Shewhart, the author of the control charts, recommended that a process
be observed over a sufficient period of time, for at least 25 samples of size four, before
concluding that a process is in-control (Duncan 1974). His advice, which most practi-
cal statisticians would agree with, was that conclusions should not be drawn about a
process being in-control based on observations taken over a short period of time. That
was the genesis for the practice of using at least 25 samples in calculating the control
limits. If 25 samples are initially taken, the process is not-in-control, and some of the
sample values have to be thrown out, then obviously there will not be enough samples
to claim that the limits are calculated from a process that is in-control. This is where
practical considerations become necessary.
If sampling is expensive and testing takes time, we cannot afford to waste data.
Waiting to collect a new set of data may also delay control of the process. Therefore, in
practice, people use as few as 15 samples to calculate the limits if that is all that is left
after discarding the samples that produced plots outside the limits. Those limits, how-
ever, should be updated when additional samples become available. Another solution
is to take about 30 samples of size four or five to start with, with the expectation that
after discarding a few samples, enough will remain for recomputing the limits.
A statistical reason also exists for using 25 or more samples for calculating limits.
The limits for the X -chart, as well as for the R-chart, are calculated using R/d2 as
an estimate for process standard deviation σ (see Chapter 5 for the derivation of the
limits). The d2 factor, which makes R/d2 an unbiased estimator of σ, is not accurate if the R value is obtained from a small number of samples. For example, for samples of size four or five, the tabulated d2 values are not good if R is calculated from fewer than 15 samples. However, for larger sample sizes (n > 8), the tabled values of d2 are good even if R is computed from fewer than 15 samples (from Table D3 of Duncan 1974, which gives d2*, the correction factor to estimate σ from R for varying sample sizes and numbers of
samples). This means that if the sample size for the chart is large, we can do with fewer
samples, and if the sample size is small, then we need a larger number of samples
from which to calculate the limits. These facts should be kept in mind while decid-
ing if there is an adequate number of samples for calculating the limits. If using the
conventional sample size of four or five, it might be a good idea not to use less than 15
samples to calculate the limits. Even then, when more samples (>25) become available,
the limits must be recalculated.
The reader might have noticed that in the above example only 24 samples were available, and we proceeded to use them to calculate limits even though we had stipulated that a minimum of 25 samples is needed. Such compromises may often be necessary in practical situations, and they can be justified in view of the above discussion on the number of samples needed to calculate limits.
Example 4.2
This is another example of the use of X- and R-charts, this time in the healthcare area. The data in the table below represent the daily fasting blood sugar (in mg/dL), taken before breakfast, of a diabetic patient over weeks 1 to 21. Diabetics are advised to take this reading of the amount of glucose in their blood as a way of monitoring the condition of their disease. The doctor who advises this patient has recommended that the blood sugar be controlled below 120. We need to assess whether the blood sugar of this patient is consistent (in control) over the period when the data were gathered and whether progress is being made toward achieving the target. The patient collected data each weekday and did not gather data during weekends.
WEEK 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
114 150 174 175 167 157 153 151 152 154 165 156 157 164 171 160 151 146 153 155 146
129 153 170 177 * 127 141 153 149 156 169 147 143 144 143 156 139 145 151 * 158
110 154 167 169 157 143 151 159 145 165 160 162 147 159 155 152 146 151 157 158 162
143 148 177 169 166 148 141 152 136 164 165 153 145 160 146 151 152 153 154 153 166
144 154 142 162 157 143 154 153 155 150 149 152 166 158 147 149 146 154 152 * 163
Solution
X- and R-charts are used to verify whether the “process” is in control, with sample size n = 5 since five readings come from each week. The charts obtained from Minitab are shown in Figure 4.6. The process is not in control. The average swings up and down, although it seems to be steadying after the sixth week. The variability in the readings, as seen in the R-chart, seems to decrease after the 15th week, which may be a good sign. The charts also show that the patient is not controlling the blood sugar level and is nowhere near reaching the goal of containing the level below 120. He may have to go back to the doctor to seek advice on adjusting the medication, in addition to increasing exercise and reducing sugar intake. The patient, however, will benefit if he continues to use the control chart to record the data and to receive guidance in adjusting the amount of exercise, diet, and medication to reach the goal of gaining control over the blood sugar and the diabetes.
Incidentally, the above table is missing some data points, as the patient had not recorded data on certain days. Minitab has a provision for making control charts when the sample size does not remain constant, and it was used in making these charts. The procedure for calculating limits when the sample size changes is explained in a later section.
Figure 4.6 X - and R-chart for fasting blood glucose of a diabetic patient (n = 5).
4.3.2.1 The Many Uses of the Charts The X - and R-chart combination is the most pop-
ular SPC method employed in industrial applications. Their simplicity and effective-
ness in discovering significant assignable causes have made them very popular. The
fact that many quality characteristics and process variables are measurements that are
normally distributed has also contributed to their popularity. The X and R-charts are
used to accomplish several objectives:
1. To maintain a process at its current level
The main use of the X - and R-charts is for the purpose of maintaining
the consistency of product variables such as strength of steel, viscosity of a
paint or amount of impurity in a chemical. These variables cannot be easily
moved or tweaked into any given average level. They have to be first con-
trolled at their “current” levels using the X value derived from data as the
centerline of the control chart, and then moved progressively in the desired
direction through additional investigation, experimentation, and process
improvement. During this time, the process variability is controlled at the
“current” level using the R-chart. The charts with limits calculated for this
purpose, using the formulas X ± A2 R, D4 R and D3 R are said to be charts
for maintaining “current control.” Notice that the process is controlled
using the data obtained from it; no outside standards or specifications are
imposed on the process. These processes will be, after they are brought in
control, compared with specifications in what is called the process capabil-
ity analysis discussed later in this chapter.
2. To control a process at a given target or nominal value
Many processes, such as cutting sheet metal to a given width or turning a shaft to a given diameter, can easily be controlled around a given target with a simple adjustment of tools or fixtures. It would be desirable if such a process could be made to conform to a given standard even while it is being controlled for uniformity. If a process is to be controlled at a given target T, then T replaces the X in the formulas for the X-chart. The spread for the control limits is still obtained
as A2 R from the data. Similarly, there may be situations in which the
process variability, the σ of the process, is specified not to exceed a certain
value σ 0, and the process has to be controlled below this specified variabil-
ity. These situations are referred to as “standards-given” cases, and details
of how the limits for them are calculated are discussed in Chapter 5.
3. As a troubleshooting tool
Many quality problems in industry can be traced to excessive variability
in process variables and product characteristics. Implementation of X and
R-charts would be a good first step in solving these problems, because a
good understanding of the process behavior will be gained from a close
observation of the process during data collection. The charts will first sig-
nal the existence of the causes; then they will also help in discovering the
sources of those causes if we relate happenings in the process to signals
on the chart. Once the sources of the causes are known, the causes can be
eliminated by taking proper remedial action.
4. As an acceptance tool
Control charts are often used to demonstrate to a customer that the
process has been in a stable condition producing good products. In such
situations, the customer can reduce, or eliminate, their inspection at
receiving, resulting in considerable savings in labor, time, and space. In
fact, customers who operate in a just-in-time environment often demand
proof of process control and the capability to meet specifications so that
they can eliminate the need for inspecting the product they receive. Thus,
the control chart can be used for accepting products without incoming
inspection by the customer.
4.3.2.2 Selecting the Variable for Charting Although control charts are very useful tools,
they are expensive to maintain, especially if measurements have to be made manually.
So, their use must be limited to where they are absolutely needed. Even if measure-
ments are made using automatic instruments and charts are made using computers, too
many charts may drown out vital process information in the important ones. Therefore,
they must be used on product characteristics or process variables that are critical to
product quality and customer satisfaction. In certain situations, the criticality of a vari-
able to the process may be obvious. In others, a process failure mode and effects analysis
(see Chapter 3) will reveal the critical process variables that will have to be controlled.
In some situations, a preliminary study using an attribute-type chart may have to be
made first, before selecting the important characteristics to be controlled using X and
R-charts. An attribute chart—which is explained later in this chapter—is cheaper to
maintain and will reveal characteristics critical to the product quality.
4.3.2.3 Preparing Instruments Before starting a control chart on any process, it is first
necessary to decide the instrument to be used for measuring the variable and the level
of accuracy at which the readings are to be recorded. It is also necessary to verify
if the chosen instrument gives true readings—in other words, whether it has been
calibrated—and whether the instrument has adequate capability. (The calibration and
capability of instruments are explained later in this chapter.) More often than not, lack
of a suitable measuring instrument can contribute to defective output. A preliminary
check of the instruments will provide an opportunity to repair or replace an inad-
equate instrument as a first step toward controlling the process and making a good
quality product.
In one example, one of the authors was asked to help a company making paper
cubes, an advertising specialty, to reduce the variability in the height of the paper
cubes. The company president felt that the company was losing money because the
operators made cubes taller than that called for in the specs. On his first visit to the
shop, the author picked one cube from a pile and asked several people in the shop,
including the plant superintendent, the shop foreman, the operator, and the inspector,
to measure its height. Each person gave a different number for the height, because
each person used a different tool and/or squeezed the cube differently while taking the
measurement. One used a foot-rule, another, a caliper; yet another, a tape measure,
and so on. No standard measuring scheme had been prescribed for determining the
height of a paper cube. A new measuring scheme had to be put in place that would give
reasonably the “same” reading for the same cube when measured repeatedly, either by
the same person or by different persons. Until this was done, no further work could be
undertaken to quantify the variability in the cubes—much less, finding the sources of
variability and eliminating them.
If there is no reasonable consistency in the measurements given by the instrument,
it is impossible to reach any reasonable consistency in the product characteristic being
measured. Making sure that a good measuring instrument is available is an important
first step in the control of any product characteristic or process variable.
4.3.2.4 Preparing Check Sheets Proper check sheets or standard forms must be devel-
oped for recording relevant process information and for recording and analyzing data.
Standard forms, such as that shown in Figure 4.4, that call for relevant data and
provide space for recording and analyzing these data, greatly facilitate the collection
of information. It is often advisable to gather more rather than less information. Any
information relating to the process, such as ambient temperature, humidity, opera-
tor name, type and ID of the instrument used, or information on any factor that
may affect the quality of the process’ output, must be recorded. As a matter of fact,
the preparation of check sheets forces an analyst to plan ahead the details of data
collection—such as what exactly is to be measured, which instrument is to be used,
and to what level of accuracy.
4.3.2.5 False Alarm in the X-Chart When an X value falls outside a control limit, we
normally understand that an assignable cause has occurred and changed the process
mean to a level different from the desired level. Just as with any statistical procedure,
however, the control chart is also subject to Type I and Type II errors. The Type I
error, which occurs when the control chart declares a process to be not-in-control
when, in fact, it is in-control, is called the “false alarm.” This arises from the fact that
some X values in the distribution of X fall outside the control limits even when the
process is in-control. The probability of this happening for a 3-sigma chart is 0.0027.
That is, even when a process is in-control, about 3 in 1000 samples will fall outside the
limits, indicating the process is not in control.
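The 0.0027 figure follows directly from the normal distribution, as the short calculation below verifies; it uses SciPy's normal CDF only for convenience, and any normal-probability table gives the same result.

# False-alarm probability of a 3-sigma chart when the process is in control.
from scipy.stats import norm

p_false = 2 * (1 - norm.cdf(3.0))  # area beyond +3 sigma plus area below -3 sigma
print(round(p_false, 4))           # 0.0027, i.e., about 3 samples in 1000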
People who are experienced in using X -charts in practice, including Dr. Deming
(Deming 1986), advise that the false alarm should not be a concern while using the
chart on a process that has not been brought in control previously. According to them,
if a process has not been already controlled using a procedure such as the control chart,
more than likely it is not in-control. So, concern over a false alarm should arise only
when monitoring a process that has already been brought in-control.
4.3.2.6 Determining Sample Size The typically recommended sample size for X and
R-charts is four or five. Dr. Shewhart, when he first proposed the X -chart, recom-
mended the use of as small a sample as possible, because averaging over large samples
would hide an assignable cause that may occur during the time the sample is collected.
However, samples should be large enough to take advantage of the effect of the cen-
tral limit theorem, which makes the sample averages follow the normal distribution
even if the population being controlled is not normal. Dr. Shewhart (1931) was also
influenced by the fact that the sample size of four provided an advantage in calculating the “root mean square,” √[Σ(Xi − X)²/n], which he used as the sample measure of variability. The ease in finding the square root of four was an advantage in those days
when electronic calculators were not available. He therefore recommended a sample
size of four.
Later users saw some advantage in using a sample size of five, because it provided
some advantage in calculating the average. (Obviously, they did not have electronic
calculators, either.) Therefore, a sample size of four or five became the norm for X
and R-charts. In some situations, however, the choice of sample size may be dictated
by practical considerations, and the analyst is forced to choose other sizes. In such
situations, sample sizes anywhere from 2 to 10 can be used. The minimum size of 2
is needed to calculate R, and the maximum of 10 is dictated by the fact that larger
samples make the R value less reliable. That is, the variability in the sample range R
becomes large when the sample size is larger than 10.
Theoretical studies have been made by researchers (see Duncan 1956; and Chapter
10 in Montgomery 2013) to determine the optimal sample size, the spread between
the CL and the limits, and the frequency of sampling. Such researchers consider the
cost of sampling, the cost of not discovering assignable causes soon enough after they
have occurred, and the cost of false alarms, and then choose the parameters of the
chart to minimize the average cost of controlling a process. The analysis has to be
done for each individual process, taking into account the cost structure for that par-
ticular process. Such determination of sample size and other parameters of the control
chart fall under the topic of the economic design of control charts. This is an advanced
topic, and readers interested in it may refer to Montgomery (2013).
4.3.2.7 Why 3-Sigma Limits? Again, it was Dr. Shewhart who recommended use of
the 3-sigma rule for calculating control limits when he originally proposed the control
charts. He was particular that the chances for process interruption due to false alarms
from the control procedure should be small. He chose the criterion that the chance
for looking for an assignable cause when one does not exist should not be more than
3 in 1000 (Shewhart 1939), which translates to the 3-sigma rule on the assumption
that the distribution of X is normal. According to this rule, the control limits will be
placed at three standard deviations of the statistic being plotted, from the centerline.
The limits could be placed at any distance—say, 2, 2.5, 4, or 4.5 sigma distance
from the centerline. If, for example, the limits are at 2-sigma distance, the control
chart will detect changes in the process more quickly, and detect even smaller pro-
cess changes more effectively, compared to the 3-sigma limits. However, the chart
will also have a larger probability of false alarm. If the limits are placed, for instance,
at 4-sigma distance, then the false alarm probabilities will be smaller but the con-
trol chart will take longer to detect assignable causes; even sizable changes in the
process may remain undetected for longer periods of time. False alarms cost money
because they may result in an unnecessary stoppage of the process and incur expenses
in searching for trouble that does not exist. Therefore, Dr. Shewhart argued that only
those assignable causes that can be found “without costing more than it is worth to
find” (Shewhart 1939, 30) need to be discovered. In his experience, the 3-sigma limits
provided that “economic borderline” between detecting those causes worth detecting
and those not worth detecting.
We must realize, however, that the economic borderline depends on the cost struc-
ture of a given process. In a situation where the false alarm is not expensive, it may
be possible to locate the control limits closer to the centerline, with increased power
for detecting changes. On the other hand, if the false alarm is expensive, the limits
should be located farther than the 3-sigma limits, thus saving the expenses from false
alarms. Such modifications to the 3-sigma rule are part of the recommendations by
researchers who work on the economic design of control charts.
In this connection, the recommendations by Dr. Ott, a pioneer in the use and prop-
agation of statistical methods in industry during the early 1940s, is worth mentioning.
Dr. Ott (1975) recommended that if we know a process is producing defectives and
are in a troubleshooting mode, we may as well use a narrow set of limits—say, 2-sigma
limits—and detect the causes and repair them. The premise is that, when there are
assignable causes in the process producing defectives, the chances are high that a sig-
nal in the chart indicates an assignable cause rather than being a false alarm.
4.3.2.8 Determining the Frequency of Sampling The production rate, the cost of the sample, and the cost of taking the measurements must be considered when deciding how often samples are taken.
4.3.2.9 Rational Subgrouping In control chart parlance, the term “subgroup” and the
term “sample” mean the same thing. Some prefer the former to the latter, however,
because the former clearly implies there is more than one unit in it, which makes com-
munications clearer on the shop floor. We use both terms here interchangeably.
The term “subgrouping” refers to the method of selecting subgroups, or samples,
from the process in order to obtain data for charting. We want to decide the basis of
subgrouping in a rational manner. As an example, if a process is likely to deteriorate over time, then taking samples based on time, at regular intervals, would make
sense. If the process changes over time, the samples would then lead to the discovery
of such changes. On the other hand, for a process in which the individual skill of the
operators makes a difference in the quality of the output, subgrouping should be done
based on the operator. That is, if the first subgroup is taken from Operator 1, the next
subgroup will be taken from Operator 2, and so on. Then, if a change occurs to the
process because of a difference in the performance of one operator from others, the
chart will show the change, and that change will be traced to the operator causing it.
Such subgrouping that will facilitate the discovery of a cause when it is indicated on
the charts is called “rational subgrouping.”
Rational subgrouping is an important idea that should be understood and employed
correctly in order to get the most out of the control charts. A respected textbook in sta-
tistical quality control by Grant and Leavenworth (1996) has a full chapter on rational
subgrouping. We would even add that if use of control charts has not been success-
ful in controlling a process, or, in other words, if assignable causes cannot be dis-
covered when the process is still producing defectives, then we should conclude that
proper subgrouping is not being used. A change in subgrouping based on some ratio-
nal hypothesis would help in discovering the causes. In some situations, an entire
project may consist of finding the right way of subgrouping by trying one method of
subgrouping after another to discover the assignable causes affecting the process. The
following two case studies illustrate the concept of rational subgrouping.
was started, plotting the average time per call on a daily basis, on the hypothesis
that the calls probably required more time to answer on some days of the week
than on others. The process was in-control; the chart did not show any difference
from day to day.
Daily averages of individual operators were then plotted, with the basis of sub-
grouping now being the operator. The process was in-control, showing no sig-
nificant difference among operators. The basis of subgrouping was next changed
to product type, such as calls on doors being separated from calls on openers.
The process was in-control; no difference was found among the calls on prod-
uct types. Then the basis was changed to type of calls, such as emergency calls
for help with installation being separated from calls for ordering parts. It was
discovered that the calls for installation help were the assignable causes requir-
ing significantly longer answering times than for the other types of services. A
separate group of operators was organized who were given additional training
on installation questions. All calls relating to installation were directed to this
group. The average time for installation calls decreased, as did the average for
all calls.
4.3.2.10 When the Sample Size Changes for X - and R-Charts This refers to the situation
in which samples of the same size are not available consistently for charting purposes.
Such a problem arises less often in the case of measurement control charts than in
the case of the attribute charts (discussed in the next section). This is because the
sample size used for the measurement charts is small, on the order of four or five, and
obtaining such samples consistently from a large production process is not generally
a problem. In some situations, however, we suddenly find that there are not enough
units in a sample, either because some of the units were lost or because some readings
became corrupted and were removed from the sample. Such situations can be handled
by the following approach.
Consider Example 4.1, in which the first set of samples resulted in control limits
as shown in Figure 4.5. Now, suppose another set of 24 samples is taken from this
process, and suppose that some of those samples have sizes different from five, as
shown in Table 4.3. The limits for those samples with a size different from five are
calculated using X and R from the previous data set but with A 2 , D 3, and D4 appro-
priate to the differing sample sizes. The X and R from the samples with differing
sample sizes are compared with the limits calculated for those samples. (If there are
a few samples with different sizes in the set taken for calculating the trial limits, it
is best to discard those samples and replace them with samples of the planned size.)
For example, in the new set of data, if the sample numbers 4, 14, and 23 have
sample sizes of four, three, and three, respectively, as shown in Table 4.3, the limits
for those samples are recalculated, using A2, D 3, and D4 chosen appropriate to the
sample sizes. The provision available in Minitab to compute limits with a given mean
and given standard deviation has been used to calculate the limits and draw the charts
shown in Figure 4.7. Minitab also has provision to draw control charts when some
samples have sizes different from others.
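A sketch of this adjustment in Python is given below. The grand average and average range (21.37 and 3.02) are carried over from the earlier data set as described above, the constants cover only n = 3, 4, and 5, and the dictionary and function names are ours.

# Control limits for a subgroup whose size differs from the planned n = 5.
# Only the table constants A2, D3, D4 change; the grand average and the average
# range are kept from the earlier in-control data set.
CONSTANTS = {  # n: (A2, D3, D4)
    3: (1.023, 0.0, 2.574),
    4: (0.729, 0.0, 2.282),
    5: (0.577, 0.0, 2.114),
}

def limits_for_subgroup(n, xbar_bar=21.37, r_bar=3.02):
    A2, D3, D4 = CONSTANTS[n]
    return {"xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
            "R": (D3 * r_bar, D4 * r_bar)}

for n in (5, 4, 3):
    lim = limits_for_subgroup(n)
    print(n, [round(v, 2) for v in lim["xbar"]], [round(v, 2) for v in lim["R"]])
# 5 [19.63, 23.11] [0.0, 6.38]; 4 [19.17, 23.57] [0.0, 6.89]; 3 [18.28, 24.46] [0.0, 7.77]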
4.3.2.11 Improving the Sensitivity of the X -Chart In the next chapter, while discuss-
ing its operating characteristics, we will see that the X -chart with 3-sigma limits
and a sample size of four or five—referred to as the “conventional chart”—is capable
of detecting only large changes in the process mean. If small changes in the process
mean (<1.5σ distance from center) occur, the X -chart has a very small chance of
detecting such changes. The R-chart also has only a very small chance of discovering
changes in process variability unless the changes make the process standard deviation
at least twice as large as the original.
Sometimes, this insensitivity of X and R-charts in discovering small changes is an
advantage, because those small changes in the process may not have to be detected
anyway. In some situations, however, small changes in the process mean may cause
large damage if the production rate is high or the cost of rejects is high, or both. In
such situations, it is desirable to have better power for the chart to discover small
changes. Several approaches are available.
Table 4.3 Data for X - and R-charts with Varying Sample Sizes
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
22.1 20.3 18.6 20.7 22.2 20.5 21.7 21.4 21.4 22.7 22.4 22.2 22.7 23.4 20.2 21.0 22.6 22.8 19.6 20.1 20.9 19.8 19.5 21.5
19.6 20.1 21.1 * 23.1 22.7 21.7 21.7 21.7 21.0 21.4 19.2 21.6 * 19.3 23.4 20.2 23.0 22.0 23.8 21.8 19.3 22.3 21.2
21.8 21.2 24.4 20.5 20.8 20.9 20.8 21.5 19.2 21.3 19.4 19.3 19.5 22.3 22.9 21.1 19.5 21.7 20.3 21.9 20.7 23.0 * 19.3
20.0 20.8 24.0 18.9 19.4 19.9 23.5 23.1 21.7 22.2 22.2 23.4 20.4 * 21.1 22.6 22.0 20.7 21.2 22.6 22.8 21.7 * 23.4
19.8 21.7 21.0 21.8 22.8 19.6 22.2 21.6 20.4 21.9 22.6 20.0 19.6 21.6 20.8 22.8 21.8 21.0 20.4 20.8 20.7 22.7 22.3 19.7
X 20.4 20.3 22.4 21.4
R 1.0 2.9 1.8 2.8
A2 0.58 0.73 1.02 1.02
D4 2.11 2.28 2.57 2.57
UCL(X ) 23.11 23.6 24.4 24.4
LCL(X ) 19.63 19.2 18.3 18.3
UCL (R) 6.384 6.9 7.8 7.8
Note: Asterisks indicate missing data or empty cell.
Figure 4.7 X- and R-charts for the data in Table 4.3, with control limits computed for the samples having sizes different from five (drawn using Minitab).
4.3.2.12 Increasing the Sample Size One way to increase the power of the X - and
R-charts is to use larger sample sizes. As will be shown in the next chapter, the power
of the X -chart increases with an increase in sample size, as does the power of the
R-chart. An increase in sample size, however, also involves an increase in the cost
of sampling. When the cost of sampling is not very high, the increase in sensitiv-
ity of the charts can be accomplished by increasing the sample size. Of course, we
should remember that if a sample size larger than 10 is chosen, then we cannot use the
R-chart; we should then use the S-chart, as discussed in a later section.
4.3.2.13 Use of Warning Limits Another way to improve the sensitivity of the X -
chart is to use warning limits drawn at 1-sigma and 2-sigma distances on either side,
between the centerline and the (3-sigma) control limits, as shown in Figure 4.8a. Rules
are then available to help discover assignable causes using the warning limits drawn
as above. One such rule: An assignable cause is indicated if two out of three consecu-
tive values fall outside the 2-sigma warning limits on the same side of the centerline.
Another rule: An assignable cause is indicated if four out of five consecutive values fall
outside of the 1-sigma warning limits on the same side. These rules are applied even if
no value falls outside the 3-sigma limits. Examples are shown in Figure 4.8a to illus-
trate how these rules work in disclosing the occurrence of assignable causes.
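A sketch of how the two warning-limit rules can be checked in software is given below; it assumes the centerline and the standard deviation of X (one third of the distance from the centerline to a control limit) are already known, and the function name is ours.

# Warning-limit rules: 2 of 3 consecutive points beyond a 2-sigma limit, or
# 4 of 5 consecutive points beyond a 1-sigma limit, on the same side of the centerline.
def beyond_warning(points, centerline, sigma_xbar, k, m, zone):
    """True if any m consecutive points contain at least k points beyond the
    zone*sigma warning limit on one side of the centerline."""
    for start in range(len(points) - m + 1):
        window = points[start:start + m]
        above = sum(1 for x in window if x > centerline + zone * sigma_xbar)
        below = sum(1 for x in window if x < centerline - zone * sigma_xbar)
        if above >= k or below >= k:
            return True
    return False

# Rule 1: beyond_warning(xbars, cl, sigma_xbar, k=2, m=3, zone=2)
# Rule 2: beyond_warning(xbars, cl, sigma_xbar, k=4, m=5, zone=1)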
4.3.2.14 Use of Runs Yet another method to enhance the sensitivity of the X -chart
is to use runs. A run is a string of consecutive plots with some common properties.
For example, if a sequence of consecutive X values occurs below the centerline, this
sequence will constitute a “run below the CL,” as shown in Figure 4.8b. Similarly,
there could be a “run above the CL.” A “run up” would occur if there is a string of
plots in which each plot is at a higher level than the previous one and a “run down”
would occur if each plot is at a lower level than the previous one. An example of a
run-down is shown in Figure 4.8b. These runs signify that the process is behaving in a
non-normal, non-random way because of an assignable cause. The run above or below the CL would indicate that the process mean has jumped away from the target, and the run up or run down would indicate that a drift, or trend, in the process mean is occurring. To help decide when a change has occurred in the process, the following rules are employed. A run longer than seven above or below the centerline is considered to indicate an assignable cause. Similarly, a run longer than seven, up or down, would indicate that an assignable cause is in action (Duncan 1974; Montgomery 2013).

Figure 4.8 (a) Use of warning limits on an X-chart. (b) Examples of runs in a control chart: a run below the centerline of 7 and a run-down of 6.
The rules to increase the sensitivity of the X -chart using runs and warning limits
can be summarized as follows. An assignable cause is indicated if:
1. Two of three consecutive plots fall outside of a 2-sigma warning limit on the
same side of the centerline.
2. Four of five consecutive plots fall outside of a 1-sigma warning limit on the
same side.
3. More than seven consecutive plots fall above or below the centerline.
4. More than seven consecutive plots are in a run-up or run-down.
These rules are used in addition to the rule that any one plot outside of the 3-sigma
limit will indicate an assignable cause. These rules are referred to as the “Western
Electric rules,” because they were originally recommended in the handbook published
by the Western Electric Co., which was republished as the Statistical Quality Control
Handbook (AT&T 1958).
A caution may be in order here. Each rule that is employed to increase the sensitiv-
ity of a chart also carries a certain probability of false alarm. When several of the rules
are used simultaneously, the overall false alarm probability adds up (according to the
addition theorem of probability). Therefore, all the rules should not be used simulta-
neously. We recommend, based on experience, use of only the rules with runs (Rules 3
and 4 above) to supplement the rule of a single value falling outside the 3-sigma limits.
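The two run-based rules are equally easy to automate; a minimal sketch is below, where “more than seven” is taken as eight or more and the function names are ours.

# Run rules: more than seven consecutive plots on one side of the centerline,
# or more than seven plots steadily running up or down, signal an assignable cause.
def run_on_one_side(points, centerline, length=8):
    longest = count = 0
    prev_side = None
    for x in points:
        side = x > centerline
        count = count + 1 if side == prev_side else 1
        prev_side = side
        longest = max(longest, count)
    return longest >= length

def run_up_or_down(points, length=8):
    longest = count = 1
    prev_dir = None
    for a, b in zip(points, points[1:]):
        direction = b > a            # ties are treated as "not increasing"
        count = count + 1 if direction == prev_dir else 2  # a run of two plots starts
        prev_dir = direction
        longest = max(longest, count)
    return longest >= length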
4.3.2.15 Patterns in Control Charts Besides runs, other telltale signs also appear in
control charts. Some of these patterns are shown in Figure 4.9. The comments accom-
panying each pattern suggest how an analyst can interpret the patterns. The patterns
indicating shift in the process mean (b, c, and d) are obvious, and the pattern in e, the
one we will see in a process that has never been controlled before, indicates that many
assignable causes are in play and need action from the quality analyst. The cycles and
trends are the most common patterns that help in discovering causes.
Figure 4.9 Patterns in control charts: (a) natural pattern, the pattern expected from a process in control; (b) sudden change in level; (e) instability, many causes in action, possibly overcontrol by the operator; (f) interaction, the plot in one period is related to plots in a previous period; (g) mixtures, sampling done from two different populations mixed together; (h) cycles, affected by cyclic behavior of some process parameters. (Compiled from AT&T, Statistical Quality Control Handbook. 2nd ed. Indianapolis, IN: AT&T Technologies, 1958.)
Although the R-chart is commonly used to control the process variability because of
the simplicity in computing R, in several situations the S-chart, or the standard devia-
tion chart, is preferred. For example, when the sample size must be larger than 10
because that extra sensitivity is needed for the X -chart, the R-chart cannot be used
because of the poor efficiency (i.e., larger variability) of the statistic R when the sample
size is large. Furthermore, with the availability of the modern calculator and imple-
mentation of SPC tools on the computer, the simplicity advantage of the R-chart may
not be a real advantage any longer. Therefore, the S-chart can be used to take advantage
of its efficiency. For the S-chart, the standard deviation S = √[Σ(Xi − X)²/(n − 1)] is calculated for each subgroup and plotted on a chart with limits calculated and drawn for the statistic S.
The control limits for the X- and S-charts are:

UCL(X) = X + A3S    CL(X) = X    LCL(X) = X − A3S
UCL(S) = B4S    CL(S) = S    LCL(S) = B3S

where A3, B3, and B4 are factors that give 3-sigma limits for X and S and can be found
in standard tables (Table 4.2, or Table A.4 in the Appendix). The only difference
between using the S-chart instead of the R-chart in conjunction with the X -chart is
in calculating the value of S instead of R for each sample, and using the values of S
for calculating the limits. The methods of charting and interpreting these charts are
the same as those for the X- and R-chart combination. Example 4.3 shows how the X- and S-charts are made.
Example 4.3
The data from Example 4.1 is reproduced in Table 4.4 and the calculation of X and
S is shown in the table. Calculate the limits for the X and S-charts.
Solution
From Table 4.2, for n = 5, A3 = 1.43, B3 = 0, and B4 = 2.09. With X = 21.37 and S = 1.2 calculated from the data, the limits are:

UCL(X) = X + A3S = 21.37 + 1.43(1.2) = 23.09
LCL(X) = X − A3S = 21.37 − 1.43(1.2) = 19.65
UCL(S) = B4S = 2.09(1.2) = 2.51
CL(S) = S = 1.2
LCL(S) = B3S = 0
Figure 4.10 X- and S-charts for the data of Example 4.1, drawn using Minitab.
For this same set of data, we had prepared the X- and R-charts in Example 4.1.
In comparing the R-chart in Figure 4.4 and the S-chart in Figure 4.10, we notice that
Sample 22, which had its R value outside the limits in the R-chart, has its S value
inside the limits in the S-chart. The S-chart shows that the process is in-control even
if the S value of the one sample is very close to the limits in the S-chart. We have to
believe that the S-chart represents the true state of affairs better than the R-chart in
this case, because the statistic S is a better estimator of the process variability than the
statistic R.
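The limit arithmetic for the X- and S-charts parallels that of the X- and R-charts; the sketch below reproduces the Example 4.3 numbers, with the n = 5 constants A3, B3, B4 supplied as defaults (the function name is ours).

# 3-sigma limits for the X-bar and S charts using table constants for n = 5.
def xbar_s_limits(xbar_bar, s_bar, A3=1.427, B3=0.0, B4=2.089):
    xbar_limits = (xbar_bar - A3 * s_bar, xbar_bar, xbar_bar + A3 * s_bar)
    s_limits = (B3 * s_bar, s_bar, B4 * s_bar)
    return xbar_limits, s_limits

x_lim, s_lim = xbar_s_limits(21.37, 1.204)
print([round(v, 2) for v in x_lim])  # [19.65, 21.37, 23.09]
print([round(v, 3) for v in s_lim])  # [0.0, 1.204, 2.515]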
Before we discuss the attribute control charts, we want to discuss one more chart that can
be used with measurements; in fact, it can be used with both measurements and attributes.
In this chart, there are no limit lines. The condition of the process, whether it is in control
or not, is determined by merely counting the length of runs and the number of runs. (A
run, we may recall, is a sequence of plots having a common characteristic, such as all plots in the sequence falling above or below the center line, or each plot in the sequence falling below or above the previous plot.) Rules are available, as stated below, to determine when the
process is in control and when it is not. This chart can be used with measurements such as
diameter, weight of parcels, waiting time in the doctor’s office, etc., or attributes such as
number of defects on a casting, number of patients waiting for appointment, or number of
C-sections performed in a month at a clinic. This chart can be used with single observa-
tions (X values) or averages (X values), ranges (R values), standard deviations (S values),
proportion defectives in a sample (p values), etc. The main advantage of the run-chart is
its simplicity and universal application. This chart is widely used in the healthcare industry.
The procedure consists of plotting the data chronologically and counting the length
of runs and number of runs and deciding the condition of the process based on certain
rules. One set of rules from Perla et al. (2011) is given below as an example. The pro-
cess is not in control if:
• there is a run of six or more above or below the median.
• there is a run up or run down of five or more points.
• there are too many or too few runs above or below the median as determined
by comparing with tabled critical values.
• there is an astronomical point—a plot that is obviously an outlier.
While counting the run lengths, plots on the median are not to be counted. Tables
are available in Perla et al. (2011) that give critical values for too few runs or too many
runs for given size of the data.
There are many such sets of rules associated with run-charts. Montgomery (2013, p. 205) lists a few more and warns against using too many of the rules at the same time because the false alarm probabilities “add up” according to the addition theorem of probability. Therefore, we recommend using only the following two rules. After all, the hallmark of the run-chart is its simplicity, and there is no sense in making it complicated by adding too many rules. Consider that the process has changed (is affected by an assignable cause) if either of the following conditions applies:
1. More than seven consecutive plots fall above or below the center line.
2. More than seven plots are in a run-up or run-down.
Note that these are the same rules that were employed with the runs on the X -chart.
Example 4.4 below shows the use of the run-chart to monitor the fasting blood glucose of a diabetic patient. (This example uses the same data used for the X- and R-charts in an earlier example.)
Example 4.4
Figure 4.11 shows the plot of daily fasting blood glucose, taken before breakfast,
of a diabetic patient. Diabetics are advised to monitor the condition of the disease
by monitoring the amount of glucose in their blood. The doctor assisting this patient
has recommended that the patient should bring the blood glucose reading below
120. The patient uses the run-chart to monitor the blood glucose level.
Solution
Figure 4.11 shows the run-chart made using the Minitab software. Applying Rule 1,
there are two runs with length equal to 6 above the center line, and another run
with length equal to 11 below the center line after the 45th observation. This shows
that the patient is not able to control the fasting blood sugar. Applying Rule 2, the
longest run, a run-down, has a length equal to 5, so there is no signal for trends.
There is wide swing in the average level. The glucose level is quite above the
desired level of 120. Some intervention, change of medication, change in diet, and/
or exercise may be warranted.
Figure 4.11 Run-chart of the daily fasting blood glucose (mg/dL) of a diabetic patient, plotted in order of observation (drawn using Minitab).
One aspect visible in the run-chart that was not apparent in the X-chart made earlier with the same data is the long run of 11 plots below the center line starting from the 45th plot. This means that for a considerable length of time the blood glucose level had been well below the overall average before it started moving up. If the patient investigates why such a long run occurred and replicates those conditions in the future, he may be able to get a handle on the high level of glucose in the blood. He will also stand to gain from collecting and plotting the data as a chart so that he can understand the changes taking place in his health condition.
As mentioned before, the run-chart is simple to use, as there are no control limits and no calculations to be made for limits. However, because the rules used for determining the process condition are the same for all types of data, the power of the chart may differ with the type of data. The run-rules adopted for such charts are usually chosen so that the chart has a detection ability approximately “equal” to that of the Shewhart chart with 3-sigma limits.
We will discuss here two attribute charts: the P-chart and the C-chart. The P-chart
is used to control and minimize the proportion defectives in a process, and the
C-chart is used to control and minimize the number of defects per unit produced
in a process. The details on where they are used and how the limits are calculated
are explained below. Some minor variations of the P-chart and C-chart are also
included here.
4.4.1 The P-Chart
The control limits for the P-chart are:

UCL(P) = p̄ + 3√[p̄(1 − p̄)/n]
CL(P) = p̄
LCL(P) = p̄ − 3√[p̄(1 − p̄)/n]

where p̄ is the average proportion defectives in about 25 samples. Thus, about 25
samples from the process are needed to calculate the limits and start using the P-chart
to control a process.
To make the meaning of the notations clear:
• p is the unknown proportion defectives in the population that we want to control;
• P is the statistic that represents the proportion defectives in a sample and is
used to estimate p;
• pi is the proportion defectives, or the value of the statistic P, in the i-th sample; and
• p̄ = Σpi/K is the average of the pi values, where K is the number of samples taken.
The chart takes the name of the statistic being plotted. In this case, the statis-
tic plotted is P (in capital) and the chart is called the P-chart. (This will be the
convention we use in naming the charts although other authors use various other
conventions.)
Derivation of the above formulas for the limits is given in Chapter 5. For now,
however, we just want to recognize that P is related to D, the number of defectives
in the sample, which is a binomial random variable. The limits for P are calculated
as 3-sigma limits where “sigma” stands for the standard deviation of the statistic
P, σ P.
Example 4.5
The process was an automatic lathe producing wood screws. Samples of 50 screws
were taken every half-hour, and the screws were gaged for length, diameter of head,
and slot position, and were also visually checked for finish. The number rejected in
each sample for any of the reasons was recorded, as shown in Figure 4.12. Prepare a
control chart to monitor the fraction defectives in the process.
Solution
The fraction defectives in the samples are computed as pis. For example, the propor-
tion defectives in Sample 5 is 3/50 = 0.06 and that in Sample 14 is 1/50 = 0.02. The
average of the pi values, p̄, is computed as 0.027. The limits are:

UCL(P) = 0.027 + 3√(0.027 × 0.973/50) = 0.096
CL(P) = 0.027
LCL(P) = 0.027 − 3√(0.027 × 0.973/50) = −0.042 → 0
Note that the denominator within the radical sign in the limit calculation is entered
as 50 being the sample size or number of units in each sample. Note also that when
the LCL is obtained as a negative number, it is rounded off to zero. The pi values are
shown plotted in Figure 4.12a in relation to the limits. The chart shows that the pro-
cess is not in-control. Action was necessary to find out why there were pi values outside
the limits. The reason was that several large pi values occurred at the start of the pro-
cess because the process was going through a set up. When the set up was complete,
the pi values settled down at a lower level.

Figure 4.12 (a) An example of a P-chart. The 25 subgroups of n = 50 wood screws had the following numbers of defectives: 1, 2, 5, 6, 3, 5, 2, 1, 1, 0, 0, 1, 0, 1, 0, 2, 1, 0, 0, 1, 1, 0, 0, 1, 0. (b) P-chart after removing the p values outside the limits.
When the causes for the values outside the limits are known and there is assur-
ance that these causes would not reoccur, the pi values outside the limits are removed
from the data and new limits are calculated. The pi values from Samples 3, 4, and
6 are removed, and the new limits are calculated from the remaining 22 subgroups
as follows:
p̄ = 0.36/22 = 0.016

UCL(P) = 0.016 + 3√(0.016 × 0.984/50) = 0.069
CL(P) = 0.016
LCL(P) = 0.016 − 3√(0.016 × 0.984/50) = −0.037 → 0
Again, when the lower control limit is obtained as a negative quantity, it is rounded
up as 0. We see that all the remaining pi values fall within the new limits, and these
new limits can be used for further control of the process. The P-chart with the new
limits, drawn using Minitab, is shown in Figure 4.12b.
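The arithmetic of this example can be scripted as below, using the defective counts listed with Figure 4.12 and a sample size of 50; negative lower limits are rounded up to zero as described above, and the function name is ours.

# P-chart limits from the numbers of defectives in equal-size samples.
from math import sqrt

def p_chart_limits(defectives, n):
    p_values = [d / n for d in defectives]
    p_bar = sum(p_values) / len(p_values)
    spread = 3 * sqrt(p_bar * (1 - p_bar) / n)
    return max(p_bar - spread, 0.0), p_bar, p_bar + spread   # LCL, CL, UCL

counts = [1, 2, 5, 6, 3, 5, 2, 1, 1, 0, 0, 1, 0, 1, 0, 2, 1, 0, 0, 1, 1, 0, 0, 1, 0]
print([round(v, 3) for v in p_chart_limits(counts, 50)])      # [0.0, 0.027, 0.096]

# Revised limits after removing Samples 3, 4, and 6 (0-based indices 2, 3, 5):
kept = [d for i, d in enumerate(counts) if i not in (2, 3, 5)]
print([round(v, 3) for v in p_chart_limits(kept, 50)])
# [0.0, 0.016, 0.07]; the hand calculation above, using p-bar rounded to 0.016, gives 0.069.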
4.4.2 The C-Chart
The C-chart is used when the quality of a product is evaluated by counting the number
of blemishes, defects, or nonconformities on units of a product. For example, the num-
ber of pinholes may be counted to determine the quality of glass sheets, or the number
of gas holes may be counted to determine the quality of castings. In these cases, a cer-
tain number of nonconformities may be tolerable, but the number must be monitored,
controlled, and minimized. The C-chart would be the appropriate tool for this purpose.
The C-chart is also known as the “defects-per-unit-chart” or “control chart for non-
conformities.” (The term nonconformity is just a politically correct term for a defect.)
The procedure consists of selecting a sample unit from the process at regular inter-
vals and counting the number of defects on it. These counts per (sample) unit are the
observed values of the statistic C and the value of C at the i-th sample is noted as ci.
After inspecting about 25 sample units, the average of the ci values is calculated as c = Σci/k, where k is the number of units from which the limits are calculated. The limits are then calculated as:

UCL(C) = c + 3√c
CL(C) = c
LCL(C) = c − 3√c
Example 4.6
The process is a laminating press that puts plastic lamination on printed art sheets,
which are later cut and finished into a credit card-type product. The number of defects, such as bubbles, tears, or chicken scratches, on the laminated sheets is to be controlled by counting the number per sample sheet.
Solution
A C-chart is used to control the defects per sheet. The data obtained from 25 sample
sheets is shown in Figure 4.13a. The figure also shows the calculation of control
limits and the plot of the observed values of C. The limits are calculated as follows:
c = 14.1
UCL(C) = 14.1 + 3√14.1 = 25.4
CL(C) = 14.1
LCL(C) = 14.1 − 3√14.1 = 2.8
In this case, the LCL came out to be >0. If the LCL came out to be a negative value,
it must be rounded up to 0—as we did with the P-chart.
The chart shows the process not-in-control. It is possible, as in other charts, to
remove the points outside the limits from the data and recalculate the limits for future
use, assuming that the causes have been rectified. In this process, the process could
go out of control for any number of reasons like loss of steam supply to the platens
of the press, removal of the art sheet from the press too soon or too late, and so on.
Assuming the cause for the Sample 20 is identified and rectified, it is removed, and
new limits are calculated as follows: The new c is 13.5, and the new limits are:
The remaining ci values are all within these limits, and these limits can be used for
the next stage of control. The chart with the revised limits, drawn using Minitab, is
shown in Figure 4.13b.
250 A Firs t C o urse in Q ua lit y En gin eerin g
Unit #: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
No. of defects: 10 17 16 20 10 14 07 14 19 16 21 13 10 11 25 15 11 12 08 30 12 18 06 10 08
c = 14.1 + 3 14.1 = 25.4
c = 14.1 – 3 14.1 = 2.8
30
UCLc = 25.4
c = Σ c/K = 353/25 = 14.1
20
CLc = 14.1
UCLc = c + 3
LCLc = c – 3
10
LCLc = 2.8
0
(a)
25
UCL = 23.76
20
Sample count
15
C = 12.96
10
5
LCL = 2.158
0
0 10 20
(b) Sample number
Figure 4.13 (a) Example of a C-chart. (b) C-chart with revised limits.
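A corresponding sketch for the C-chart, using the defect counts listed with Figure 4.13 (the function name is ours); the lower limit is rounded up to zero whenever the formula gives a negative value.

# C-chart limits from counts of defects per sample unit.
from math import sqrt

def c_chart_limits(counts):
    c_bar = sum(counts) / len(counts)
    spread = 3 * sqrt(c_bar)
    return max(c_bar - spread, 0.0), c_bar, c_bar + spread    # LCL, CL, UCL

defects = [10, 17, 16, 20, 10, 14, 7, 14, 19, 16, 21, 13, 10, 11, 25,
           15, 11, 12, 8, 30, 12, 18, 6, 10, 8]
print([round(v, 1) for v in c_chart_limits(defects)])          # [2.8, 14.1, 25.4]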
As mentioned earlier, this method of calculating control limits for the future by
using the remaining data is an accepted procedure, because the alternative of taking
a whole new set of samples to recalculate limits is expensive and time consuming.
When, however, the process is severely not-in-control and several assignable causes
are discovered and removed, the first set of data may have no relevance to the future
process. In that case, a new set of data should be taken from the “new” process, and
new limits should be computed from them for future use. Experienced observers say
that if more than 50% of original data has to be discarded because they came from
assignable causes, then it is advisable to take a new set of data from the new process.
This is the rule of thumb for all control charts.
Returning to Example 4.6, the defects on the laminated sheets could be caused by several factors, though the temperature of the platens and the pressure applied were considered the most important parameters.
few abnormal conditions that resulted in plots outside the limits. It also brought out
the need for further investigation, because the average number of scratches, 13 per
sheet, was too high even after the process was in-control. This is an example where the
process is in-control but not acceptable.
An experiment was conducted to discover the optimal pressure and temperature of
the press. A specialist in plastic technology was brought in to help with the investi-
gation. Regression analysis (see Chapter 8) was used to find the best combination of
temperature and pressure to use. Application of the optimal parameters improved the
process considerably. This is an example where the control chart alone was not enough
to improve the process. It had to be supplemented by additional experimentation for
discovery of root causes and discovering and implementing the best possible solutions.
Example 4.7
As another example of the use of the C-chart, we use the data from a case study in the book Measuring Quality Improvement in Healthcare by Carey and Lloyd (2001).
The data represent number of inpatient falls in a hospital per month. The hospital
would want to monitor the number of falls and would want to reduce them. The data
are presented in Table 4.5. The hospital had introduced a fall-prevention program at
the beginning of the ninth month and so the data following the ninth month repre-
sents data after this implementation has been made. The hospital would want to know
if the fall-prevention program has indeed resulted in an improvement. (The data in the table have been read off the figure provided in the case study, as the case study does not give the number of falls, nor the number of patients in each month.)
Solution
Assuming that there is an approximately equal number of inpatients in each of the months under study, the opportunity for the occurrence of falls in each month can be assumed to be the same. So, a C-chart would be appropriate to use. (If the number of inpatients varied considerably from month to month, then a U-chart would have been appropriate.) First, we analyze the data the same way the authors, Carey and Lloyd, did.
[Figure 4.14 (a) C-chart for inpatient falls in a hospital, limits based on before-improvement data: CL = 51.33, LCL = 29.84. (b) C-chart for inpatient falls in a hospital, limits based on all data: CL = 42.92, LCL = 23.26.]
We calculate the control limits out of the nine data points from months 1 to 9 in
order to see if the fall prevention strategy implemented after Month 9 has had any
effect on reducing falls. Figure 4.14a is the C-chart made for the data using Minitab.
We see in Figure 4.14a that the process is not in control as two plots, Plots 11 and 23, are
below the lower control limit. This can be interpreted as the average number of falls having
decreased after implementation of the fall-prevention program. Also, notice the run below the center line starting from the 10th month, which bears witness to the change in the number of falls after the ninth month. In fact, the average falls per month, calculated from
the data, after the ninth month is 37.87 compared to 51.33 prior to the ninth month.
A question may arise regarding calculating the control limits with as few as nine
observations of C values. Normally, we would like to have at least 25 observations to
calculate the limits. An argument could be that this is reality; all we have are nine
months of data. This lack of enough data to calculate limits will be worrisome when
we draw conclusions from the graph. In this example, the long run below the center line after the ninth month provides conclusive evidence that the fall-prevention program has made a difference. However, it is advisable to stick to established sta-
tistical rules while analyzing data using control charts, especially when dealing with
marginal situations where the evidence does not provide obvious conclusions. So, we
redrew the control chart with limits being calculated using all the 24 data points.
Figure 4.14b shows this chart. This chart, too, shows that the process is not in control, because one of the values is outside the upper limit; the average C value in the beginning months was higher than in the subsequent months. At the least, it confirms the conclusion we drew from the earlier chart, a confirmation we needed because the limits for the earlier chart were calculated from a small number of sample points.
4.4.3.1 The P-Chart with Varying Sample Sizes The discussion of the P-chart earlier
assumed that samples of size n could be drawn repeatedly from the process to be
controlled and that the number of defective units in each of the equal-size samples
could be counted. This may be true in many situations. However, there are situations
where samples of the same size may not be available each time we want a sample. For
example, in one situation in the manufacture of tool boxes, all boxes produced in a day
constituted the sample for the day, for a P-chart used for controlling defectives due to
poor workmanship. The number of boxes produced varied day to day and so did the
sample size from sample to sample; hence, a P-chart that can accommodate varying
sample sizes was needed.
The sample size n goes into the calculation of control limits. When the sample size
is different from sample to sample, the question arises as to which sample size should
be used in the formulas to calculate limits.
One way to handle the varying sample sizes recommended in the literature is to use an average value for n, say n̄, if the sample size does not vary too much (that is, the largest and the smallest do not vary by more than 25% from the average). This rule is not based
on any mathematical reasoning but is considered acceptable for practical purposes. A
more exact approach would be to calculate the limits for the i-th sample, UCLi(P) and LCLi(P), using the sample size ni of that sample, and then compare the observed pi with the limits thus calculated. The P-chart then looks like the one shown in Figure
4.15. These limits are called “stair-step” limits because of the way they look.
Yet another approach is a compromise between the two approaches described above. In this approach, the limits for the chart are calculated using n̄, which results in constant limits for all samples. If a pi value falls close to a limit, then the limits for that sample are calculated using the exact size of that sample and compared with the pi value. This contributes to speedy charting while also allowing for clarification in doubtful cases.
Example 4.8a
Table 4.6 shows data on the daily production of castings in a foundry for a period of
35 days and the number found each day with the “burn-in” defect. Draw a P-chart
for the proportion of castings with burn-in defects and make any observation that
is discernible from the chart.
Solution
The control limits are calculated for each sample based on the size of that sample
using the following formulas:
UCL(P) = p̄ + 3√[p̄(1 − p̄)/ni]

CL(P) = p̄

LCL(P) = p̄ − 3√[p̄(1 − p̄)/ni]

where ni is the size of the i-th sample and p̄ is the average proportion defective in the 35 samples. For example, for a sample of size ni = 158:

UCL(P) = 0.436 + 3√[0.436(0.564)/158] = 0.554

LCL(P) = 0.436 − 3√[0.436(0.564)/158] = 0.318
Therefore, the observed value of pi = 0.658 for this sample is outside the limits.
As another example, for the 35th sample,
UCL(P) = 0.436 + 3√[0.436(0.564)/151] = 0.557

LCL(P) = 0.436 − 3√[0.436(0.564)/151] = 0.315
The observed value of pi = 0.556 is inside the limits.
Suppose we want to calculate the constant control limits based on the average sample size, n̄ = 161; the limits would be:

UCL(P) = 0.436 + 3√[0.436(0.564)/161] = 0.553

LCL(P) = 0.436 − 3√[0.436(0.564)/161] = 0.319
The P-chart with varying sample sizes plotted using Minitab software is shown in
Figure 4.15a. The constant control limits based on an average sample size are also shown
on the figure. In this case, we see that the constant control limits show the same number
of plots outside the limits as the varying limits. The process is not in-control; there is a
sudden increase in the defective rate after the 15th sample. The defective rate also shows
an upward trend even before the 15th sample. Further investigation is therefore needed to
find the causes that create a trend in the defectives that results in plots outside the limits.
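The stair-step calculation can be scripted easily. The sketch below is an illustration only (the daily counts shown are hypothetical, not the data of Table 4.6); it computes p̄ from all the samples and then an individual pair of limits for each sample from its own size ni.

```python
import math

def p_chart_stairstep(defectives, sample_sizes):
    """Stair-step P-chart: a separate pair of limits for each sample based on its own n_i."""
    p_bar = sum(defectives) / sum(sample_sizes)
    rows = []
    for d, n in zip(defectives, sample_sizes):
        half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        lcl, ucl = max(p_bar - half, 0.0), p_bar + half
        p_i = d / n
        rows.append((p_i, lcl, ucl, lcl <= p_i <= ucl))
    return p_bar, rows

# Hypothetical daily production counts and numbers of defectives, for illustration only
n_i = [158, 172, 151, 160, 149]
d_i = [104, 70, 84, 66, 71]

p_bar, rows = p_chart_stairstep(d_i, n_i)
print(f"p-bar = {p_bar:.3f}")
for day, (p_i, lcl, ucl, ok) in enumerate(rows, start=1):
    status = "in control" if ok else "OUT OF CONTROL"
    print(f"day {day}: p = {p_i:.3f}, limits = ({lcl:.3f}, {ucl:.3f}) -> {status}")
```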
Example 4.8b
As another example of the use of the P-chart, we discuss an example from the healthcare industry. These data come from a case study reported by Carey and Lloyd (2001) in their book on quality improvement in healthcare. Table 4.7 contains data on the number of C-section deliveries and the total number of deliveries in a hospital for 26 months. The hospital wanted to monitor the proportion of C-sections out of total deliveries and possibly reduce the proportion. There could be any number of reasons for doing this: the number of negative outcomes in deliveries increases with an increase in the proportion of C-sections, the cost of a C-section is higher than that of a natural birth, and hospital accreditation agencies encourage hospitals to avoid preventable C-sections.
Solution
A P-chart was made for the proportion of C-sections using Minitab, with sample
size varying from month to month.

Table 4.7 Monthly Data on Number of Deliveries and the Number of C-Sections among Them in a Hospital
Month:             1    2    3    4    5    6    7    8    9   10   11   12   13
C-sections:       65   64   77   59   64   74   72   67   59   65   60   68   62
Total deliveries: 370  383  446  454  463  431  443  451  433  407  381  406  374
Month:            14   15   16   17   18   19   20   21   22   23   24   25   26   27
C-sections:       48   57   64   66   55   51   82   65   69   62   66   58   47   59
Total deliveries: 355  393  417  434  421  417  444  429  411  386  357  373  370  415

[Figure 4.15 (a) Example of a P-chart with varying sample sizes, with P̄ = 0.4360 and the constant limits based on the average sample size (UCL = 0.5570) shown for comparison. (b) A P-chart for the proportion of C-sections out of total deliveries in a hospital, Jan 1990–March 1992, with P̄ = 0.1541.]

Figure 4.15b shows the P-chart drawn for the data. The chart shows that the process is in control with an average of 15.41%. This
average may be compared with a benchmark, a national average, or the regional
average for similar hospitals or with a goal set by the hospital for itself.
This average of 15.41% may be acceptable to the hospital, or maybe not. If the hospi-
tal would want to reduce the average even further, the data may be analyzed using other
bases of subgrouping. For example, subgrouping could be based on the doctors who perform the deliveries, on the theory that certain doctors may prefer C-sections more than others. Or, the data can be analyzed using subgroups based on the age of the mothers, on the theory that mothers in certain age groups may have a higher preference for C-sections than others. Or, the data can be analyzed based on the economic strata the mothers belong to, and so on. Such analysis would reveal some of the root causes responsible for keeping the proportion of C-sections high, which may lead to possible solutions resulting in a reduced proportion of C-sections. The control chart is thus a good tool not only for monitoring a measure of performance of a hospital but also for root cause analysis and improvement.
4.4.3.2 The nP-Chart For the P-chart, the statistic P, proportion defectives in a sample,
is calculated as P = D/n, where D is the number of defectives in a sample of size n. If P =
D/n, then nP = D. If, instead of plotting P, we plot nP, which is simply the number of defectives found in a sample, we get the nP-chart. The control limits must then be calculated for the statistic nP. The easiest way to calculate the limits for nP is to calculate the limits for
the corresponding P-chart first using the formulas given above and then multiplying the
limits by the sample size n. The following formulas can also be used for calculating the lim-
its, which have been obtained by multiplying by n the formulas for the limits of the P-chart:
UCL(nP) = np̄ + 3√[np̄(1 − p̄)]

CL(nP) = np̄

LCL(nP) = np̄ − 3√[np̄(1 − p̄)]
The chart with limits calculated as shown above is called the nP-chart. (They could
have called it the "D-chart!") The results from using an nP-chart will be the same as those from
the P-chart. For example, the two charts with the following two sets of limits will pro-
duce the same result. With the one on the left, the fraction defectives, pi, will be plotted,
and with the one on the right, the number of defectives, npi, will be plotted.
The advantage of using the nP-chart can easily be seen. First, there is no need to calculate pi for each sample. Second, most people prefer to deal with the number of defectives, which is a whole number, rather than the fraction defective, which is a decimal fraction. Of course, this chart cannot be used if n does not remain constant.
4.4.3.3 The Percent Defective Chart (100P-Chart) If, instead of multiplying the limits
of the P-chart by n, they are multiplied by 100, then the limits for percent defectives
are obtained. For example, the charts with the following two sets of limits are equiva-
lent. Whereas pi values will be plotted on the chart with the limits on the left, 100pi
(percent defectives) will be plotted on the chart with the limits on the right.
P-CHART 100P-CHART
UCL(P ) = 0.07 UCL(100P ) = 7.0
CL(P ) = 0.016 CL(100P ) = 1.6
LCL(P ) = 0.0 LCL(100P ) = 0.0
The advantage of using the 100P-chart, or percent defectives chart, is that the plot-
ted numbers will be large compared to the small fractions that are encountered with
the use of the P-chart. People like to handle large numbers, and they tend to grasp the
meaning of percent defectives more easily than fraction defectives.
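Because the nP- and 100P-charts are simple rescalings of the P-chart, their limits can be obtained by multiplying the P-chart limits by n or by 100. The short sketch below illustrates this; the p̄ and n values used are illustrative assumptions, not values taken from a table in the text.

```python
import math

def p_chart_limits(p_bar, n):
    """P-chart limits for a constant sample size n: (LCL, CL, UCL)."""
    half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    return max(p_bar - half, 0.0), p_bar, p_bar + half

def np_chart_limits(p_bar, n):
    """nP-chart limits: the P-chart limits multiplied by n."""
    return tuple(n * x for x in p_chart_limits(p_bar, n))

def chart_100p_limits(p_bar, n):
    """100P-chart (percent defective) limits: the P-chart limits multiplied by 100."""
    return tuple(100 * x for x in p_chart_limits(p_bar, n))

# Illustrative values: p-bar = 0.016 with samples of constant size n = 50
print("P-chart:    LCL = %.3f, CL = %.3f, UCL = %.3f" % p_chart_limits(0.016, 50))
print("nP-chart:   LCL = %.2f, CL = %.2f, UCL = %.2f" % np_chart_limits(0.016, 50))
print("100P-chart: LCL = %.1f, CL = %.1f, UCL = %.1f" % chart_100p_limits(0.016, 50))
```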
4.4.3.4 The U-Chart The U-chart is a variation of the C-chart. The C-chart described
previously can only be used if all the units inspected for the chart are identical—that
is, if the opportunity for a defect is the same from unit to unit. Several situations exist
in which an inspection station receives units that are similar but not identical, such as
television sets of different sizes, cars of different models, or printed circuits of different
configurations. When units of different sizes are being inspected and one chart is used
to monitor all the units arriving at the inspection station, the U-chart is needed. The
statistic U represents the average number of defects per (standard) unit.
When the size of units varies from unit to unit, we define one size as a standard
unit and then find out the number of standard units in each of the other sizes. This is
done by comparing the opportunities for a defect in any unit with that in the standard
unit (see the example below). We then calculate the average number of defects per
standard unit in each sample unit. This quantity is designated as U. The limits for U
are calculated using the following formulas:
UCL(U) = ū + 3√(ū/ni)

CL(U) = ū

LCL(U) = ū − 3√(ū/ni)
where ū is the average of the u values from the sample units and ni is the number of standard units in each sample unit. Because ni varies from sample to sample, the limits need to be calculated for each sample unit, and the ui value from each sample should be compared with its corresponding limits. We then have the same situation as in the
case of the P-chart with varying sample sizes (i.e., the chart with stair-step limits).
As an alternative, an average of the ni values can be calculated as n̄ and used in the denominator inside the radical sign in the above formulas, resulting in a constant set of limits. The example below illustrates the use of the U-chart.
Example 4.9
In a foundry that produces engine blocks, a salvage welder fills gas holes on castings
if the holes do not affect the structural strength of the castings. The number of holes
filled per casting, however, must be tracked, and any unusual occurrence like a sud-
den increase or decrease in number of holes (i.e., when the process is not-in-control)
must be conveyed to the mold line. A control chart is needed to monitor the holes
and possibly control the number of holes in the castings. The data on the size of the
sample units (castings) and the number of holes found in them are given in Table 4.8
for 20 castings.
Solution
Since the castings are of different sizes, we will use a U-chart. The block I-4 (inline
4-cylinder) is chosen as the standard unit. The numbers of standard units in other
blocks have been determined based on their surface areas (opportunities for defects)
and are noted in the table. The calculation of ui for each sample unit is shown in
Table 4.8. For example, for sample no. 5, there are nine holes on the casting, a V6, which is estimated to be equivalent to 1.5 standard units. So, the average number of holes per standard unit = 9/1.5 = 6.
ū = Σci / Σni = 143/32.7 = 4.373

n̄ = 32.7/20 = 1.635
The chart drawn using Minitab is shown in Figure 4.16. The chart shows the
limits calculated for each sample individually. For example, the limits are calculated
for the eighth sample as:
UCL(U) = 4.373 + 3√(4.373/1.0) = 10.646

CL(U) = 4.373

LCL(U) = 4.373 − 3√(4.373/1.0) = −1.9 → 0
The negative value for the LCL is rounded off to zero. The observed value u8 = 11 is outside the upper limit. As another example, the limits for the 20th sample are
UCL(U) = 4.373 + 3√(4.373/2) = 8.809

LCL(U) = 4.373 − 3√(4.373/2) = −0.06 → 0
[Figure 4.16 U-chart for the number of holes per standard unit in the castings, with ū = 4.373 and stair-step limits calculated for each sample (e.g., UCL = 8.809 and LCL = 0 for the 20th sample); the constant limit based on n̄, UCL(U) = 9.279, is also shown.]
The constant limits based on the average sample size n̄ = 1.635 are:

UCL(U) = 4.373 + 3√(4.373/1.635) = 9.279

LCL(U) = 4.373 − 3√(4.373/1.635) = −0.53 → 0
The control limit calculated using n̄ has been added to the graph in Figure 4.17
(discussed later in the chapter). In this case, we see that the constant control limits
and the varying control limits produce the same result, that is, if a u value is inside/
outside the varying limit, it is also inside/outside the constant limit.
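The U-chart calculation can be sketched in a few lines of Python. The data below are hypothetical values patterned after Example 4.9 (an I-4 block counts as 1.0 standard unit, a V6 as 1.5, and so on); each casting is given as (number of standard units, number of holes found).

```python
import math

# (standard units in the casting, holes found) -- hypothetical values for illustration
castings = [(1.0, 4), (1.5, 9), (2.0, 7), (1.0, 11), (1.5, 5)]

n_i = [n for n, _ in castings]
c_i = [c for _, c in castings]

u_bar = sum(c_i) / sum(n_i)   # average number of defects per standard unit
n_bar = sum(n_i) / len(n_i)   # average sample size, for the constant limits

print(f"u-bar = {u_bar:.3f}")
for i, (n, c) in enumerate(castings, start=1):
    u = c / n
    ucl = u_bar + 3 * math.sqrt(u_bar / n)            # stair-step limit for this casting
    lcl = max(u_bar - 3 * math.sqrt(u_bar / n), 0.0)  # negative limits are set to zero
    status = "in" if lcl <= u <= ucl else "OUT of"
    print(f"casting {i}: u = {u:.2f}, limits = ({lcl:.2f}, {ucl:.2f}) -> {status} control")

# Constant limits based on the average number of standard units, n-bar
ucl_const = u_bar + 3 * math.sqrt(u_bar / n_bar)
lcl_const = max(u_bar - 3 * math.sqrt(u_bar / n_bar), 0.0)
print(f"constant limits (n-bar = {n_bar:.2f}): ({lcl_const:.2f}, {ucl_const:.2f})")
```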
4.4.4.1 Meaning of the LCL on the P- or C-Chart The lower control limit does not have
the same significance with the P-chart, or the C-chart, as it has with the X̄-chart. If an X̄ value falls below the LCL of an X̄-chart, we will conclude that the process average is moving downward, below the desired standard level. On the other hand,
when a ci value on a C-chart falls below the LCL, it indicates that the average defects
per unit has decreased. Should we still worry about the process not being in control?
We may not worry, but we should be alerted to the fact that the process has changed, maybe for the better. But we should take extra caution to make sure that the
values below the lower limit are not caused by a faulty instrument or an erring inspec-
tor. Also, if recalculation of the limits is warranted, we do not remove values below the
lower limit in the C-chart (or in the P-chart) because they may reflect the new reality
about the process. Often, the values below the LCL will come inside the limit when
the limits are recalculated after removing those outside the UCL.
4.4.4.2 P-Chart for Many Characteristics One of the advantages of the P-chart is
that one chart can be used for several product characteristics. For example, if we are
inspecting bolts, we can use one chart for several characteristics such as diameter,
length, head size, cleanliness of the screw, etc. The product will be called a defective if
any one of the characteristics is outside the acceptable limits from either visual inspec-
tion or gauging. It is often a good idea to start using one P-chart for several character-
istics in order to identify the characteristic(s) that causes the most problems and then
use a P- or X̄-chart for those characteristics that need a closer watch.
4.4.4.3 Use of Runs The rules pertaining to runs above or below the CL, and to runs
up or runs down, can also be used with the P-chart and the C-chart. These rules are
especially useful when the average of P, or C, is decreasing and there is no LCL (i.e.,
LCL = 0). In such circumstances, it is only through the runs that changes in the aver-
age P or C can be noticed.
[Figure 4.17 (a) Control chart for fraction defectives in cable assembly, subgroups based on days (P̄ = 0.115). (b) Control chart for fraction defectives in cable assembly, subgroups based on assemblers (P̄ = 0.115). (c) Control chart for fraction defectives, subgroups based on days.]
The discussion so far has been limited to the X̄- and R-, P-, and C-charts, and some
minor variations of these. These are the basic tools of process control that are most
useful in industry because of their applicability to many situations. Table 4.9 summa-
rizes the information on these three basic types. The table shows where these charts
are used, how to calculate the limits, assumptions involved, recommended sample
size, the formulas used for calculating limits, and some minor variations.
A few more sophisticated control charts are available in the literature that are use-
ful under some special circumstances. The cumulative sum chart is used when higher
sensitivity is needed to discover small changes. The exponentially weighted moving
average (EWMA) chart is also useful when small changes must be detected. In addi-
tion, the EWMA chart and the moving average chart are preferred when multiple
units are not available in quick succession and, therefore, X̄- and R-charts cannot
be used. The median chart is known to perform well with populations that are not
normally distributed. Details of these charts can be found in textbooks in statistical
quality control such as Montgomery (2013) and Grant and Leavenworth (1996).
A few of these special charts, with details on how and when they are used, along
with their strengths and weaknesses, are covered in the next chapter. Chapter 5 also
includes a discussion on the theoretical basis of the formulas used for calculating the
control limits for the various control charts we discussed earlier in this chapter. The methods for evaluating the performance of the charts using operating characteristic curves are also included in Chapter 5. The remaining parts of this chapter deal with three important topics related to process control: implementing SPC on processes, process capability, and measurement system analysis.

[Table 4.9 Summary of the Three Basic Control Charts]
screen. There is much to be learned when the analyst observes the process firsthand
and sees how the observations are generated and how the instruments respond to pro-
cess conditions. There is also valuable information to be gained from interacting with
operating personnel and working with the raw data by hand.
The next question would be which variable to track? If a product characteristic is
giving trouble to a customer, then that characteristic should be monitored. If there are
several characteristics that need attention, a prioritized list must be generated, maybe,
based on payback or other criteria that would make sense. The process variables that are
responsible for the characteristic in question must then be identified. A cause-and-effect
diagram, prepared by the team in a brainstorming session, may be all that is needed to
identify the cause variable(s). However, if the relationship between the characteristic
and the process variables is not obvious or needs to be confirmed, an experiment may be
necessary. Once the process variables are identified as being responsible for the product
quality, they must be controlled using the appropriate control charts.
The next question would be what type of chart to use? If the product character-
istic or the process variable to be monitored or controlled is a measurement and is
expected to follow a normal distribution, the X̄- and R-chart combination would be the most suitable choice, provided that suitable samples of size between 2 and 10 are available. If the sample size has to be large (>10), either because of a practical necessity or because the extra power from a large sample size is needed, then the X̄- and S-chart combination should be used. If multiple units are not available for forming
samples because the observations are slow in coming, then a chart, such as the chart
for individuals (X-chart), the moving average chart, or the EWMA chart, would be
the choice. (These charts for slow processes are discussed in Chapter 5.)
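The chart-selection guidance in this paragraph for measured characteristics can be captured in a small helper; this is only a sketch of the decision rule stated above, with a hypothetical function name (attribute data are treated separately later in this section).

```python
def suggest_variables_chart(subgroup_size: int) -> str:
    """Suggest a chart for a measured characteristic assumed to be normally distributed,
    following the sample-size guidance given in the text."""
    if subgroup_size == 1:
        # Observations arrive slowly; multiple units per sample are not available
        return "chart for individuals (X-chart), moving average chart, or EWMA chart"
    if 2 <= subgroup_size <= 10:
        return "X-bar and R-charts"
    return "X-bar and S-charts"  # subgroup size larger than 10

for n in (1, 5, 15):
    print(f"subgroup size {n}: {suggest_variables_chart(n)}")
```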
Use of a control chart should reveal whether the process is in-control. If the process
is not in-control, efforts should be made to hunt out the assignable causes and elimi-
nate them. Relating what happens in and around the process—whether through use
of event logs, consultation with operators, or close observation of the process—to the
signals appearing on the control charts should usually lead to the discovery of assign-
able causes. An intimate knowledge of the process, including the physics, chemistry,
and technology of the process, is helpful and is often necessary in the search of assign-
able causes. This is when the knowledge of the operators or process engineers becomes
most helpful. Patterns in the plots on the control charts, such as trends and cycles,
would also give clues to the sources of the assignable causes.
When an assignable cause is identified, steps should be taken to eliminate it com-
pletely; in other words, the remedies applied should be such that the assignable cause
will not affect the process again. Furthermore, we should keep in mind that the
assignable causes come and go, and lack of indication of them on the control chart for
a short period of time should not lull the analyst into believing that the process is in-
control. A process should not be declared in-control until the control chart indicates
the existence of no assignable cause for a sustained period of time. A rough rule of
thumb used by many practitioners (originally given by Dr. Shewhart) is that a control
chart should show an absence of assignable causes for at least 25 consecutive samples
before the process is considered to be in-control. Often, a process may have to be observed for even longer periods before it is declared to be in-control.
Eliminating the first set of assignable causes should result in reduced variability.
This calls for revised control limits, which, in turn, may reveal the existence of more
assignable causes. The iterations of eliminating assignable causes, setting new limits, and
discovering and eliminating more assignable causes, should continue until the process
shows stability with respect to a set of control limits, which indicates that there are no
obvious assignable causes that can be economically eliminated. Most likely, when the
process has been brought in-control, the variability is small enough to meet the cus-
tomer’s specifications entirely. The capability of the process is now assessed using the
capability indices Cp and/or Cpk. (See the next section for the definition of these indices.)
If the capability is not adequate according to the customer’s requirements or the inter-
nal standards of the organization, experiments must be conducted to explore the process
variables and their levels to yield the product characteristic in question at the desired level
and the desired limits of variability. The experiments may have to be repeated and
remedies applied until the desired reduction in variability is achieved. When the capa-
bility goal is reached, the team should then turn their attention to the next project
on the prioritized list. The team should make sure, however, that the process that was
improved is monitored periodically to make certain that the process maintains the
gains and does not slip back to previous ways of operating. This becomes easier if the
team had been in communication with the process owners and process operators and
enlisted their cooperation in the discovery and change process. Unless the operat-
ing personnel buy into the solutions for removing the assignable causes, the solutions
would have little chance of remaining implemented on the process.
If the product characteristic to be controlled is of the attribute type, one of the
attribute charts—the P-chart, C-chart, or one of their variations—will be appro-
priate to use. The best way to differentiate between the situation when a P-chart is
needed and when a C-chart is needed is to ask the question: Is the opportunity for
a defect or a defective occurring infinity (very large)? If the opportunity is infinity,
then the attribute in question follows the Poisson law, so a C-chart is appropriate. If
the opportunity for occurrence of the defect or defective is not infinity (i.e., not really
large), then occurrence of the attribute is governed by the binomial distribution, so a
P-chart should be used. Again, if the process is not in-control, the assignable causes
must be discovered and eliminated. As mentioned, the discovery of an assignable
cause requires a good understanding of the process and its surroundings, as well as a
relational study of the control chart signals and the happenings on the process. This
requires intimate knowledge about the working of the process combined with an abil-
ity to interpret the statistical signals from the charts.
It is quite possible that when the assignable causes are found and eliminated, the
process achieves defective levels that do not exceed the customer's stipulation or the organization's own goals.
We want to emphasize a point here: the value of these attribute charts lies not so
much in bringing a process in-control (although it is a necessary first step) as it is in
using them to continuously identify and eliminate assignable causes until the proportion
defectives or average number of defects per unit is reduced to very low, near-zero levels.
The capabilities of processes that produce attribute outputs are measured in terms
of defectives per thousand units or defects per thousand opportunities. When a very
high level of quality is required, the capabilities are measured in terms of defectives
per million units or defects per million opportunities. The capability can also be
measured in the number of sigmas according to Motorola’s scale with six-sigma as a
benchmark (see Chapter 5, Table 5.10). If the capability is not adequate, experiments
should be conducted to investigate the factors (process variables) that are responsible for
the attribute and to discover the important factors and their optimal levels to bring the
attribute to the acceptable level. When the capability has reached the desired level, the
team should move on to the next project on the priority list. Figure 4.18 summarizes
the procedure for implementing the SPC methods in a production environment.
After high priority projects have been completed, an organization-wide survey should
be made to identify the key processes—and the key variable in those processes—that
determine the quality and customer satisfaction of products produced by the organiza-
tion. Through data collection on the key variables and computation of the capability
indices Cp and Cpk (discussed in the next section), the variability in the key variables,
and thus the capability of the key processes in the organization, can be established and
monitored. Of course, when a key variable is found to be not-in-control, effort must
first be made to bring the process in-control with respect to that variable.
The computerization of data collection, data analysis, and presentation of summary
information would greatly help the effort in capturing the capabilities of key pro-
cesses. The summary information on the important process capabilities will present a
perspective on the quality capability of the whole organization. Sometimes, an aver-
age Cp or Cpk, averaged over all key processes, or the proportion of the key processes
having capability Cp or Cpk equal to or above 2.0, is used as an overall measure of the
capability of an organization. Such measures can be used to monitor progress made by
the organization towards quality improvement. Processes with inadequate capabilities
must be identified and prioritized. Variability in the processes should be continuously
reduced, and the capabilities continuously improved, using repeated experiments,
which will improve product quality and customer satisfaction.
Note: When we mention that experiments are needed, this does not always mean that
a multifactor, factorial, or fractional factorial experiment is needed. Simple experi-
ments with two levels of a single factor can often reveal much-needed information.
4.6 Process Capability
As pointed out earlier, a process that is in-control may not be fully capable of producing
products that meet the customer’s specifications. An analysis is necessary to verify if
the in-control process is also in compliance with specifications.

[Figure 4.18 Procedure for implementing SPC methods in a production environment: select a variable to control (using brainstorming, process charting, and cause-and-effect diagrams); choose the chart type based on whether the opportunity for a defect is very large and on the sample size (U-chart, C-chart, P-chart with varying or constant n, or the variables charts for n = 2–10, n > 10, or n = 1); bring the process in-control by hunting for and removing assignable causes; then conduct experiments to reduce the defect rate or the variability until the process is capable; finally, implement controls and go to the next project on the priority list.]

A process that produces
“all” units of its output within specification is said to be a capable process. To assess
capability, a process, represented by a distribution, is compared with the specifications
it is expected to meet. If “all” of the process is not within specification, adjustments
to the process must be made in order to bring the process to full capability. Such an
analysis is called the “process capability analysis.” It is necessary here to re-emphasize
a point made earlier that a process must be brought in-control first before its capability can be assessed.

When a process produces a measurable output that can be assumed to have a normal
distribution, the condition of the process can be fully described by two measures—its
mean μ and its standard deviation σ. The μ and σ can be estimated from data obtained
from a random sample (≥50) or from the data collected for control charts. If from the former source, the average X̄ and the standard deviation S will estimate μ and σ, respectively. If from the latter source, then the grand average (the CL of the X̄-chart) will estimate μ, and R̄/d2 will estimate σ, where R̄ is the CL of the R-chart and d2 is the correction factor that makes R̄ an unbiased estimator for σ. If the S-chart is used instead of the R-chart, then S̄/c4 will provide an estimate for σ. Values of d2 and c4 for various sample sizes are available in standard tables such as Table A.4 in the Appendix. In summary, μ̂ = X̄ and σ̂ = S when estimated from a random sample, and μ̂ = the grand average with σ̂ = R̄/d2 (or S̄/c4) when estimated from control chart data.
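As a small illustration of the estimation from control chart data, the sketch below uses the standard d2 and c4 constants for subgroup sizes 2 through 5 (Table A.4 values); the R̄ and S̄ numbers passed in are assumed values for illustration only.

```python
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}    # d2 constants for subgroup sizes 2-5
C4 = {2: 0.7979, 3: 0.8862, 4: 0.9213, 5: 0.9400}  # c4 constants for subgroup sizes 2-5

def sigma_from_rbar(r_bar: float, n: int) -> float:
    """Estimate sigma as R-bar/d2 for subgroups of size n."""
    return r_bar / D2[n]

def sigma_from_sbar(s_bar: float, n: int) -> float:
    """Estimate sigma as S-bar/c4 for subgroups of size n."""
    return s_bar / C4[n]

# Illustrative values for R-bar and S-bar from subgroups of size 4
print(f"sigma-hat from R-bar: {sigma_from_rbar(2.30, 4):.3f}")
print(f"sigma-hat from S-bar: {sigma_from_sbar(1.05, 4):.3f}")
```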
When a process is compared to a given set of specifications, there are several possible
situations that could arise, as shown in Figure 4.19. From such a figure, a qualitative
assessment of whether or not the process meets the specifications can be seen. If the
process does not fully meet the specifications, we can see whether it is the excess vari-
ability or the lack of centering of the process that is causing values outside of specifica-
tion. Often, however, a quantitative assessment of the extent to which a process meets
the specification is desired. For example, we may want to compare the capabilities of
two machines, or two vendors, or we may want to set goals and monitor progress with
regards to capability while making improvements to processes. The capability indices,
which provide numerical measures quantifying the capability of a process, then become
useful. The two most common process capability indices, Cp and Cpk, are described next.
Cp = (USL − LSL) / 6σ

where USL and LSL are the upper and lower specification limits, respectively, and σ is the standard deviation of the process.
The Cp index simply compares the natural variability in the process, which is given
by 6σ based on normal distribution for the process, with the variability allowed in the
specification, which is given by (USL − LSL). If the value of C p is 1.0, the process
is just within the specifications. To be exact, 99.73% of the process output is within
specification. If the value is less than 1.0, the process variability is larger than the vari-
ability allowed by specifications, and so is producing rejects; if it is larger than 1.0, the
process variability is smaller than the variability allowed in the specifications, and so
nearly all of the products will be within specification. In general, the larger the value
of Cp, the better is the process, as shown in Figure 4.20.
By substituting an estimate σ̂ for σ in the above formula, we obtain a point estimate for Cp. We should realize that this point estimate is just an observation of a statistic and is
subject to sampling variability. That is, if we obtain estimates of Cp from different samples
from the same process, the estimates will all be different. However, if the sample size
is large (≥100), then the variability can be considered negligible (see Krishnamoorthi,
Koritala, and Jurs 2009), and the estimate can then be used with confidence.
Customers usually stipulate that the (estimated) value of Cp should be at least 1.33 in
order to make sure that the process variability is well within specification. This allows
for the fact that most processes do not stay at one central location all the time and have
a tendency to drift around the target. The requirement that Cp = 1.33 is meant to ensure
that the product units will remain within specification even when some (small) change
occurs in the process mean. The 1.33 requirement will also provide for possible error
caused by sampling variability in the value of Cp computed from sample data.
We should realize when estimating Cp that, if the sample size is small, the estimate
is not reliable. Therefore, it is advisable to estimate the capability index from a sample
size of at least 100, which is equivalent to 25 subgroups of four units each if a control
chart is used to monitor the process. If such a large sample is not available, we have to
resort to a confidence interval for the index, which is discussed in Chapter 5.
Example 4.10
[Figure 4.21 Two processes having the same Cp compared with specification limits LSL = 40 and USL = 64: (a) a process centered within the specifications; (b) a process that is off-center.]
Solution
Using the formulas given above, both processes are found to have the same Cp, in spite of the fact that they are different in terms of their capabilities. One produces almost all values inside specification, whereas the other produces a sizeable proportion of values outside specification.
From the above example, we see that the C p index is not able to recognize the lack
of centering in the process shown in Figure 4.21b; it only evaluates the process vari-
ability in comparison to the specification variability. This drawback of the Cp index is
rectified in the next capability index, Cpk, which is defined as:
Cpk = Min[(USL − μ), (μ − LSL)] / 3σ

[Figure 4.22 How the Cpk index measures the capability of a process, illustrating the cases Cpk < 1.0 and Cpk > 1.0.]
As with Cp, customers usually require a value of at least 1.33 for Cpk to cover possible drift in the
process center and provide for possible sampling error.
Example 4.11
A process that has been brought in-control using control charts has process average X̄ = 41.5 and R̄/d2 = 0.92. If the specification for the process calls for values between 39 and 47, calculate the capability indices Cp and Cpk for this process in its
present condition.
Solution
Cp = (47 − 39) / (6 × 0.92) = 1.45

Cpk = Min[(47 − 41.5), (41.5 − 39)] / (3 × 0.92) = Min[5.5, 2.5] / (3 × 0.92) = 2.5 / (3 × 0.92) = 0.91
The process passed the C p test (≥1.33) because of small variability, but it failed the
Cpk test (<1.33) because of lack of centering, as shown in Figure 4.23.
The Cpk is a superior index for measuring the capability of a process, because it
checks on process centering as well as process variability. Often, however, it helps to
compute the value of both Cp and Cpk and to make the comparison, as done in the above
example. Such a comparison will reveal the condition of the process with respect to its
variability and centering and help in determining what needs to be done to improve
the process capability.
Note: The Cpk is always less than or equal to Cp, and Cpk = Cp when the process is centered with respect to the specifications. The value of Cpk can even turn negative if the process center lies outside the specification limits; a negative Cpk means the process center is missing the specifications entirely.
[Figure 4.23 The process of Example 4.11, centered at 41.5, compared with the specification limits 39 and 47 (specification midpoint 43).]
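A minimal sketch of the two indices, checked against the numbers of Example 4.11 (the function names are illustrative, not from the text):

```python
def cp(usl, lsl, sigma):
    """Cp compares the specification width with the natural process spread 6*sigma."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk uses the distance from the process center to the nearer specification limit."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Example 4.11: specification 39 to 47, process center 41.5, sigma estimated as 0.92
usl, lsl, mean, sigma = 47, 39, 41.5, 0.92
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")        # about 1.45
print(f"Cpk = {cpk(usl, lsl, mean, sigma):.2f}") # about 0.91
```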
When the output of a process is an attribute, the capability requirements are specified
in terms of the maximum number of defectives that will be tolerated in the output of
the process. Specifications such as “not more than 1% defective,” “not more than 10
errors per 1000 opportunities,” or “not more than 50 parts per million (ppm) defec-
tive” are commonly used. If attribute control charts are in use for controlling a process,
then the capability of the process can be read directly from the charts. The CL of the
P-chart gives the average proportion of defectives in the process, and the CL of the
C-chart gives the average number of defects per unit in the process. These quantities
can be compared with the capabilities required by the customer or the capability goals
established by the producer.
We see that the capability of a process with attribute output is measured differently
from the capability of a process with measurable output. Such a lack of uniform measures
of capability applicable to both measurable and attribute outputs gave rise to the creation
of capability measures in “number of sigmas” by the Motorola statisticians (Motorola
Corporation 1992). They proposed to measure the capability of a process by the distance
at which specifications are located from the process center measured in number of stan-
dard deviations (σs) of the process. For processes with measurable outputs, which can be
assumed to be normal, it can be calculated if the mean and standard deviation are known
or estimated. For processes that produce attribute output, the capability is expressed as
the number of sigmas of a normal process that produces the same proportion of defectives
as the process in question. Suppose that a process is producing 2% defectives; we first find the normal process that produces 2% defectives. It can be shown (using the normal tables) that a normal process with specifications at 2.327σ on either side of the center produces a total of 2% defectives. So, the attribute process that produces 2% defectives has a capability of 2.327σ. Thus, this method provides a common measure for evaluating the process capability of service processes, where most outputs are attributes, and of manufacturing processes, where most outputs are measurements. A more detailed description of how process capability is designated using
the number of sigmas, with 6-sigma as the benchmark, is given in Chapter 5.
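The conversion from a proportion defective to a number of sigmas can be done with the inverse normal distribution, mirroring the 2% → 2.327σ calculation above. This simple sketch assumes a centered process and does not include the 1.5σ shift that is sometimes built into the Motorola convention.

```python
from statistics import NormalDist

def sigma_capability(p_defective: float) -> float:
    """Number of sigmas at which two-sided specification limits of a centered normal
    process would sit to yield the given total proportion defective."""
    # Half of the defectives fall in each tail of a centered process
    return NormalDist().inv_cdf(1 - p_defective / 2)

print(f"{sigma_capability(0.02):.3f} sigma")    # about 2.327 for 2% defective
print(f"{sigma_capability(0.0027):.3f} sigma")  # about 3 sigma for 0.27% defective
```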
4.7 Measurement System Analysis

The measurement system analysis is done to make sure that measuring instruments of
adequate capability are available for taking measurements on the product characteris-
tics and process variables of a given process. The term “capability” is explained further
in this section; it essentially means that the instrument will be able to measure the
process variable or product characteristic with reasonable accuracy and precision. In
simpler words, capability refers to the ability of the instrument to measure the prod-
uct characteristics and process variables truthfully. The term “measurement system”
is used to imply that sometimes a scheme, a system, including several instruments,
appliances, operators, and a procedure is needed to generate a measurement. We will
use the terms “instrument,” “gage,” and “measurement system” to mean the same. The
main reference for this section on measurement system analysis is the Measurement
System Analysis (MSA)—Reference Manual published by the big three automakers
(Chrysler LLC, Ford Motor Company, and General Motors Corporation 1995 &
2010). Definitions and standards are based on the two editions of this manual.
4.7.1 Properties of Instruments
An instrument is characterized first by its accuracy, that is, how small its bias is, and by its precision, that is, how small its variability is about the average reading.

[Figure 4.24 Combinations of bias and variability in an instrument: large bias with large variability, large bias with small variability, small bias with large variability, and small bias with small variability.]

[Figure 4.25 Stability and linearity of an instrument: with no linearity problem, the bias remains the same over the range of values; a stable system shows no change in precision over time, whereas an unstable system's precision changes over time.]
In addition, two other properties of an instrument, stability and linearity, are also
used. “Stability” refers to how the precision of an instrument remains consistent over
time. “Linearity” refers to how the bias remains the same over the range of possible
values of the measurement. Figure 4.25 shows graphically the definition of stability
and linearity. Finally, “resolution” refers to the smallest division of the unit that the
instrument is designed to measure. For example, if the smallest division on a foot-rule
is 1/16 of an inch, then the resolution of the foot-rule is 1/16 in.
So, the properties of an instrument can be listed as:
1. Accuracy—how small the bias is
2. Precision—how small the variability is from the average
3. Stability—how constant the variability is over time
4. Linearity—how constant the bias is over the possible values
5. Resolution—the smallest value the instrument can measure
The following discussion explains how these properties of an instrument are evalu-
ated and how they can be controlled in order to provide a reliable set of measurements
for a production process. A brief discussion on measurement standards is provided
before the methods of evaluating measurement systems are described.
4.7.2 Measurement Standards
The standards for characteristics, such as length, weight, hardness, color, and so on, are
maintained by standards organizations of individual nations and are called “national
standards.” For the United States, for example, the national standards are maintained
by the National Institute of Standards and Technology (NIST), headquartered at
Gaithersburg, Maryland. Similarly, every industrialized nation has a national standards
organization of its own that maintains the national standards for that nation. Hence, the
standard for any unit such as ft., kg., or °C in a country is defined by that maintained at
the standards organizations of that country, and all the instruments that measure these
units must be compared with the national standards for evaluating their accuracy and
precision. This is done through a hierarchy of intermediate standards that intervene
between the national standards and the instruments on the shop floor. Figure 4.26 shows the hierarchical structure through which this comparison takes place.
The process of comparing a lower-level standard with a higher-level standard and
making corrective adjustments to make it conform (to the degree possible) to the
higher-level standard is called “calibration.” Instruments and standards should be kept
calibrated against next higher-level standards to maintain their correctness or integ-
rity. The ability of an instrument to establish its correctness, or integrity, by remaining
in calibration with the national standard even if through the sequence of intermediate
standards is called “traceability.”
Referring to Figure 4.26, the primary standard is one that is directly compared and
calibrated against the national standard. The primary standards are owned by organi-
zations, which in the United States can be another government agency or private labo-
ratory. Primary standards are expensive, and they are too delicate for regular, routine
calibration purposes. Therefore, another set of intermediate standards—namely, sec-
ondary standards—are created by the labs and used for routine calibration. Primary
and secondary standards owned by government or private labs are maintained in
environmentally controlled premises under the supervision of specialists in metrol-
ogy. Working standards are owned by manufacturing companies and are periodi-
cally calibrated against the secondary standards. These working standards, which are
also maintained in environmentally controlled labs in manufacturing facilities, are
used to calibrate the shop-floor instruments. In a well-maintained quality system,
every instrument used in the production processes must be kept calibrated so as to maintain its traceability to the national standard.

[Figure 4.26 The hierarchy of measurement standards: national standard, primary standard, secondary standard, working standard, and shop instrument.]
4.7.3 Evaluating an Instrument

An instrument chosen for measuring a product characteristic or process variable should have the following properties:

1. Good resolution suitable for the purpose for which the instrument is used
2. Near-zero bias
3. Measurement variability much smaller than the variability in the product it is
used to measure
4. Variability that is small compared to the tolerance allowed in the product
5. Stability in precision
6. No linearity problem
If the same instrument is used to take different measurements, the above requirements should be met with respect to the most demanding measurement.
4.7.3.2 Evaluation Methods The methods (or testing procedures) for assessing the
properties of instruments and the standards for their acceptability are described
below. Many of the recommendations made here on instrument capability are those
given by the Measurement Systems Analysis (MSA)—Reference Manual (Chrysler LLC
et al. 1995/2010).
This manual recommends that before starting to test the acceptability of an instru-
ment, the question of whether the measurement in question is needed at all must
be considered. There are many measurements being taken in production shops that
serve no useful purpose. There is no use for elaborate checking of the instrument if
the measurement it provides is not needed. An examination of how the results from
the measurement are used—and their usefulness in determining the condition of a
process or the acceptability of a product—would help in deciding the necessity of the
measurement. Also, a preliminary evaluation should be made of whether the instru-
ment in question is affected by the environment in which the measurement is taken.
Ambient temperature and humidity, lighting, as well as gas and froth accumulation,
are some of the factors known to affect measuring processes. If any such factor is seen
to influence the measurement, then experiments to evaluate the instrument should be
designed taking those factors into consideration.
The procedures described below are generic procedures, and they may have to be
modified to suit a given instrument and set of special circumstances that may prevail
around its use. These procedures should be written up as part of an organization’s instru-
ment calibration program. These procedures are collectively called the “Gage R&R”
study, referring to the repeatability and reproducibility study of a gage for evaluating its
precision. Since evaluation of its precision is the major part of the study of an instrument,
the whole process of evaluating an instrument is often called the “Gage R&R study.”
So, the Gage R&R study is made to evaluate the following properties of an
instrument:
1. Resolution
2. Bias (accuracy)
3. Variability (precision):
a. From repeatability
b. From reproducibility
4. Stability
5. Linearity
[Figure 4.27 (a) Checking the resolution of an instrument with resolution 0.001: X̄-chart with CL = 1.144, UCL = 1.146, LCL = 1.142, and R-chart with R̄ = 0.002067, UCL = 0.005320, LCL = 0. (b) Checking the resolution of an instrument with resolution 0.01: X̄-chart with CL = 1.144, UCL = 1.145, LCL = 1.142, and R-chart with R̄ = 0.001333, UCL = 0.003431, LCL = 0.]
A lack of adequate resolution can be recognized by too many X̄ values falling outside the limits on the X̄-chart. The following rules are recommended to recognize a lack of sufficient resolution in an instrument when using X̄- and R-charts:
1. If the R-chart shows very few possible values (≤ 3) falling inside the control
limits, conclude that the instrument has inadequate resolution.
2. If the R-chart shows more than three possible values inside the control lim-
its but more than one-fourth of the values are zero, then conclude that the
instrument has inadequate resolution.
In the example shown in Figure 4.27b, the resolution of 0.01 seems to be inad-
equate on both counts, whereas the resolution of 0.001 shown in Figure 4.27a seems
to be satisfactory on both counts. Five possible values for R fall within the limits of the
R-chart when the resolution is 0.001, whereas no R value falls inside the limits when
the resolution is 0.01. Also, there are too many zero values for R with 0.01 resolution.
Problems relative to inadequate resolution are easily resolved. Either an instrument
with finer divisions is sought or, if the measurements are taken by rounding off the
readings, the measurements are rounded to a larger number of decimal places. A rec-
ommendation made by the Measurement Systems Analysis (MSA)—Reference Manual
(Chrysler LLC et al. 1995/2010) is to choose an instrument with a resolution smaller
than 1/10 of the spread in process variability represented by 6σp, where σp represents
the actual variability in the product.
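The two rules of thumb above can be expressed as a short check on the R values; the data below are made up purely for illustration.

```python
def resolution_adequate(r_values, r_ucl):
    """Apply the two rules of thumb for judging instrument resolution from the R-chart:
    1. Three or fewer distinct possible R values inside the limits -> inadequate.
    2. More than three distinct values, but over one-fourth of the R values are zero
       -> inadequate."""
    distinct_inside = {r for r in r_values if r <= r_ucl}
    if len(distinct_inside) <= 3:
        return False
    if sum(1 for r in r_values if r == 0) > len(r_values) / 4:
        return False
    return True

# Ranges recorded to 0.01 (coarse) vs. 0.001 (finer); illustrative values only
coarse = [0.00, 0.01, 0.00, 0.00, 0.01, 0.00, 0.01, 0.00, 0.00, 0.01]
fine = [0.002, 0.001, 0.003, 0.002, 0.004, 0.001, 0.002, 0.003, 0.002, 0.001]
print(resolution_adequate(coarse, r_ucl=0.0034))  # False: resolution too coarse
print(resolution_adequate(fine, r_ucl=0.0053))    # True
```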
Example 4.12
The following 10 readings were obtained by measuring a 1-in. gage block (a working
standard) by a caliper. What is the bias of the caliper in this range? (A gage block is
used in labs as a working standard for measuring length. It is made of material that
resists wear and tear and is not easily affected by temperature changes.)
No. 1 2 3 4 5 6 7 8 9 10
Readings (in.) 1.032 1.045 1.035 1.030 1.030 1.035 1.040 1.035 1.040 1.035
Solution
The true value (TV) corresponding to these readings is 1.00, since the gage block is
a tool-room instrument and its designated dimension is 1.00 in.
X̄ = 1.0357
s = 0.0048
Bias = (X̄ − TV) = 1.0357 − 1.000 = 0.0357 in.
This means that the caliper gives, on average, values that are larger than the true
value by 0.0357.
When the measurement destroys the unit being measured, it will not be possible to
make repeated measurements on the same unit; a different approach is necessary, as
shown in the Example 4.13.
Example 4.13
A sand tester for testing the tensile strength of sand mixes is being evaluated.
Twenty dog-bones were made from the same batch of (well-mixed) sand and were
divided randomly into two groups of 10 each. One group was tested by a standard
tester (tool-room instrument) and the other group was tested by the tester under
evaluation. The following readings were recorded. Calculate the bias in the shop
tester, the tester under evaluation.
No.                        1      2      3      4      5      6      7      8      9     10     X̄
Standard tester (lb.)  262.2  261.5  267.7  260.9  262.2  261.7  269.3  272.4  263.2  265.2  264.6
  (True Value)
Shop tester (lb.)      279.9  267.1  287.1  256.9  259.9  260.6  263.2  270.2  275.2  261.9  268.2
Solution

Bias = X̄(shop tester) − X̄(standard tester) = 268.2 − 264.6 = 3.6 lb.

That is, the shop tester reads, on average, about 3.6 lb. higher than the standard tester.
4.7.3.5 Variability (Precision) Instrument variability, or error, could come from two
sources:
1. Repeatability
2. Reproducibility
Repeatability error comes from the instrument hardware that prevents the instru-
ment from giving identical readings when measuring the same unit of a product
repeatedly using the same operator.
Reproducibility error arises from the instrument allowing different operators to do
the measurement differently, thus preventing the instrument from giving identical
readings when measuring the same unit using different operators.
Repeatability error is quantified by the standard deviation of the readings from the
instrument taken repeatedly on the same unit by the same operator. Reproducibility
error is quantified by the standard deviation of the readings taken by different opera-
tors on the same unit. Example 4.14 below shows how we design and conduct an
experiment to evaluate the repeatability and reproducibility errors of an instrument.
Example 4.14
Solution
Repeatability Error
The difference among readings on the same unit by any one operator is caused by
the repeatability error. The standard deviation of the three readings on any one
unit taken by any one operator provides an estimate of the repeatability error. For
example, from the three readings of Unit 1 by Operator A, the repeatability error
can be estimated as follows. Calculate the standard deviation s of the three readings
(already done in the table) and divide it by the correction factor c4, for n = 3, taken
from Table A.4 in the Appendix. We get:
σ̂e = s/c4 = 0.0006/0.8862 = 0.00068
where the subscript “e” in σe stands for “equipment” or hardware. [The hat ( ∧ )
indicates that it is an estimate.]
The repeatability error can also be estimated from the range R of the three read-
ings as:
σ̂e = R/d2 = 0.001/1.693 = 0.0006
The value of the correction factor d2 is obtained from Table A.4 in the Appendix
for n = 3.
Forty-five such estimates, however, are possible from the 45 sets of readings made by three operators on the 15 units of the product. These can be averaged to obtain a better estimate for the repeatability error. Using s̄, the average value of s from the 45 samples, we have s̄ = (0.0011 + 0.00154 + 0.00118)/3 = 0.00126. Therefore,

σ̂e = s̄/c4 = 0.00126/0.8862 = 0.00142

Similarly, using the average range R̄ = 0.00236,

σ̂e = R̄/d2 = 0.00236/1.693 = 0.00139
It is interesting to see that the estimates obtained from the two methods, one
using s̄ and the other using R̄, agree so closely. We will take the repeatability error
as σ̂e = 0.0014.
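The same arithmetic can be expressed in a few lines of Python; this is our own sketch, with the c4 and d2 values for n = 3 taken from Table A.4:

```python
# Repeatability (equipment) error estimated two ways, using the correction
# factors c4 = 0.8862 and d2 = 1.693 for subgroups of n = 3 (Table A.4).
C4_N3 = 0.8862
D2_N3 = 1.693

# The three terms are the average s values of Operators A, B, and C in the example
s_bar = (0.0011 + 0.00154 + 0.00118) / 3
print(f"s-bar = {s_bar:.5f}")                         # about 0.00127
print(f"sigma_e from s-bar/c4: {s_bar / C4_N3:.5f}")  # about 0.0014

r_bar = 0.00236                                       # average range of the 45 subgroups
print(f"sigma_e from R-bar/d2: {r_bar / D2_N3:.5f}")  # about 0.0014
```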
Reproducibility Error
For calculating the reproducibility error, we first calculate the x̄ values for each of
the units under each operator. There are 45 x̄ values. The reproducibility error of
the instrument causes the difference in x̄ values obtained by different operators
for the same unit. We will assume that the repeatability error has been neutralized
in each x̄ as a result of averaging the three readings. Thus, the range of the three
x̄ values from the three operators, for any one unit of the product, will provide an
estimate for the reproducibility error. For example, R(x̄) from Unit 1 = (1.141 −
1.13633) = 0.00467, and the estimate of the reproducibility error is:
σ̂o = R(x̄)/d2 = 0.0047/1.693 = 0.0028
where the subscript “o” in σo stands for “operator.” (d2 was chosen from Table A.4 for
n = 3 because the range came from a sample of three x̄ values.)
There are 15 values of R(x̄) that can be averaged to provide R̄(x̄) = 0.0048,
and the estimate for σo is:
σ̂o = R̄(x̄)/d2 = 0.0048/1.693 = 0.0028
It is just a coincidence that the estimate from Unit 1 is the same as from the aver-
age of all 15 units.
Gage Error
Having obtained estimates for repeatability and reproducibility errors, the total
variability from the measuring system, from both repeatability and reproducibility,
can be obtained as the vector sum of the equipment and operator errors:
σg = √(σe² + σo²)
where the subscript “g” stands for “gage.” We can call it the “gage” variability.
For the example, σ̂g = √(0.0014² + 0.0028²) = 0.0031.
As a general guideline, the instrument (gage) variability should not be more than 10% of
the product variability. In this example, the instrument variability is larger than the
variability in the product! Obviously, this is not an acceptable situation. On further
analysis, if we consider only the repeatability error, the ratio σe/σp = 0.0014/0.0027 = 0.52;
that is, the instrument (equipment) variability is about 50% of the product variability. If we
calculate the ratio σo/σp = 0.0028/0.0027 = 1.04, we see that it is the reproducibility error that
gives rise to the inflated variability in the instrument. The reproducibility error can be
reduced by making the measurement procedure used by the operators more uniform
through proper written instructions and training. Also, some fool-proofing method
that will prevent the different operators from using the instrument differently can be tried.
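The following sketch (ours; the variable names are not from the text) pulls the pieces of the example together, combining the repeatability and reproducibility estimates into the gage variability and forming the ratios discussed above; the product variability σp = 0.0027 is the value used in the example:

```python
import math

# Estimates from the example (in inches)
sigma_e = 0.0014   # repeatability (equipment) error
sigma_o = 0.0028   # reproducibility (operator) error
sigma_p = 0.0027   # product (part-to-part) variability used in the example

# Total gage variability is the vector (root-sum-of-squares) combination
sigma_g = math.sqrt(sigma_e**2 + sigma_o**2)

print(f"Gage variability sigma_g = {sigma_g:.4f}")        # about 0.0031
print(f"sigma_g / sigma_p = {sigma_g / sigma_p:.2f}")     # greater than 1: unacceptable
print(f"sigma_e / sigma_p = {sigma_e / sigma_p:.2f}")     # about 0.52
print(f"sigma_o / sigma_p = {sigma_o / sigma_p:.2f}")     # about 1.04
```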
The procedure used in the example above closely follows that provided in the
Measurement Systems Analysis—Reference Manual (Chrysler LLC et al. 1995/2010)
with some simple modification. Whereas the manual uses a d 2* as the correction fac-
tor to estimate the standard deviation from the range R, we use the factor d2 for that
purpose. The d 2* , available in special tables, is equal to d 2 when the number of trials
(number of units inspected x number of trials per unit) exceeds 15. We assume that at
least 15 trials will be available in the data. Also, we have assumed that the estimate for
reproducibility error (σo) calculated as above does not contain any of the repeatability
error as it has been neutralized by averaging of the trial readings. The procedure in the
reference manual provides a correction to remove the possible contamination of the
reproducibility error by the repeatability error.
There are several procedures available in the literature to make a Gage R&R study
including one using ANOVA. We chose this procedure, referred to as the “ X and R”
method, because it is based on first principles and the rationale for most of the steps
is easy to follow.
In the above example, Operator C was a veteran machine operator, and Operators
A and B were professors who taught statistics but could claim no experience in using
the caliper. So, the instrument was used differently by the different “operators,” which
explains the high level of reproducibility error. The repeatability error comes from
the instrument hardware. If it is a caliper, then it could be due to the friction in the
sliding surfaces or unevenness of the knife-edges. If it is a hydraulic tensile tester, it
could be from the temperature rise in the fluid or the lack of a positive grip on the
specimen. Awareness of the existence of the error and understanding its source would
help in minimizing such errors and thus improving the capability of the instrument.
Generally speaking, smaller repeatability error comes with more expensive equipment
because protection needs to be built in to prevent the various causes contributing
to this error. Understanding the existence of the instrument variability, analyzing
the possible sources and finding solutions to minimize the variability would increase
instrument capability, which in turn will improve product quality.
4.7.3.6 A Quick Check of Instrument Adequacy A quick way of comparing the vari-
ability in the instrument with the variability in the product, and thus of assessing
the adequacy of the instrument, is as follows. Take repeated measurements using the
instrument on several units of a product using one operator, as shown in Table 4.10 for
Operator A. Make X - and R-control charts, as shown in Figure 4.28. The R-chart in
this figure tracks the variability in the instrument, not the variability in the product,
since the three measurements from which an R value is calculated come from the same
unit; hence, the R-chart will show if the variability in the instrument is consistent,
which is also called the “stability.” If R values fall outside the limits, it would mean
that the variability is not stable. If R values show a trend over time, or show wide vari-
ability, it should lead to an investigation as to why the variability changes over time.
The X -chart in this figure is an interesting chart. The limits for the X -chart are cal-
culated from the variability of the instrument—not the variability of the product. The
differences in the values of X , however, represent only the variability in the product.
Therefore, this X -chart presents a comparison of product variability and instrument
variability. If the variability in the instrument is small compared to the variability in the
product, then the band width of the control limits will be small, causing a large number
of X values to fall outside the limits. Therefore, if there is a large number of X values
outside the limits—that is, the X -chart shows that the “process” is not-in-control—then
it indicates a desirable situation: That the instrument variability is much smaller com-
pared to product variability. According to the rules recommended by the Measurement
Systems Analysis (MSA)—Reference Manual (Chrysler LLC et al. 1995/2010) at least
50% of X values should fall outside the control limits in the X -chart constructed as
above for the instrument to be considered to have adequate capability.
We should remember that lack of adequate resolution of the instrument could also
cause a large number of X values to fall outside the limits in the X -chart constructed
above. The resolution question must be first resolved; that is, the instrument chosen
should have “good” resolution before the experiment is done to compare the product
and instrument variability.
[Figure 4.28: X̄- and R-charts made from the repeated measurements by Operator A.
X̄-chart: UCL = 1.146, Mean = 1.144, LCL = 1.142, with many points outside the limits.
R-chart: UCL = 0.005320, R̄ = 0.002067, LCL = 0.]
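A rough sketch of this quick check in Python is shown below; the data here are made-up placeholders standing in for repeated measurements such as those of Table 4.10, and the A2 factor is the Table A.4 value for n = 3:

```python
# Quick check of instrument adequacy: with X-bar limits computed from the
# *instrument* variability (ranges of repeated readings on the same unit),
# an adequate gage should push at least 50% of the X-bar points outside the limits.
A2_N3 = 1.023   # factor for subgroups of n = 3 (Table A.4)

def adequacy_check(xbars, ranges):
    """xbars, ranges: X-bar and R of the repeat readings, one pair per unit measured."""
    xbarbar = sum(xbars) / len(xbars)
    rbar = sum(ranges) / len(ranges)
    ucl = xbarbar + A2_N3 * rbar
    lcl = xbarbar - A2_N3 * rbar
    outside = sum(1 for x in xbars if x > ucl or x < lcl)
    return ucl, lcl, outside / len(xbars)

# Placeholder data: 15 unit averages and ranges from one operator's repeat readings
xbars = [1.1410, 1.1450, 1.1400, 1.1483, 1.1443, 1.1473, 1.1400, 1.1410,
         1.1453, 1.1500, 1.1417, 1.1430, 1.1423, 1.1483, 1.1420]
ranges = [0.001, 0.002, 0.003, 0.002, 0.001, 0.002, 0.003, 0.002,
          0.001, 0.002, 0.003, 0.002, 0.002, 0.002, 0.003]

ucl, lcl, frac = adequacy_check(xbars, ranges)
print(f"UCL = {ucl:.4f}, LCL = {lcl:.4f}, fraction outside = {frac:.0%}")
print("Gage adequate" if frac >= 0.5 else "Gage not adequate")
```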
4.8 Exercise
4.8.1 Practice Problems
4.1 The following data were collected on the amount of deodorant in cans, in
grams, filled in the filling line of a packaging company. Prepare X - and
R-charts for the data, and comment on the condition of the process.
TIME 8AM 9AM 10AM 11AM 12PM 1PM 2PM 3PM 4PM 5PM 6PM 7PM 8PM 9PM 10PM 8AM 9AM 10AM 11AM 12PM
1 25.0 24.3 25.2 24.3 26.2 24.1 25.1 24.2 24.1 25.1 24.6 23.9 25.1 24.8 24.4 23.9 24.5 24.4 25.2 23.1
2 24.5 25.1 24.0 24.0 25.1 24.0 25.2 24.1 24.3 25.4 24.9 23.7 27.3 24.4 24.3 23.4 24.8 24.5 25.6 23.2
3 24.5 24.2 25.1 24.3 25.2 24.4 25.0 24.3 24.1 25.2 24.1 24.2 27.3 24.1 24.9 23.8 24.8 24.8 25.3 23.4
4 25.1 24.1 24.2 24.0 24.2 25.0 25.4 24.2 25.0 24.4 24.0 24.1 27.1 24.3 25.0 24.0 24.9 24.6 25.2 23.4
4.2 For the data in Problem 4.1, prepare X - and S-charts, and compare the
results with those from the X - and R-charts.
4.3 To control the weight of jumbo ravioli packaged in a food processing plant,
the following data were collected on the weight of four packages of ravioli,
approximately every 30 minutes. Prepare X - and R-control charts to monitor
the weights, and comment on the results. Only 15 subgroups were available.
The weights are in ounces. (Data from Schmillan and Johnson 2001.)
SUBGROUP 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Time 7:30 8:15 8:45 9:15 9:45 10:15 10:45 11:15 11:45 12:15 12:45 1:30 1:55 2:15 2:45
Package 1 35.1 34.4 34.7 39.0 35.1 36.1 34.8 35.7 35.8 36.5 35.2 35.3 35.3 36.9 35.7
Package 2 36.5 33.3 35.7 35.5 37.1 35.7 34.5 34.6 36.4 37.5 37.1 34.6 36.3 35.3 36.9
Package 3 34.3 33.9 35.6 34.3 34.8 36.2 35.8 36.7 34.2 36.3 36.4 33.8 37.4 35.8 35.3
Package 4 34.3 34.6 34.9 36.4 35.4 35.6 36.6 36.6 34.9 37.6 36.1 36.3 36.5 36.3 37.7
4.4 For the data in Problem 4.3, prepare X - and S-charts, and compare the
results with those from the X - and R-charts.
4.5 A sample of 30 bottles was taken at the end of a filling line every 30 minutes;
inspected for cleanliness, proper labeling, location of traceability code, and so
on; and classified as either good or defective. The table below shows the
data on 25 such samples. Prepare a P-chart and comment on the results.
SUBGROUP 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
No. of 0 0 4 3 2 0 1 2 0 3 0 4 1 1 1 2 1 0 0 1 1 0 2 1 0
defectives
4.6 In a fabrication shop where cabs are built for different models of bulldozers,
the cab shells are inspected for proper welds before painting. The table below
gives the total number of welds in each cab and the number found to be
defective. Recommend an appropriate control chart for monitoring the welds,
and compute the control limits.
Cab no. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
Total no. of welds 46 62 80 75 32 69 45 48 46 92 121 110 55 64 32 95 68 42 99 101 66 75
No. of defectives 7 3 6 9 5 11 10 5 6 6 8 11 2 2 3 4 12 9 14 16 8 6
4.7 The number of printing errors per page in a newspaper was recorded from
a randomly selected page, each day, for 20 days. The data are to be used
to prepare a control chart to monitor and control printing mistakes in the
newspaper. Recommend a suitable control chart, and compute the control
limits.
DAYS 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
No. of errors 7 6 1 2 4 1 8 14 2 9 10 2 4 9 21 3 5 4 2 5
4.8 The inspection crew of a state department of transportation counts the num-
ber of potholes per mile on a major highway as part of their quality assurance
program. The data below show the number of holes on 32 randomly selected
miles. Is the pothole-producing process in-control? In other words, does the
incidence of potholes seem to be consistent from mile to mile, or do some
miles have significantly more potholes than the others? (The control chart is
really a series of significance tests, a test being made each time a sample is col-
lected and checked.)
Mile no. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
No. of potholes 12 8 6 4 2 21 3 2 8 13 25 0 1 5 4 8
Mile no. 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
No. of potholes 11 31 3 0 1 7 0 7 5 2 9 13 0 2 12 15
4.9 The data in the table below represent the production quantities of paper
bags, and the number rejected each hour for 23 hours in a paper mill.
What type of control chart would be used for monitoring and controlling
the production of defective paper bags? Calculate the control limits for
the chart(s).
Hour 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Production 1200 1400 2100 1800 1600 2000 1300 1200 1100 1000 900 1300 1400 2300 2100 2200 1500 1700 2300 1800 1700 1600 1400
No. of 34 56 43 44 32 67 43 56 87 55 50 120 87 90 32 44 23 21 87 34 30 23 55
defectives
4.10 A group of IE majors was intent on helping a friend improve the quality
of service she provides to customers in her house-cleaning business. The IE
group visited randomly selected houses after cleaning crews had completed
their jobs and obtained the following data. The data show the number of
rooms in each of the houses visited and the number of “defects” that were
found. The data also indicate the ID of the crews (A, B, or C) that cleaned
the houses. Is the cleaning process in-control? Is the cleaning process capable
if the cleaning business assures its customers that there will be “zero” defects
in the houses after the crews have done the cleaning?
Home 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
No. of rooms 8 6 10 7 6 6 8 9 7 10 11 7 8 9 9 7 7 6 6 8 7 11 12 9 8
No. of defects 1 0 1 0 1 0 0 0 1 1 5 0 1 4 0 0 0 0 1 0 1 1 2 1 1
Crew ID B B C A B A C C B C C B A C A A A C B C A C C A B
4.11 Calculate the capability indices Cp and Cpk for the process in Problem 4.1.
The LSL and USL are 24 and 26 g, respectively; the target being at the
center of the specification. Use R /d2 to estimate the process standard devia-
tion. Make sure that the estimates for the process mean and process standard
deviation are made from a process that is in-control.
4.12 Assuming the specifications for the weight of ravioli packages in Problem
4.3 are 31.5 and 32.3 oz., and the target is at the center of the specification,
find the values of Cp and Cpk. Make sure that the estimates for the process
parameters are made from a process that is in-control.
4.13 The data below are the thicknesses of rubber sheets measured in millimeters
by a thickness gage. Two measurements have been taken of each specimen by
each of the two operators. The specifications are at 0.50 and 1.00 mm.
a. Calculate the reproducibility and repeatability errors (standard devia-
tions), and separate the gage variability and part variability.
b. Verify the adequacy of the gage in resolution and precision.
OPERATOR 1 OPERATOR 2
PART TRIAL 1 TRIAL 2 TRIAL 1 TRIAL 2
1 1.00 1.00 1.04 0.96
2 0.83 0.77 0.80 0.76
3 0.98 0.98 1.00 1.04
4 0.96 0.96 0.94 0.90
5 0.86 0.83 0.72 0.74
6 0.97 0.97 0.98 0.94
7 0.63 0.59 0.56 0.56
8 0.86 0.94 0.82 0.78
9 0.64 0.72 0.56 0.52
10 0.59 0.51 0.43 0.43
4.14 The table below shows part of the data from Table 4.10. Complete a gage
R&R study using only this part of the data, and compare the results with
those of Example 4.14.
4.15 Surgical instruments, after being used in a surgery, undergo a process of decon-
tamination and sterilization (D&S Process) before they are used for the next
surgery. The D&S process involves both machine washing as well as hand wash-
ing followed by visual inspection. The visual inspection which includes checking
for leftover contamination as well as missing instruments provides data on the
effectiveness of the D&S process. Data were collected over a 12-month period
on the two types of “errors” and are shown in the table below.
The hospital is interested in establishing a program to reduce these two
types of errors. Use appropriate quality control tools to set up a system for the
hospital where these errors can be monitored and possibly reduced.
Assume that a “large” number of instruments are being used.
4.16 Effective hand hygiene practices among healthcare workers prevent infections
in healthcare settings. Compliance to hand hygiene practices, however, is dif-
ficult to enforce or monitor due to individual behaviors, and organizational
4.8.2 Mini-Projects
Mini-Project 4.1 The table below shows data on the breaking strength of water-jacket
cores used in making cylinder-head castings in a foundry. The jacket core is in three
sections, identified as the B, C, and R sections. In other words, the three sections
together make one core for one cylinder head. These cores, which are made of sand
mixed with bonding material, should have sufficient strength to withstand handling
stresses and the stress generated when hot iron is poured around them in the mold.
Baking them in a hotbox improves their strength, but overbaking will make them
brittle. The three sections of the core are baked at different locations in a hotbox, leav-
ing room for suspicion that the three sections may not be baked uniformly and so may
not be of the same strength. It is essential that these three parts of the core are equally
strong, because the strength of the total core is equal to that of the weakest section.
The data were collected to verify if the cores have consistently “good” strength.
Two sets (each set contains a B, C, and R core) of cores were taken every half-hour
(in the night shift) and were “broken” in a transverse testing machine, which gave the
beam strength of the cores in kilograms. After 3:00 a.m., however, only one set of
cores was tested, because the production people were running short of cores and did
not want to spare cores for breaking.
Analyze the data and see if the process is producing consistently good cores. The
strength should be consistent both among the three sections and from time to time.
What should the CL and limits of the control chart be for future control? What would
your suggestion be to improve the quality of the process?
What is the current process capability? Note that no specifications are given and
you may have to come up with an appropriate set of specifications for the strength. The
strength of the cores is a process parameter. The foundry does not sell cores; it sells
the iron that is produced using the cores. Strength specifications are created to assist
in maintaining the consistency of the process. Such specifications are usually created
as natural tolerance limits (NTLs) of the process, which are obtained as ( X ± 3σˆ ).
The capability can be described by the NTLs.
Mini-Project 4.2 The data below represent the proportion of rejected castings for vari-
ous defects on a particular production line. Make a P-chart using the data on the daily
proportion of defectives. Draw any useful information that can be gleaned from these
data based on the P-chart.
DATE 5/1 5/2 5/3 5/6 5/7 5/8 5/9 5/10 5/13 5/28 5/29 5/30 5/31 6/3 6/12
No. made 115 192 146 350 284 236 353 193 212 174 329 289 238 157 162
No. inspected 26 177 128 150 230 236 204 143 171 136 276 156 234 108 85
No. defective 1 54 43 63 67 73 69 49 42 44 100 47 98 43 36
DATE 6/13 6/14 6/15 6/17 6/18 6/19 6/20 6/21 6/24 6/25 6/26 6/27 6/28 7/1 7/2
No. made 151 158 180 135 305 191 169 170 309 104 190 162 313 138 295
No. inspected 55 158 150 88 305 147 167 159 266 93 190 128 259 127 158
No. defective 29 104 112 69 137 61 73 101 126 46 94 85 135 43 51
DATE 7/3 7/5 7/8 7/9 7/10 7/11 7/29 7/30 7/31 8/1 8/2 8/5 8/6 8/7 8/8
No. made 164 174 188 149 164 134 319 135 184 180 141 152 142 182 159
No. inspected 164 170 140 94 151 83 240 124 173 166 141 103 128 161 140
No. defective 70 74 80 51 84 43 144 57 63 78 69 38 44 42 39
DATE 8/9 8/12 8/13 8/14 8/15 8/16 8/28 8/29 8/30 8/31 9/3 9/4 9/5 9/6 9/7
No. made 266 225 346 355 308 173 101 352 165 333 322 188 179 125 335
No. inspected 142 192 302 329 273 147 23 255 165 296 236 172 144 80 270
No. defective 51 68 128 161 125 66 7 49 35 79 77 36 28 25 65
References
AT&T. 1958. Statistical Quality Control Handbook. 2nd ed. Indianapolis, IN: AT&T
Technologies.
Carey G. R. and R. C. Lloyd. 2001. Measuring Quality Improvement in Healthcare, A Guide
to Statistical Process Control Applications. Milwaukee, WI: ASQ Quality Press.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation. 1995.
Measurement Systems Analysis (MSA)—Reference Manual. 2nd ed. Southfield, MI: AIAG.
Chrysler Group LLC, Ford Motor Company, and General Motors Corporation. 2010.
Measurement Systems Analysis (MSA)—Reference Manual. 4th ed. Southfield, MI: AIAG.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: MIT—Center for Advanced
Engineering Study.
Duncan, A. J. 1956. “The Economic Design of X Charts Used to Maintain Current Control
of a Process.” Journal of the American Statistical Association 51: 228–242.
Duncan, A. J. 1974. Quality Control and Industrial Statistics. 4th ed. Homewood, IL: Irwin.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th ed. New York:
McGraw-Hill.
Krishnamoorthi, K. S. 1989. Quality Control for Operators and Foremen. Milwaukee, WI: ASQC
Quality Press.
Krishnamoorthi, K. S., V. P. Koritala, and C. Jurs. June 2009. “Sampling Variability in
Capability Indices.” Proceedings of IE Research Conference—Abstract. Miami, FL.
Montgomery, D. C. 2013. Introduction to Statistical Quality Control. 7th ed. New York: John
Wiley.
Motorola Corporation. 1992. Six Steps to Six Sigma. Schaumburg, IL: SSG 102, Motorola
University.
Ott, E. R. 1975. Process Quality Control—Trouble Shooting and Interpretation of Data. New York:
McGraw-Hill.
Perla, R. J., L. P. Provost, and S. K. Murray. 2011. “The Run Chart: A Simple Analytical Tool
for Learning from Variation in Healthcare Processes.” BMJ Quality & Safety 20 (1): 46–51,
qualitysafety.bmj.com.
Schmillan, P., and C. Johnson. 2001. “Evaluation of Process Control of Jumbo Ravioli.”
Unpublished project report, IME 522, IMET Department, Bradley University, Peoria,
IL.
Shewhart, W. A. 1931. Economic Control of Manufactured Product. Princeton, NJ: D. Van
Nostrand Co., Inc. Reprinted 1980. Milwaukee, WI: American Society for Quality.
Shewhart, W. A. 1939. Statistical Method from the Viewpoint of Quality Control. Washington,
DC: Graduate School of Department of Agriculture. Reprinted 1986. Mineola, NY:
Dover Publications, Inc.
Van Sandt, T., and P. Britton. 2001. “A Quality Inspection of A+ Cleaning.” Unpublished proj-
ect report, IME 522, IMET Department, Bradley University, Peoria, IL.
5
Quality in Production—Process Control II
5.1 Derivation of Limits
Before we look at these special control charts, we want to discuss how the formulas for
the control limits we used for the three major control charts are derived. We also want
to study the operating characteristics of these charts, which reveal their strengths
and weaknesses in discovering changes in the processes. These topics in the theory
of control charts enable a user to understand how the control charts work, which in
turn helps the user obtain the maximum benefit out of them when using them on real
processes. Furthermore, many real-world problems do not resemble the simplified
textbook versions that can be solved by direct application of the methods. Some do not
satisfy the assumptions needed, and some do not yield data in the format needed. It
then becomes necessary to modify the basic methods to fit the situation at hand. Such
modification of the methods is possible only for a user with a good understanding
of the principles behind, and the assumptions made, in the derivation of the control
chart methods. An understanding of the fundamental theory of the charts is also
necessary if one wants to read more advanced technical literature on these topics or
do research in them.
This chapter also includes some additional topics in process capability and experi-
mental design beyond their coverage in earlier chapters. Specifically, we will discuss
in this chapter:
Derivation of the formulas for control limits of X - and R-charts
Derivation of limits for P- and C-charts
Operating characteristic curves of X - and R-charts
Operating characteristic curves of P- and C-charts
Control charts when the standards for μ and/or σ are given
Control charts for slow processes
The charts for individuals
The moving average and moving range charts
The exponentially weighted moving average chart
Control charts for short runs
Additional topics in process capability
Additional topics in design of experiments
Derivation of the limits for the X -chart is based on the assumption that the process
being controlled follows a normal distribution. We saw in Chapter 2 that if a process
is normally distributed, the sample average from that process is also normally distrib-
uted according to the following law:
If X ~ N(µ, σ²), then X̄n ~ N(µ, σ²/n)
Furthermore, according to the central limit theorem, even if the process distribu-
tion is not normal, the sample averages will tend to be normal as sample sizes become
large. That is, if X has any distribution f(x) with mean µ and variance σ², then
X̄n → N(µ, σ²/n) as n → ∞
Suppose we have a process that is normally distributed with a certain mean and
standard deviation as shown in Figure 5.1. When we say that we want to control this
process, it means that we want to make certain the distribution of this process remains
the same, with the same mean and the same standard deviation, throughout the time
that is of interest to us. To verify this is so, we monitor the X values from samples
taken periodically from this process.
If the process distribution remains the same, then the distribution of X will also
remain the same and the observed values of X will “all” (99.73% of them) fall within
three standard deviations of X ( 3σ X ) from the mean μ. Conversely, if “all” the observed
[Figure 5.1: A process in control, with the same distribution repeating over time.]
values of X̄ fall within three standard deviations from the mean, we can conclude the
process distribution remains the same over that time period. Because we know the X̄
values have the normal distribution, with µX̄ = µ and σX̄ = σ/√n, the limits at three
standard deviations, or the 3-sigma limits for X̄, are given by (see Figure 5.2):
UCL(X̄) = µ + 3σX̄ = µ + 3σ/√n
CL(X̄) = µ
LCL(X̄) = µ − 3σX̄ = µ − 3σ/√n
[Figure 5.2: The distribution of X (standard deviation σ) and the distribution of X̄
(standard deviation σ/√n), with the 3-sigma limits µ − 3σ/√n and µ + 3σ/√n marked
as LCL and UCL.]
Because the values of µ and σ are not usually known, we have to estimate them from
sample data using X̿ and R̄/d2, respectively, where X̿ is the average of about 25 sample
averages and R̄ is the average of about 25 sample ranges from an in-control process.
The factor d2 is a correction factor that makes R̄/d2 an unbiased estimator of σ; in
other words, d2 is such that E(R/d2) = σ.
Thus, substituting for μ and σ with estimates in the above expressions for limits,
we get:
UCL(X̄) = X̿ + 3R̄/(d2√n)
CL(X̄) = X̿
LCL(X̄) = X̿ − 3R̄/(d2√n)
If we set A2 = 3/(d2√n), then:
UCL(X̄) = X̿ + A2R̄
CL(X̄) = X̿
LCL(X̄) = X̿ − A2R̄
Values of A2 are computed for various values of n and provided in standard tables,
such as Table A.4 in the Appendix.
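For example, A2 and the resulting limits can be computed directly from d2; the following Python sketch (ours, not from the text) uses the d2 values of Table A.4 and illustrative numbers for X̿ and R̄:

```python
# A2 gives 3-sigma limits for the X-bar chart: A2 = 3 / (d2 * sqrt(n)).
# d2 values from Table A.4 for n = 2 to 5.
D2_FACTORS = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def xbar_chart_limits(xbarbar, rbar, n):
    a2 = 3 / (D2_FACTORS[n] * n ** 0.5)
    return xbarbar - a2 * rbar, xbarbar, xbarbar + a2 * rbar

# Illustrative numbers only: X-double-bar = 50.2, R-bar = 4.3, n = 4
lcl, cl, ucl = xbar_chart_limits(50.2, 4.3, 4)
print(f"A2 for n = 4: {3 / (D2_FACTORS[4] * 4 ** 0.5):.3f}")   # about 0.729, as in Table A.4
print(f"LCL = {lcl:.2f}, CL = {cl:.2f}, UCL = {ucl:.2f}")
```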
From the above derivation, we learn:
1. The control limit calculations for the X-chart are based on the assumption
that the process is normally distributed.
2. The control limit calculations are good, even for processes that are not nor-
mally distributed provided the sample sizes are large. (Even n = 4 or 5 is
known to be large enough for this purpose as long as the process distribution
is not too far from normal).
3. A2 provides 3-sigma limits for the X-chart. As mentioned in Chapter 4,
Dr. Shewhart, the author of the control chart method, recommended the
3-sigma limits because they provide the economic border line between too-
tight limits, which would produce excessive false alarms, and too-loose lim-
its, which would allow assignable causes to go undetected. If 2-sigma limits
are needed, for example, in a particular situation, as when someone wants
to detect and eliminate assignable causes quickly, the factor to use would be
2A2/3.
4. Because the control limits are at a 3-sigma distance from the centerline (CL),
there is a probability of a false alarm (Type I error) of 0.0027. In other words, even when
the process has not changed, about 3 out of every 1,000 plotted points can be expected to
fall outside the limits purely by chance.
The rule for computing the limits for any 3-sigma control chart can be generalized as
follows.
If Θ is the statistic that is plotted to control a process parameter θ, then the CL
should be at the average of the statistic, or the expected value E(Θ), and the UCL and
LCL should be at E(Θ) + 3σ(Θ) and E(Θ) − 3σ(Θ), respectively, where σ(Θ) represents
the standard deviation of the statistic Θ. In other words, for any control chart, the cen-
terline will be at the average value of the statistic being plotted, and upper and lower
control limits will be placed at a distance of 3 × (standard deviation of the statistic) on
either side of the centerline. Therefore, for the R-chart, the CL should be at E(R), and
the two control limits should be at E(R) ± 3σ(R). So, we will need the expected value
of R, E(R) and the standard deviation of R to design the control limits for R.
The statistic W = R/σ, where R is the sample range and σ is the process standard
deviation, has been studied, and its distribution—along with its mean and standard
deviation—are known for samples taken from normal populations (Duncan 1974).
The statistic W is called the “relative range.” Specifically, E(W) = E(R/σ) = d2, a con-
stant for a given sample size n. Therefore, E(R) = σd2. Similarly, it is known that st. dev.
(W) = st. dev. (R/σ) = d3, a constant for a given n, which means that st. dev. (R) = σd3.
Values of d2 and d3 have been tabulated for various sample sizes drawn from normal
populations, as in Table A.4 in the Appendix. Using these, the limits for the R-chart
can be computed as:
UCL(R) = R̄ + 3R̄d3/d2 = R̄(1 + 3d3/d2)
CL(R) = R̄
LCL(R) = R̄ − 3R̄d3/d2 = R̄(1 − 3d3/d2)
Setting D4 = 1 + 3d3/d2 and D3 = 1 − 3d3/d2, we get:
UCL(R) = D4R̄
CL(R) = R̄
LCL(R) = D3R̄
Values of the constants D 3 and D4 have been computed and tabulated for various
values of n and are available in tables such as Table A.4 in the Appendix.
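As a quick check (our own sketch, using the d2 and d3 values of Table A.4), D3 and D4 can be computed from their definitions; lower-limit factors that come out negative are set to zero, which is why D3 = 0 for small n:

```python
# D3 and D4 from d2 and d3 (values from Table A.4 for samples from normal populations)
d2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}
d3 = {2: 0.853, 3: 0.888, 4: 0.880, 5: 0.864}

def r_chart_factors(n):
    D4 = 1 + 3 * d3[n] / d2[n]
    D3 = max(0.0, 1 - 3 * d3[n] / d2[n])   # negative values are truncated to 0
    return D3, D4

for n in (2, 3, 4, 5):
    D3, D4 = r_chart_factors(n)
    print(f"n = {n}: D3 = {D3:.3f}, D4 = {D4:.3f}")
# For n = 5 this gives D3 = 0 and D4 about 2.11, matching the tabled values.
```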
From this derivation, we note the following:
1. The control limits for the R-chart have been calculated on the assumption that
the samples are drawn from normal populations.
2. The control limits are 3-sigma limits. If, for example, 2-sigma limits are
needed, then the limits can be calculated as R(1 ± 2(d 3 /d 2 )), using d3 and d2
from Table A.4.
3. The control limits are not equidistant on both sides of the CL, because D3
will be smaller than D4 for all sample sizes.
4. These control limits are based on estimates for σ obtained from process data.
We should first recall that we want to control the proportion defectives p in the popu-
lation and we use the proportion defective P in the sample for this purpose. We want
the limits of natural variability for the P. To determine the control limits for the P, we
should first know the expected value and the standard deviation of the statistic P that
is being plotted.
P = D/n, where D is the number of defective units in a sample of size n drawn from
a population with p fraction defectives. From Chapter 2, D is a binomial variable; that
is, D ∼ Bi(n, p) and E(D) = np and V(D) = np(1–p). Therefore,
E(P) = E(D/n) = (1/n)E(D) = (1/n)np = p
V(P) = V(D/n) = (1/n²)V(D) = (1/n²)np(1 − p) = p(1 − p)/n
Thus, the standard deviation of P is σ(P) = √(p(1 − p)/n).
UCL(P) = p + 3√(p(1 − p)/n)
CL(P) = p
LCL(P) = p − 3√(p(1 − p)/n)
Because the value of p, the proportion defectives in the population, is generally not
known, it is estimated using the average p̄ of the observed values of P from about 25
samples. Hence, the limits are:
UCL(P) = p̄ + 3√(p̄(1 − p̄)/n)
CL(P) = p̄
LCL(P) = p̄ − 3√(p̄(1 − p̄)/n)
In the above derivation we have used the following notations:
p is the proportion of defectives in the population to be controlled;
P is the statistic from sample used as an estimator for p;
pi is an observed value of P in the i-th sample; and
p̄ is the average of the pi values.
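A small sketch (ours) wraps the P-chart limit formulas into a function; a negative lower limit is truncated at zero, as is the usual practice:

```python
import math

def p_chart_limits(p_bar, n):
    """3-sigma limits for a P-chart; a negative LCL is truncated to 0."""
    spread = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - spread), p_bar, p_bar + spread

# For instance, with p-bar = 0.10 and samples of n = 100:
lcl, cl, ucl = p_chart_limits(0.10, 100)
print(f"LCL = {lcl:.2f}, CL = {cl:.2f}, UCL = {ucl:.2f}")   # 0.01, 0.10, 0.19
```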
The parameter that we want to control is c, the average number of defects per unit in
the population. We plot the statistic C, which represents the number of defects in any
sample unit.
The random variable C has the Poisson distribution, and from Chapter 2, we know
that E(C) = c and std. deviation (C) = √c.
Therefore, the 3-sigma limits for the statistic C are:
UCL(C) = c + 3√c
CL(C) = c
LCL(C) = c − 3√c
Because the value of c, the population parameter, is usually not known, we can esti-
mate it by c̄, the average of the observed values of C from about 25 sample units.
The limits, then, are:
UCL(C) = c̄ + 3√c̄
CL(C) = c̄
LCL(C) = c̄ − 3√c̄
In the above derivation we have used the following notations:
c is the average number of defects per unit in the population, which is to be controlled;
C is the statistic used as an estimator for c;
ci is an observed value of C from the i-th sample unit; and
c̄ is the average of the ci values from the samples.
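The C-chart limits can be computed the same way; the sketch below (ours) reproduces, for instance, the limits 0, 9, and 18 obtained when c̄ = 9:

```python
import math

def c_chart_limits(c_bar):
    """3-sigma limits for a C-chart; a negative LCL is truncated to 0."""
    spread = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - spread), c_bar, c_bar + spread

lcl, cl, ucl = c_chart_limits(9.0)
print(f"LCL = {lcl:.0f}, CL = {cl:.0f}, UCL = {ucl:.0f}")   # 0, 9, 18
```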
The operating characteristics (OC) curve of a control chart describes the ability of the
control chart to discover assignable causes of various magnitudes. When they occur on
a process, assignable causes could disturb the process mean as well as the process stan-
dard deviation. In the following discussion, we will assume, for the sake of simplicity,
that an assignable cause changes only the process mean and not the standard devia-
tion. Therefore, the magnitude of an assignable cause is designated by the amount of
shift it causes in the process mean, measured in number of standard deviations of the
process.
The change in the process mean is represented on the x-axis by k, where k is the dis-
tance, in number of standard deviations, that the process mean is moved by an assign-
able cause. The ability of the control chart to detect a change is represented on the
y-axis by β, which is the probability that the control chart will accept the process even
though it has changed; (1 − β) is then the probability that the change in the process will
be detected and the process rejected. In Figure 5.3, two examples of OC curves are
shown. The OC curve of Chart 1 indicates that the chart will accept a process with
a probability of about 0.9 (reject with probability 0.1) when its mean has been moved
by two standard deviations; whereas according to the OC curve of Chart 2, this chart
will accept such a change with a probability of only about 0.5. Chart 2 is the stricter,
more discriminating chart because it will accept a process with the given change with
a smaller probability. Thus, the OC curve will be able to tell how good a chart is in
detecting changes in the process it monitors.
[Figure 5.3: Two examples of OC curves, plotting β against k for Chart 1 and Chart 2.]
5.2.1.1 Computing the OC Curve of an X -Chart Figure 5.4 shows a process that has a
mean equal to μ being moved by an assignable cause to a new location μ + kσ, where
σ is the standard deviation of the process, which we assume remains unchanged. The pro-
cess is controlled by an X̄-chart with control limits at µ ± 3σ/√n. When the pro-
cess is moved to the new location, the distribution of X also moves to the new location.
The shaded area in the new distribution of X represents the probability that the X
values will fall within the original control limits after the process has moved to the new
location, and thus the probability that the control chart will still accept the process now
located at the new mean. This area represents β, and its value can be calculated as below.
[Figure 5.4: The process mean moved by an assignable cause from µ to µ + kσ; the
distribution of X̄ (standard deviation σ/√n) is shown against the original control limits
at µ ± 3σ/√n.]
β = P(LCL ≤ X̄ ≤ UCL | mean = µ + kσ)
  = P(µ − 3σ/√n ≤ X̄ ≤ µ + 3σ/√n | mean = µ + kσ)
  = P[(µ − 3σ/√n − µ − kσ)/(σ/√n) ≤ Z ≤ (µ + 3σ/√n − µ − kσ)/(σ/√n)]
  = P(−3 − k√n ≤ Z ≤ 3 − k√n) = Φ(3 − k√n) − Φ(−3 − k√n)
We notice that β is a function of k, which represents the size or magnitude of the
assignable cause; and n, the sample size used for the control chart. The following
example shows how the OC curve is calculated and graphed.
Example 5.1
Calculate the OC curves for a 3-sigma control chart when sample size n = 4 and
when n = 16.
Solution
Calculating the OC curve involves calculating the values of β for several selected
values of k. The calculated values of the OC function are shown in Table 5.1a and
Table 5.1b for n = 4 and n = 16, respectively. As an example of the calculations,
suppose that n = 4 and k = 1. Then, using the CDF values of the standard normal
distribution Φ(.) from the standard normal tables, β = Φ(3 − 1·√4) − Φ(−3 − 1·√4) =
Φ(1) − Φ(−5) = 0.8413 − 0.0000 = 0.8413, which is the entry for k = 1 in Table 5.1a.
Notice that in Example 5.1 the X-chart with n = 4, a commonly used chart, called
the conventional chart, has a probability of about 0.8 of accepting (or 0.2 of rejecting) a
process that has moved through a 1σ distance. Only when the process shift is greater than
1.5σ does this X-chart have any reasonable probability (>0.50) of rejection. Also, notice
that when the sample size increases to 16 the probability of acceptance for 1σ change
drops to 0.16 and the power (1 − β) for detecting the change increases to 0.84. Thus, when
n increases the power of the chart to discover a given amount of change in the process
mean increases.
TABLE 5.1a OC Function of an X -Chart with 3-Sigma Limits and n = 4
k 0.0 0.5 1.0 1.5 2.0 2.5
β 0.9973 0.9772 0.8413 0.5 0.1587 0.0228
[Figure: OC curves of the X̄-chart with 3-sigma limits for n = 4 and n = 16 (β plotted against k).]
We can see from the above OC curves that the conventional X -chart with n = 4 and
3-sigma limits is not very sensitive to changes in the process mean unless the change in
the mean is more than 1.5σ distance. This may not be a bad feature for the control chart
when such small changes need not be discovered. In fact, Dr. Shewhart, the author of the
control charts, designed it to be that way. When small changes in the process mean are
important, however, an increase in power can be obtained by increasing the sample size.
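The β values in Table 5.1a, and the corresponding values for n = 16, can be reproduced with a few lines of Python; this sketch is ours, with the standard normal CDF Φ evaluated through math.erf:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def beta_xbar(k, n):
    """P(X-bar falls within the 3-sigma limits after a k-sigma shift in the mean)."""
    return phi(3 - k * math.sqrt(n)) - phi(-3 - k * math.sqrt(n))

for n in (4, 16):
    row = [round(beta_xbar(k, n), 4) for k in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5)]
    print(f"n = {n:2d}: {row}")
# For n = 4 this reproduces Table 5.1a: 0.9973, 0.9772, 0.8413, 0.5, 0.1587, 0.0228
```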
The OC curve of an R-chart will show the probability of acceptance by the chart when
the process standard deviation changes from the original value. The change in the
process standard deviation is denoted by λ = σ1/σ, where σ and σ1 are the original and
the changed (new) standard deviations, respectively. The probability that the process
will be accepted by the chart is denoted as β and is calculated as follows. This calcula-
tion of β makes use of the distribution of the relative range R/σ:
β = P(LCL ≤ R ≤ UCL | σ = σ1)
  = P(LCL/σ1 ≤ R/σ1 ≤ UCL/σ1)
  = P(LCL/σ1 ≤ w ≤ UCL/σ1)
  = P(σ(d2 − 3d3)/σ1 ≤ w ≤ σ(d2 + 3d3)/σ1)
  = P(D3d2/λ ≤ w ≤ D4d2/λ)
because D3 = (d2 − 3d3)/d2, D4 = (d2 + 3d3)/d2, and λ = σ1/σ.
For a given n, β can be calculated for different values of λ using tables of distribu-
tion of w, which are available, for example, in Duncan (1974). Figure 5.6, which is
reproduced from Duncan (1951), shows the OC curve of the R-chart with 3-sigma
limits for several values of n.
We notice in Figure 5.6, for the R-chart with n = 4, when the standard deviation
becomes twice that of the original value (i.e., λ = 2.0), the probability of acceptance
is about 0.66. This means that the probability of rejection when such a large change
occurs is only 0.34. The probability of rejection becomes greater than 0.5 only when
λ > 2.4. This again shows that the R-chart with the conventional sample size of n = 4
is not very sensitive, either, to changes in the process standard deviation.
[Figure 5.6 plots Pa, the probability of a point falling within the control limits on the
first sample taken after an increase in the process standard deviation, against
λ = σ1/σ, the ratio of the new to the old process standard deviation, for sample sizes
n = 2 to 15.]
Figure 5.6 OC curve of an R-chart for various sample sizes. (Reproduced from Duncan, A.J. “Operating Characteristics
of R-Charts.” Industrial Quality Control, pp. 40–41. Milwaukee: American Society for Quality, 1951. With permission.)
Average run length (ARL) is the average number of samples needed for a control chart
to signal a change when a change occurs on the process being controlled. This needs
some explanation. Suppose that a process is currently being controlled by an X -chart
at an average level μ0, and suppose it is disturbed by an assignable cause that moves the
process to a different average level μ1, as shown in Figure 5.7. Will the X value from
the sample taken immediately after the change occurred fall outside the limits on the
chart? It may, or it may not. There is a probability that this X̄ value will fall outside
the limits and there is a probability that it will not. The probability that it will equals
the area shown shaded in the distribution of X in Figure 5.7. The question then arises:
How many samples will it take for the chart to indicate that the change has occurred?
The number of samples needed to discover the change is a random variable.
If Y is the random variable that represents the number of samples needed before an
X value falls outside the limits after the change has occurred, then Y has the geomet-
ric distribution with parameter p, where p is the probability that an X value will fall
outside the limits, given that the change has occurred. It can be seen that p = 1 − β,
where β is the probability of acceptance calculated for the OC curve. The distribution
of the random variable Y, the geometric distribution, is represented by the probability
mass function:
p(y) = (1 − p)^(y−1) p,  for y = 1, 2, 3, …
The average of Y can be shown to be:
E(Y) = Σ_{y=1}^{∞} y(1 − p)^(y−1) p = 1/p
[Figure 5.7: A process shifted by an assignable cause from the original mean µ0 to a
new mean µ1, plotted over time against the original control limits UCL and LCL.]
This average value of Y is called the ARL. Thus, it is the number of samples needed,
on average, for the control chart to discover an assignable cause when it occurs. The
ARL is a function of p, which in turn depends on the size of the assignable cause and
the parameters of the control chart—sample size and spread for the limits. It is used as
a measure of how well a control chart is able to discover changes in a process.
Example 5.2
Calculate the ARL values for the X-chart with 3-sigma limits when n = 4 and when
n = 16 for different assignable causes (i.e., for different values of k). Draw the ARL
curves.
Solution
The ARL values for the two X-charts are shown in Table 5.2a and Table 5.2b. These
tables use the data on OC curves computed in Example 5.1. The graphs of the ARL
as a function of k are shown in Figure 5.8.
Note that the ARL curve for n = 4 shows that if the change in the process mean
is less than 1σ, for instance, 0.75σ, then the ARL is very large, meaning that it will
take many, many samples to discover such a change. Such a change, however, will be
discovered by the chart with n = 16 in a reasonable number of samples. If the change is
only 0.5σ, even the chart with n = 16 may never detect such a change.
The ARL curves have the same information in them as the OC curves; however,
the ARL is a more meaningful measure of performance than the OC function. Hence,
it is used more often to evaluate and compare the performance of control charts by
researchers who design newer, improved control charts.
[Figure 5.8: ARL curves of the X̄-chart for n = 4 and n = 16 (ARL plotted against k).]
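Because ARL = 1/(1 − β), the ARL curves can be generated from the same OC calculation; the sketch below (ours, not from the text) does so for n = 4 and n = 16:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def arl_xbar(k, n):
    """Average run length of a 3-sigma X-bar chart after a k-sigma shift in the mean."""
    beta = phi(3 - k * math.sqrt(n)) - phi(-3 - k * math.sqrt(n))
    return 1 / (1 - beta)

for k in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"k = {k}: ARL(n=4) = {arl_xbar(k, 4):7.1f}, ARL(n=16) = {arl_xbar(k, 16):6.1f}")
# k = 0 gives the in-control ARL of about 370 samples (1/0.0027).
```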
Example 5.3
A P-chart with a sample size of n = 100 is used to control a process at a fraction
defective of p = 0.1. Draw the OC curve of this chart.
Solution
The limits for the chart to control at p = 0.1 would be:
UCL(P) = 0.1 + 3√[(0.1)(0.9)/100] = 0.19
CL(P) = 0.1
LCL(P) = 0.1 − 3√[(0.1)(0.9)/100] = 0.01
Suppose that the process fraction defective becomes p1. The probability of accep-
tance Pa(p1) is then given by:
β = P(an observed value of P falls between the control limits | p = p1)
  = P(0.01 < P < 0.19 | p = p1)
  = P(1 < 100P < 19 | p = p1)
  = P(1 < D < 19 | p = p1)
where D is the number of defectives in the sample of n = 100. (We have assumed
that on-the-line is out.)
Therefore,
β = P ( D ≤ 18 | p = p1 ) − P ( D ≤ 1 | p = p1 ), where D ∼ Bi (100, p1 )
We can assign different values for p1 and obtain the values for β using the Poisson
approximation to binomial and the cumulative Poisson tables (Table A.5 in the
Appendix). The OC function is shown in Table 5.3, and the OC curve is shown in
Figure 5.9. For example, when p1 = 0.01,
β = P ( D ≤ 18 | p = 0.01) − P ( D ≤ 1 | p = 0.01)
where D is a Poisson variable with mean np = 100 × 0.01 = 1.0. Then, using the
Poisson table, β = 1.000 − 0.736 = 0.264.
[Figure 5.9: OC curve of the P-chart (β plotted against p1).]
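The OC function of the P-chart can also be computed exactly with the binomial distribution rather than the Poisson approximation used above; the following sketch (ours) gives values very close to those read from the Poisson table:

```python
import math

def binom_cdf(k, n, p):
    """P(D <= k) for D ~ Binomial(n, p)."""
    return sum(math.comb(n, d) * p**d * (1 - p)**(n - d) for d in range(k + 1))

def beta_p_chart(p1, n=100):
    """P(a point falls inside the limits 0.01 < P < 0.19 | p = p1).
    With n = 100, 0.01 < D/n < 0.19 means 2 <= D <= 18 (a point on a limit counts as out)."""
    return binom_cdf(18, n, p1) - binom_cdf(1, n, p1)

for p1 in (0.01, 0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"p1 = {p1:.2f}: beta = {beta_p_chart(p1):.3f}")
# For example, beta is about 0.26 when p1 = 0.01, close to the Poisson value 0.264.
```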
The OC curve of a C-chart designed for controlling a process at, say, c = c 0, will show
the probabilities of the chart accepting the process when the value of c changes to
values other than c 0. Drawing the OC curve of the C-chart is illustrated using an
example.
Example 5.4
Suppose that a process produces an average number of defects per unit c 0 = 9 and we
use a C-chart to control the process at this c value. We want to find the OC curve
of this control chart.
Solution
The control limits for the C-chart to control this process at its current level
would be:
UCL(C) = 9 + 3√9 = 18
CL(C) = 9
LCL(C) = 9 − 3√9 = 0
The OC curve of the chart should show the probability of acceptance β of pro-
cess conditions with various values for c besides c 0. The β is the probability that an
observed value of C falls within the control limits given that c is not at c 0, but at
another value of c, c 1:
β = P(0 < C < 18 | c = c1)
  = P(C < 18 | c = c1) − P(C ≤ 0 | c = c1)
  = P(C ≤ 17 | c = c1) − P(C ≤ 0 | c = c1)
The OC function is calculated in Table 5.4, and the OC curve is shown in Figure
5.10. The probability of acceptance, Pa, in Table 5.4 is obtained using the cumula-
tive Poisson table. (We have again assumed that on-the-line is out.) As an example,
when c1 = 2, the Pa is given by Pa = P(C ≤ 17 | c = 2) − P(C ≤ 0 | c = 2) = 1.000 − 0.135 = 0.865.
[Figure 5.10: OC curve of the C-chart (β plotted against c1).]
This insensitivity may not be an undesirable feature for the control chart when the discovery of a change less
than 75% of the process average is not important. When smaller changes are impor-
tant, however, the conventional C-chart is not very powerful. The use of warning
limits and runs can improve the sensitivity of the C-chart.
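The probabilities behind the OC curve of the C-chart come directly from the cumulative Poisson distribution; a short sketch (ours, not from the text) computes them for several values of c1:

```python
import math

def poisson_cdf(k, mu):
    """P(C <= k) for C ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu**i / math.factorial(i) for i in range(k + 1))

def beta_c_chart(c1, ucl=18, lcl=0):
    """P(LCL < C < UCL | c = c1), counting a point exactly on a limit as out."""
    return poisson_cdf(ucl - 1, c1) - poisson_cdf(lcl, c1)

for c1 in (2, 5, 9, 12, 15, 20):
    print(f"c1 = {c1:2d}: beta = {beta_c_chart(c1):.3f}")
# c1 = 2 gives about 0.865, as in the example above.
```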
The study of the control charts using their operating characteristics reveals some
of their important characteristics. It tells us when the charts are effective in discover-
ing changes in a process and when they are not. Such studies have led researchers to
create newer and better control charts for occasions where new features are needed to
overcome some of the drawbacks inherent in the basic charts. These topics are covered
in more advanced literature on the subject.
When we discussed the X - and R-charts, we used the following formulas to calculate
the CL and control limits:
Control limits for the R-chart:
UCL(R) = D4R̄
CL(R) = R̄
LCL(R) = D3R̄
Control limits for the X̄-chart:
UCL(X̄) = X̿ + A2R̄
CL(X̄) = X̿
LCL(X̄) = X̿ − A2R̄
where A2, D 3, and D4 are tabled factors that give 3-sigma limits for the charts.
These formulas use X and R obtained from the data collected from the process
to estimate process mean and process standard deviation, respectively. When we use
these limits, we should realize that we are controlling the process to behave consis-
tently at the current level of the process mean and within the current level of process
variability. In other words, we are not imposing any outside standards on the process.
Therefore, we say that we use these limits to maintain “current control.” In many pro-
cess situations, such maintaining of current control is necessary or desirable. In many
others, however, we may be given a standard for the mean, a target, or a standard for
the standard deviation, or both, and required to control the process to conform to
those given standards. For example, there may be a process producing 3/4-in. bolts,
and the bolt diameter is being controlled using X- and R-charts. It may make sense
for this process to control the process average at 3/4 in., in effect forcing the process
to a mean of 3/4 in. When we have to control a process against given standards, we
have to use formulas for calculating the limits and the center line different from the
ones used for controlling at the “current” average. Two cases arise.
5.3.1.1 Case I: μ Given, σ Not Given This case arises when the process mean should
be controlled at a given standard average level but the variability at which the process
must be controlled is not given. In this case, data will be collected as for the regular
X- and R-charts, and the R value (from at least 25 subgroups of an in-control pro-
cess) will be used to estimate the process variability σ. The following formulas will
be used, which employ the given standard for the mean, T (for target), as the center
of the X -chart and an estimate of σ using R for calculating the spread of the limits.
Similarly, the CL and limits for the R-chart are computed from the estimate of σ:
Control limits for the R-chart:
UCL(R) = D4R̄
CL(R) = R̄
LCL(R) = D3R̄
Control limits for the X̄-chart:
UCL(X̄) = T + A2R̄
CL(X̄) = T
LCL(X̄) = T − A2R̄
These formulas are no different from those given for current control, except for T
in place of X.
5.3.1.2 Case II: μ and σ Given In this case, we do not use process data either to esti-
mate the process mean or the process standard deviation. The formulas are calculated
out of the given standards and reflect the limits of variability that we should expect in
the statistics X and R, given the values of the mean and standard deviation of this
process. Suppose that the standard given for the mean is T and that for the standard
deviation is σ0. The formulas will then be as follows:
Control limits for the R-chart:
UCL(R) = D2σ0
CL(R) = d2σ0
LCL(R) = D1σ0
Control limits for the X̄-chart:
UCL(X̄) = T + Aσ0
CL(X̄) = T
LCL(X̄) = T − Aσ0
The values for A, D 1, D 2 , and d 2 are available from standard tables, such as Table
A.4 in the Appendix. The reader should be able to see why these formulas are
good for the case when σ is given. The CL of R is the E(R) in terms of the given
standard deviation σ0, and D 2 and D 1 give the 3-sigma limits for R in terms of σ0.
In fact, D2 = (d2 + 3d3), and D1 = (d2 − 3d3). To obtain the limits for X̄ we use A,
which is simply equal to 3/√n, because σX̄ = σ0/√n.
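As an illustration (our own sketch, using the Table A.4 factors for n = 5), the limits for the standards-given case can be computed as follows:

```python
# X-bar and R chart limits when both a target T and a standard sigma_0 are given.
# Factors for n = 5 (Table A.4): A = 3/sqrt(5) = 1.342, d2 = 2.326, D1 = 0, D2 = 4.918
A, d2, D1, D2 = 1.342, 2.326, 0.0, 4.918

def limits_given_standards(T, sigma0):
    xbar_limits = (T - A * sigma0, T, T + A * sigma0)     # (LCL, CL, UCL)
    r_limits = (D1 * sigma0, d2 * sigma0, D2 * sigma0)    # (LCL, CL, UCL)
    return xbar_limits, r_limits

xbar_lim, r_lim = limits_given_standards(T=10.0, sigma0=1.0)
print("X-bar chart (LCL, CL, UCL):", xbar_lim)   # (8.658, 10.0, 11.342)
print("R-chart     (LCL, CL, UCL):", r_lim)      # (0.0, 2.326, 4.918)
```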
Example 5.5
Prepare X - and R-charts for the data in Table 5.5 if the process is to be controlled
at a target T = 10. No standard value for σ is given.
Solution
The limit calculations for the case when μ is given but the value of σ is not are shown
below. From Table 5.5, the value of R is obtained as 3.307. The control limits for
the R-chart are calculated using the values of D3 and D4 for n = 5:
UCL(R) = D4R̄ = 2.114 × 3.307 = 6.99
CL(R) = R̄ = 3.307
LCL(R) = D3R̄ = 0
All R values are within limits, and the R̄ value from the in-control process is
used to calculate the limits for the X̄-chart. The control limits for the X̄-chart are:
UCL(X̄) = T + A2R̄ = 10 + (0.577)(3.307) = 11.91
CL(X̄) = T = 10
LCL(X̄) = T − A2R̄ = 10 − (0.577)(3.307) = 8.09
Figure 5.11 shows the graph of the control charts drawn using Minitab.
Incidentally, notice the close agreement between the calculated values for the limits
and those computed by Minitab.
TABLE 5.5 Data for X - and R-Charts When µ Is Given but σ Is Not
NO. X1 X2 X3 X4 X5 X RANGE
1 10.09 11.32 10.81 9.19 14.93 11.269 5.739
2 10.16 8.88 9.35 11.16 7.07 9.323 4.092
3 10.93 9.70 9.63 10.89 9.99 10.227 1.296
4 8.36 10.06 9.42 11.06 10.25 9.828 2.702
5 8.89 9.62 11.27 9.03 11.49 10.060 2.605
6 10.66 9.91 10.03 10.45 8.51 9.913 2.143
7 7.68 8.38 9.13 11.29 10.04 9.301 3.610
8 13.21 8.94 10.95 12.48 11.96 11.509 4.273
9 10.07 10.85 11.25 11.06 11.78 11.002 1.719
10 6.89 8.12 11.09 12.67 13.80 10.514 6.919
11 8.01 10.80 10.65 9.20 9.42 9.615 2.780
12 11.84 10.32 10.01 10.22 10.09 10.498 1.836
13 11.11 10.31 8.88 12.52 10.70 10.704 3.644
14 9.94 9.61 8.76 12.18 10.42 10.184 3.419
15 9.21 12.02 10.05 9.32 8.53 9.829 3.488
16 11.61 12.22 11.16 12.05 10.52 11.514 1.700
17 11.68 9.52 10.43 9.03 12.92 10.715 3.891
18 8.87 11.78 7.41 13.14 10.17 10.274 5.724
19 11.36 8.13 6.78 11.73 10.74 9.747 4.951
20 6.35 9.67 9.94 11.42 10.31 9.537 5.075
21 8.67 10.28 9.55 10.12 10.17 9.758 1.614
22 10.24 9.59 11.19 11.30 10.32 10.528 1.710
23 10.30 9.93 10.86 12.09 9.99 10.631 2.160
24 10.32 8.31 8.13 10.73 10.44 9.586 2.598
25 9.14 10.55 10.27 7.96 7.58 9.100 2.975
R= 3.307
[Figure 5.11: X̄- and R-charts for Example 5.5 drawn using Minitab. X̄-chart: UCL = 11.91,
Mean = 10, LCL = 8.093. R-chart: UCL = 6.992, R̄ = 3.307, LCL = 0.]
Example 5.6
Make X- and R-charts for the data in Table 5.5 for the process to be controlled at
the given target T = 10 and a given standard deviation σ0 = 1.0.
Solution
The limits for the control charts are calculated as follows, using the factors for n = 5
from Table A.4 (A = 1.342, d2 = 2.326, D1 = 0, D2 = 4.918).
Control limits for the R-chart:
UCL(R) = D2σ0 = 4.918(1.0) = 4.918
CL(R) = d2σ0 = 2.326(1.0) = 2.326
LCL(R) = D1σ0 = 0
Control limits for the X̄-chart:
UCL(X̄) = T + Aσ0 = 10 + 1.342(1.0) = 11.34
CL(X̄) = T = 10
LCL(X̄) = T − Aσ0 = 10 − 1.342(1.0) = 8.66
Figure 5.12 shows the control charts drawn for Example 5.6 using Minitab. We
again see a close agreement between the limits calculated above and those computed
by Minitab. We also see several R values and X values outside the control limits
in the new chart because an external standard on variability has been imposed on
the process.
[Figure 5.12: X̄- and R-charts for Example 5.6 drawn using Minitab. X̄-chart: UCL = 11.34,
Mean = 10, LCL = 8.658, with several points outside the limits. R-chart: UCL = 4.918,
CL = 2.326, LCL = 0, also with points outside the limits.]
We should remember that in order to use the X - and R-charts, we need four or five
sample units from the process so that the measurements from them can be used to
calculate X and R. These sample units must be taken (approximately) at the same time
to represent the condition of the process at the time the sample is taken. There are
situations where it may not be possible to take such multiple sample units at the same
time, or in quick succession. Chemical processes are examples of such slow processes,
in which production cycle times are long and the analysis and reporting of sample
measurements take time. The simplest approach in such a situation is to plot the indi-
vidual values against the control limits calculated for the individual value X. Such a
chart is called the “control chart for individuals” or the “X-chart.”
5.3.2.1 Control Chart for Individuals ( X-Chart) The control chart for individuals, or the
X-chart, is normally used along with a chart for successive differences, which is known
as a moving range chart, or MR chart, with subgroup size n = 2. The MR chart will
track the variability in the process, and the X-chart will keep track of the process mean.
The moving range is the absolute value of the difference between the current value and
the previous value. There will be no R-value corresponding to the first observation.
The limits for the two charts are calculated using the following formulas if no stan-
dards are given:
Control limits for the MR chart:
UCL(R) = D4R̄
CL(R) = R̄
LCL(R) = D3R̄
Control limits for the X-chart:
UCL(X) = X̄ + 3R̄/d2
CL(X) = X̄
LCL(X) = X̄ − 3R̄/d2
If standards are given—say, T for the mean and σ0 for the standard deviation—then
the limits for the X- and MR charts are calculated as follows:
Control limits for the MR chart:
UCL(R) = D2σ0
CL(R) = d2σ0
LCL(R) = D1σ0
Control limits for the X-chart:
UCL(X) = T + 3σ0
CL(X) = T
LCL(X) = T − 3σ0
The values for d2, D1, D2, D3, and D4 are chosen for n = 2 from Table A.4 in the Appendix.
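A short sketch (ours, not from the text) computes the moving ranges and the X- and MR-chart limits for the no-standards-given case; for illustration it uses the first ten observations of Table 5.6, which appears in Example 5.7 below:

```python
# Limits for the chart for individuals (X-chart) and the moving range (MR) chart,
# no standards given. Factors for n = 2 (Table A.4): d2 = 1.128, D3 = 0, D4 = 3.267.
d2, D3, D4 = 1.128, 0.0, 3.267

def x_mr_limits(x):
    mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]    # moving ranges of size 2
    xbar = sum(x) / len(x)
    rbar = sum(mr) / len(mr)
    x_limits = (xbar - 3 * rbar / d2, xbar, xbar + 3 * rbar / d2)   # (LCL, CL, UCL)
    mr_limits = (D3 * rbar, rbar, D4 * rbar)                        # (LCL, CL, UCL)
    return mr, x_limits, mr_limits

# First ten observations of Table 5.6, for illustration
data = [10.09, 10.62, 10.52, 10.16, 9.94, 10.72, 8.77, 8.91, 10.31, 10.84]
mr, x_limits, mr_limits = x_mr_limits(data)
print("Moving ranges:", [round(m, 2) for m in mr])   # matches the MR column of Table 5.6
print("X-chart  (LCL, CL, UCL):", tuple(round(v, 2) for v in x_limits))
print("MR chart (LCL, CL, UCL):", tuple(round(v, 2) for v in mr_limits))
```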
Example 5.7
The first 20 observations of data shown in Table 5.6 come from a process whose
mean is expected to be at 10 and the standard deviation at 1.0. It is known that the
process mean changed to 11.5, a distance of 1.5σ from the original mean, after the
20th observation and the data after the 20th observation come from the changed
process. Prepare a control chart for individuals with given standards of T = 10 and
σ0 = 1.0, and see how the control chart responds to the change that we know has
occurred in the process mean after the 20th sample.
This is a small simulation experiment where the first 20 observations are gener-
ated using Minitab as random variates from a normal distribution with μ = 10 and
σ = 1.0 and the next 20 observations are generated from a normal distribution with
μ = 11.5 and σ = 1.0. Both sets of data are plotted sequentially on the same graph to
verify how the control limits calculated for the original data can discover the change
that has occurred in the new data after the 20th sample.
Solution
We prepare the control limits using the given standards T = 10 and σ0 = 1.0. The
values of D2 = 3.686, d2 = 1.128, and D1 = 0 for the MR chart are chosen for n = 2 from
Table A.4 in the Appendix. The limits are:
UCL(X) = T + 3σ0 = 10 + 3(1.0) = 13
CL(X) = T = 10
LCL(X) = T − 3σ0 = 10 − 3(1.0) = 7
UCL(R) = D2σ0 = 3.686
CL(R) = d2σ0 = 1.128
LCL(R) = D1σ0 = 0
TABLE 5.6 Data for Control Chart for Individuals (X-Chart) and MR Chart
i Xi MR i Xi MR
1 10.09 21 12.16 1.32
2 10.62 0.53 22 11.11 1.05
3 10.52 0.1 23 9.91 1.2
4 10.16 0.36 24 11.77 1.86
5 9.94 0.22 25 11.58 0.19
6 10.72 0.78 26 11.34 0.24
7 8.77 1.95 27 12.63 1.29
8 8.91 0.14 28 11.37 1.26
9 10.31 1.4 29 12.29 0.92
10 10.84 0.53 30 9.44 2.85
11 10.8 0.04 31 11.58 2.14
12 10.53 0.27 32 9.85 1.73
13 8.61 1.92 33 11.1 1.25
14 9.81 1.2 34 10.68 0.42
15 10 0.19 35 11.9 1.22
16 12.39 2.39 36 13.01 1.11
17 10.19 2.2 37 12.33 0.68
18 9.31 0.88 38 12.69 0.36
19 10.57 1.26 39 10.9 1.79
20 10.84 0.27 40 10.62 0.28
[X-chart: UCL = 13, CL = 10, LCL = 7. Moving range chart: UCL = 3.686, R̄ = 1.128, LCL = 0.]
Figure 5.13 Example of a control chart for individuals and chart for successive differences.
The charts prepared using Minitab are shown in Figure 5.13. The chart shows
the process to be in-control when the first 20 observations were taken, as we would
expect. The change in the process mean occurring after the 20th observation has
been detected by the X-chart, but only after 16 samples had been taken from the
time the change really occurred. The X-chart is not a powerful chart and, because
the individual values have a lot more variability compared to the averages, it is dif-
ficult to see the signals in the presence of large noise.
Also, the performance of the X-chart is very sensitive to departures of the process distribution from normality, because individual X values do not have the robustness that averages gain from the central limit theorem. Therefore, the moving average and moving range charts, discussed next, are preferred in these circumstances. The X-chart, however, has some advantages: it is simple to use and easy to understand, and the specification lines can be drawn on the chart so that the characteristic can be monitored against the given specifications. Such specification lines, as we have mentioned before, cannot and should not be drawn on charts for averages, because specifications are limits for individual values and averages should not be compared against limits meant for individual values.
5.3.2.2 Moving Average and Moving Range Charts The Moving Average (MA) and
Moving Range (MR) charts use sample averages and sample ranges, as do the regular
X - and R-charts, but the method of forming the samples or subgroups is different.
Suppose the sample size is n. When starting the control chart, the first subgroup
is made up of the first n number of consecutive observations. [There are no MA or
MR values for the first (n – 1) observations.] A new subgroup is formed when a new
observation arrives after the nth observation, by adding it to the current subgroup and
discarding the earliest observation from it. The following is the mathematical expres-
sion to obtain the i-th moving average with a subgroup of size n:
MXi = (xi + xi−1 + … + xi−n+1)/n,   i ≥ n
The limits for the two charts are calculated using the following formulas, which are
the same as the ones used for the regular X - and R-charts for current control (with
no standards given).
UCL(X̄) = X̄ + A2R̄        UCL(R) = D4R̄
CL(X̄) = X̄                CL(R) = R̄
LCL(X̄) = X̄ − A2R̄        LCL(R) = D3R̄
If standards are given for the process average—say, T—and for the process standard
deviation—say, σ0 —then the limits will be calculated using the following formulas:
UCL(X̄) = T + Aσ0        UCL(R) = D2σ0
CL(X̄) = T               CL(R) = d2σ0
LCL(X̄) = T − Aσ0        LCL(R) = D1σ0
The factors d2, D1, D2, D3, D4, A, and A2 are the same as those used for the regular X̄- and R-charts for the standards-given case. The following example illustrates the use of the MA and MR charts.
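As a rough illustration of these formulas (ours, not the authors'), the moving averages, moving ranges, and standards-given limits for n = 3 can be computed as follows; the constants used are the Table A.4 factors for n = 3.

from math import sqrt

def moving_avg_range(x, n=3):
    """Moving averages and moving ranges for subgroup size n
    (there are none for the first n - 1 observations)."""
    ma = [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]
    mr = [max(x[i - n + 1:i + 1]) - min(x[i - n + 1:i + 1]) for i in range(n - 1, len(x))]
    return ma, mr

# Standards-given limits for T = 10, sigma0 = 1.0, and n = 3:
T, sigma0, n = 10.0, 1.0, 3
A = 3 / sqrt(n)                          # 1.732 for n = 3
d2, D1, D2 = 1.693, 0.0, 4.358           # factors for n = 3
ma_limits = (T - A * sigma0, T, T + A * sigma0)        # (8.268, 10, 11.732)
mr_limits = (D1 * sigma0, d2 * sigma0, D2 * sigma0)    # (0, 1.693, 4.358)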
Example 5.8
We will continue with the simulation experiment to evaluate how the MA and MR
charts perform in detecting changes in the process.
In this example, the first half of the data, which are shown in Table 5.7, come
from a process in which the average is 10 and the standard deviation is 1.0. After the
20th sample, the process average increases to 11.5, a distance of 1.5σ. The standard
deviation of the process remains the same at 1.0. We want to see how the MA and
MR control charts designed for the original 20 observations are able to detect the
change in the process average when it occurs.
Solution
The computations of the moving averages and moving ranges for n = 3 are shown in
Table 5.7. Notice how the samples are formed. The first subgroup is formed with the
first three observations. The following subgroups are formed by taking in the new-
est observation and dropping out the earliest. The charts are drawn using Minitab.
The limit calculations are verified using the formulas given above. The MR and MA
charts drawn from Minitab are shown in Figure 5.14a and b.
The limits for the two charts are calculated using the given standards: T = 10 and
σ0 = 1.0.
UCL(R) = D2σ0 = 4.358 × 1.0 = 4.358        UCL(X̄) = T + Aσ0 = 10 + 1.732 × 1.0 = 11.732
CL(R) = d2σ0 = 1.693 × 1.0 = 1.693         CL(X̄) = T = 10
LCL(R) = D1σ0 = 0                          LCL(X̄) = T − Aσ0 = 10 − 1.732 × 1.0 = 8.268
The reader may have noticed that this is the same data used to prepare the control
chart for individuals (X-chart) and the MR chart in the previous example. In that
example, the change in the process mean was discovered by the X-chart at the 16th
sample after the shift. The same change has been discovered by the MA chart at the
seventh sample in the current example.
[MR chart: UCL = 4.358, R̄ = 1.693, LCL = 0. MA chart: UCL = 11.73, CL = 10, LCL = 8.268.]
Figure 5.14 (a) MR chart for data in Table 5.7. (b) MA chart for data in Table 5.7.
Although we should be cautious about drawing general conclusions from this one trial, the added power that results from averaging over larger subgroups is to be expected. This example shows the power advantage of a chart based on averages over one based on individual values.
A Caution: Care must be exercised when interpreting and reacting to points that plot outside the control limits on the MA and MR charts. Suppose that an adjustment is made to a process because a moving average falls outside a limit; the next few averages may still fall outside the limits even after the adjustment has been made, because observations generated while the process was not-in-control may still be included in the calculations and may influence some of the subsequent plotted points. Operators must be warned against overreacting to signals under these circumstances.
We noted while studying the operating characteristics of the X̄-chart that the conventional X̄-chart with n = 4 or 5 and 3-sigma limits is not very sensitive to small changes (<1.5σ in magnitude) in the process mean. We also noted that its sensitivity can be improved through the use of rules with 1-sigma and 2-sigma warning limits, or with runs. Another approach to obtaining improved sensitivity in control charts is through the use of statistics that accumulate information from several past sample observations instead of relying on just the one current sample, as is done with the X̄- and R-charts.
Two important examples of such charts are the exponentially weighted moving average
(EWMA) and the cumulative sum (CUSUM) control charts. Both these charts use pro-
cedures to borrow information from previous samples and create a measure at a sample
point that contains accumulated information from several previous samples. They differ
in the manner of accumulation and the resulting measure. Researchers, however, have
established that the EWMA chart and the CUSUM chart are equal in their effective-
ness (Lucas and Saccucci 1990) in discovering small changes in the process mean. It must
be said, however, that the EWMA chart is easier to understand and construct compared
to the CUSUM chart. Therefore, we will focus on the EWMA chart only here.
The EWMA chart is typically used in situations where the sample size n = 1,
although it could be equally useful where n > 1. If xi is the observation from the i-th
sample, the exponentially weighted moving average for this sample wi is calculated as:
wi = λxi + (1 − λ)wi−1

Substituting for wi−1 in terms of wi−2 gives:

wi = λxi + (1 − λ)[λxi−1 + (1 − λ)wi−2]
   = λxi + λ(1 − λ)xi−1 + (1 − λ)²wi−2

Expanding this even further, we get:

wi = λxi + λ(1 − λ)xi−1 + λ(1 − λ)²xi−2 + λ(1 − λ)³xi−3 + … + λ(1 − λ)^(i−1)x1
From the above expansion, we can see that the weighted average wi includes infor-
mation from the most recent observation along with information from previous obser-
vations each weighted with exponentially decreasing weights. By choosing the value
of λ appropriately, the relative importance given to the most recent and past values can
be changed. Larger values of λ will give more importance to the most recent observa-
tion, and smaller values of λ will give more importance to past values. The value of λ has to be chosen for each process, depending on whether the control chart should respond mainly to the current sample or should also take the historical data into account. The example below
explains how the EWMA chart is constructed and used.
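A minimal sketch of the recursion (ours, not from the text); here w1 is started at the first observation, which is the convention used in the example that follows, although starting at the target value is also common.

def ewma(x, lam):
    """Exponentially weighted moving averages: w_i = lam*x_i + (1 - lam)*w_{i-1}."""
    w = [x[0]]                              # start the series at the first observation
    for xi in x[1:]:
        w.append(lam * xi + (1 - lam) * w[-1])
    return w

# ewma([9.86, 9.30, 10.67], 0.6) -> [9.86, 9.524, 10.212]
# (compare the hand calculations in the example that follows, up to rounding)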
Example 5.9
We will continue with the simulation experiment to evaluate the performance of the
EWMA chart as well.
Table 5.8 has two sets of data. Column 2 has data from a normal process with
mean μ = 10 and σ = 1.0 up to Sample 20. The data after the 20th sample come from
a process with an average of 11.0, a change of one standard deviation (k = 1) in the
mean of the original process. Column 5 has data from a normal process with μ = 10
and σ = 1.0 up to the 20th sample, and data after the 20th sample is from a process
with a mean of 11.5, a change of 1.5 standard deviations (k = 1.5) in the mean of the
original process. In both cases, the standard deviation is assumed to remain constant.
Using these two sets of data, we want to compare the performance of the EWMA
chart with that of the Shewhart chart by investigating when the charts are able to
detect the change occurring in the data sets. We want to see how the charts react
when the change in the process is small and when it is large. Also, we would want to
see how changes in the values of λ changes the performance of the EWMA chart.
Solution
We will use the Shewhart chart for individuals and two EWMA charts, one with
λ = 0.6 and another with λ = 0.2. The Shewhart and EWMA charts all have 3-sigma
limits (calculation of limits for the EWMA chart is explained later) and were pro-
duced by Minitab software. Columns 3, 4, 6, and 7 of Table 5.8 show the EWMA
calculations. Columns 3 and 6 use λ = 0.6 and Columns 4 and 7 use λ = 0.2. To show
how the wi is calculated, see, for example, under Column 3,
w1 = x1 = 9.86
w2 = (0.6)9.30 + (0.4)9.86 = 9.53
w3 = (0.6)10.67 + (0.4)9.53 = 10.22
Figures 5.15a to f show the charts for the two processes for the two different λ
values. Figure 5.15a shows the Shewhart chart for the process with a 1σ shift in the
process mean, and Figures 5.15b and c show EWMA charts with λ = 0.6 and λ =
0.2, respectively, for the same process.
The Shewhart chart did not detect the change of 1σ distance in the mean. The
EWMA chart with λ = 0.6 detected the change at the 16th sample after the change,
TABLE 5.8 Data and Calculation of EWMAs for Two Different Values of λ
1 2 3 4 5 6 7
SAMPLE X-CHART EWMA EWMA X-CHART EWMA EWMA
NO. Xi (k = 1) wi (λ = 0.6) wi (λ = 0.2) xi (k = 1.5) wi (λ = 0.6) wi (λ = 0.2)
1 9.86 9.86 9.86 10.09 10.09 10.09
2 9.30 9.53 9.75 10.62 10.40 10.19
3 10.67 10.22 9.93 10.52 10.47 10.26
4 8.33 9.09 9.61 10.16 10.28 10.24
5 10.93 10.19 9.88 9.94 10.08 10.18
6 11.51 10.98 10.20 10.72 10.46 10.29
7 7.98 9.18 9.76 8.77 9.45 9.98
8 10.18 9.78 9.84 8.91 9.13 9.77
9 9.99 9.91 9.87 10.31 9.84 9.88
10 10.22 10.09 9.94 10.84 10.44 10.07
11 8.59 9.19 9.67 10.80 10.65 10.22
12 10.55 10.00 9.85 10.53 10.58 10.28
13 9.41 9.65 9.76 8.61 9.40 9.94
14 9.25 9.41 9.66 9.81 9.64 9.92
15 9.38 9.39 9.60 10.00 9.86 9.93
16 10.19 9.87 9.72 12.39 11.37 10.42
17 10.88 10.48 9.95 10.19 10.66 10.38
18 10.63 10.57 10.09 9.31 9.85 10.16
19 10.52 10.54 10.17 10.57 10.28 10.24
20 10.18 10.32 10.17 10.84 10.61 10.36
21 11.26 10.88 10.39 12.16 11.54 10.72
22 9.80 10.23 10.27 11.11 11.28 10.80
23 10.44 10.36 10.31 9.91 10.46 10.62
24 11.28 10.91 10.50 11.77 11.24 10.85
25 11.59 11.32 10.72 11.58 11.44 11.00
26 8.53 9.65 10.28 11.34 11.38 11.06
27 10.89 10.39 10.40 12.63 12.13 11.38
28 11.02 10.77 10.53 11.37 11.67 11.38
29 11.62 11.28 10.74 12.29 12.04 11.56
30 11.06 11.15 10.81 9.44 10.48 11.14
31 11.57 11.40 10.96 11.58 11.14 11.22
32 10.79 11.03 10.93 9.85 10.37 10.95
33 11.55 11.34 11.05 11.10 10.81 10.98
34 10.81 11.02 11.00 10.68 10.73 10.92
35 12.18 11.72 11.24 11.90 11.44 11.12
36 12.55 12.22 11.50 13.01 12.38 11.50
37 10.43 11.15 11.29 12.33 12.35 11.66
38 12.00 11.66 11.43 12.69 12.55 11.87
39 10.63 11.04 11.27 10.90 11.56 11.67
40 9.81 10.30 10.98 10.62 10.99 11.46
[Panel (a): Shewhart chart for individuals with UCL = 13, CL = 10, LCL = 7, and MR chart with UCL = 3.686, R̄ = 1.128, LCL = 0. Panel (b): EWMA chart (λ = 0.6) with UCL = 11.96, CL = 10, LCL = 8.036. Panel (c): EWMA chart (λ = 0.2) with UCL = 11.00, CL = 10, LCL = 9.000.]
Figure 5.15 (a) Shewhart chart for individuals on a process with 1σ change in the process mean. (b) EWMA chart
(λ = 0.6) for a process with 1σ change in the process mean. (c) EWMA chart (λ = 0.2) for a process with 1σ change
in the process mean. (Continued)
[Panel (d): Shewhart chart for individuals with UCL = 13, CL = 10, LCL = 7, and MR chart with UCL = 3.686, R̄ = 1.128, LCL = 0. Panel (e): EWMA chart (λ = 0.6) with UCL = 11.96, CL = 10, LCL = 8.036. Panel (f): EWMA chart (λ = 0.2) with UCL = 11.00, CL = 10, LCL = 9.000.]
Figure 5.15 (Continued) (d) Shewhart chart for individuals on a process with 1.5σ change in the process mean.
(e) EWMA chart (λ = 0.6) for a process with 1.5σ change in the process mean. (f) EWMA chart (λ = 0.2) for a process with
1.5σ change in the process mean.
and the EWMA chart with λ = 0.2 detected the change at the 13th sample after the
change. In Figure 5.15d, e, and f, similar charts are shown for the process with the
mean changed by 1.5σ distance. The Shewhart chart barely detects the change at the 16th sample after the change, whereas the EWMA chart with λ = 0.6 takes only seven samples to discover the change. The EWMA chart with λ = 0.2 detects the change even earlier, at the fifth sample.
We should be careful about drawing general conclusions from one example, but it can be safely stated that the EWMA chart has improved sensitivity compared with the Shewhart chart, and that this sensitivity can be adjusted by varying the value of λ. When the changes to be detected are large, large λ values are appropriate. (An EWMA chart with λ = 1 is equivalent to the Shewhart chart.) When small changes need to be discovered, small values of λ can be used (Crowder 1989). Roberts (1959), the author
of the EWMA control scheme, claimed this as an advantage for the charts he was
proposing. Vardeman and Jobe (1999) discuss how to select the value of λ in order
to detect the desired magnitude of change in the process average with desired ARL
values.
In the above illustration, the sample size was taken as n = 1 for both the Shewhart chart and the EWMA chart. The EWMA chart is best suited for situations in which the sample size has to be one out of necessity. However, the EWMA chart can be used with X̄ from samples of any size. In addition, the improved sensitivity observed in the EWMA chart with n = 1 can also be expected with charts with larger n, mainly because the information is accumulated over a larger number of samples, and the cumulative measure integrates the information better and discloses the changes sooner.
5.3.3.1 Limits for the EWMA Chart If wi = λxi + (1−λ)wi−1, it can then be shown (Lucas
and Saccucci 1990) that
σ²(wi) = σ² [λ/(2 − λ)] [1 − (1 − λ)^(2i)]
provided that the xi values are independent and come from a normal population with
variance σ2. The Kσ limits for wi would then be:
UCL = μ + Kσ √{[λ/(2 − λ)][1 − (1 − λ)^(2i)]}
CL = μ
LCL = μ − Kσ √{[λ/(2 − λ)][1 − (1 − λ)^(2i)]}
We note that the limits are dependent on the value of i, the sample number, and so
have to be calculated for each sample individually. However, when i becomes large,
the quantity within the square brackets inside the radical sign approaches 1.0, and the limits tend to reach the steady-state value:
UCL, LCL = μ ± Kσ √[λ/(2 − λ)]
If the value of K is taken as 3, for simplicity, we will have 3-sigma limits. In the
example above, for instance, the 3-sigma control limits for the second sample, when
i = 2, for the case when λ = 0.2, are given by:
UCL = 10 + 3(1) √{[0.2/(2 − 0.2)][1 − (1 − 0.2)⁴]} = 10.77
CL = 10
LCL = 10 − 3(1) √{[0.2/(2 − 0.2)][1 − (1 − 0.2)⁴]} = 9.23
The control limits for the steady-state condition, when λ = 0.2, are given by:
UCL = 10 + 3(1) √[0.2/(2 − 0.2)] = 11.0
CL = 10
LCL = 10 − 3(1) √[0.2/(2 − 0.2)] = 9.0
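The exact and steady-state limits can also be checked with a few lines of Python (a sketch of ours, using the formulas above; the function name is illustrative):

from math import sqrt

def ewma_limits(mu, sigma, lam, i, K=3.0):
    """K-sigma EWMA limits for sample number i; for large i these approach the
    steady-state limits mu +/- K*sigma*sqrt(lam/(2 - lam))."""
    half_width = K * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return mu - half_width, mu, mu + half_width

# ewma_limits(10, 1, 0.2, 2)   -> about (9.23, 10, 10.77)
# ewma_limits(10, 1, 0.2, 200) -> about (9.0, 10, 11.0), the steady state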
Montgomery (2001a) recommends the use of K between 2.6 and 2.8 for λ ≤ 0.1. A
practical approach seems to be to use K = 3 with a starting value of λ = 0.5. Based on
experience, the value of λ can be increased or decreased depending on how well small
changes need to be detected. Crowder (1989) provides a step-by-step procedure for
designing an EWMA chart of desired sensitivity through the optimal choice of λ and K. From this study, it appears that an EWMA chart with K = 3 and λ = 0.5 would be a good, middle-of-the-road chart useful for many occasions. It can be fine-tuned to
whatever sensitivity the situation requires.
As pointed out earlier, the EWMA chart will normally be used where the sample
size is one, in situations where the chart for individuals (X-chart) would other-
wise be used. The EWMA is an average, so it tends to make the chart robust with
respect to the normality of the population, whereas the Shewhart chart, with a
sample size of one, cannot claim such robustness. As mentioned, the EWMA chart
can also be used with the sample average X̄ when n > 1. The xi from sample i will be replaced by X̄i, and σ in the expressions for the limits will be replaced by σX̄ = σ/√n. Charts have also been developed for controlling the process standard deviation.
When a process produces products for short orders—as happens in many job shops,
where only a few units of a part number are produced per order—there will not be
enough data for computing the control limits from any one part number. Recall, it is
recommended to use at least 25 subgroups from an in-control process for calculating
the limits for the X- and R-charts. If there are not enough subgroups, the limits cal-
culated will have considerable error in them. Therefore, assignable causes could escape
detection, or additional false alarms could be generated.
Even in a mass-production environment, if control charting is desired in the early
stages of production when not enough units of product have been produced, there will
not be an adequate number of subgroups from which to compute the limits. Hillier
(1969) estimated the changes in the false-alarm probability of the conventional con-
trol charts (with 3-sigma limits and n = 5) when the limits are computed from a
small number of subgroups. The false-alarm probability of the X-chart that equals
0.004 when the limits are computed from 25 subgroups increases to 0.0067 when
the limits are computed from 10 subgroups. (The false-alarm probability equals the
theoretical value of 0.0027 only when the limits are computed from an infinite, or a
very large, number of subgroups.) Correspondingly, the false-alarm probability for the
R-chart increases from 0.0066 to 0.0102. Hillier suggested a remedy, which amounts
to computing the limits in two stages. In the first stage, when only a small number of
subgroups are available, modified values of the constants A2, D 3, and D4, modified to
account for the large errors in the estimates of the parameters, are used. In the second
stage, the limits are updated progressively as more subgroups become available.
The solution for the job-shop situation, however, lies in combining data from simi-
lar products (i.e., products/part-numbers belonging to the same family or of a similar
design) and creating a data set that reflects the behavior of the process, not of the
individual products. The resulting data set should be such that the data can be con-
sidered as coming from one homogeneous population. For example, suppose that
a machining center produces flanges of different sizes, and suppose that the pitch
diameter of the hole centers is being controlled. Flanges in any one size may not
provide enough data to calculate the limits, so the data from flanges of many dif-
ferent sizes are grouped together to form a larger number of observations. The mea-
surement plotted is not the diameter itself, but rather the deviation of the observed
diameter from the target or the nominal diameter. The parameters of the population
of deviations will be controlled. This is the concept employed in the “deviation from
the nominal” (DNOM) chart.
5.3.4.1 The DNOM Chart For the DNOM chart, the statistic plotted for the i-th
sample is the deviation of the sample average from the nominal, or target:
X̄i − Ti
The units within any sample average X̄i should be from the same product or part number and should have the same nominal value. If the Xi values are normally distributed and the process is centered, then

(X̄i − Ti)/(σi/√n) ~ N(0, 1)
where σi is the standard deviation of the individual observations in the i-th sample.
If we make the assumption that σi is a constant = σ for all i, then a pooled estimate
for σ can be obtained as:
S = √(ΣSi²/m)
where the values of Si are the sample standard deviations from the m samples. Then,
as described by Farnum (1992),
(X̄i − Ti)/(S/√n) ≈ N(0, 1)

P[−3 ≤ (X̄i − Ti)/(S/√n) ≤ 3] = 0.9973

P[−3S/√n ≤ (X̄i − Ti) ≤ 3S/√n] = 0.9973
Therefore, the 3-sigma control limits for the "deviation" (X̄i − Ti) are:

UCL = 3S/√n
CL = 0
LCL = −3S/√n
The control procedure is to calculate the averages of the observations in each subgroup,
the deviation of the averages from their respective targets, and to plot these deviations on
a chart with the limit lines drawn at values determined by the formulas above. The pooled
standard deviation S in the formulas is calculated according to the formula given earlier.
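A minimal sketch (ours) of this control procedure, assuming equal subgroup sizes; the function name and arguments are illustrative.

from math import sqrt

def dnom_chart(subgroups, targets):
    """Deviations (Xbar_i - T_i) and 3-sigma DNOM limits for a list of equal-size
    subgroups and their corresponding nominal (target) values."""
    n = len(subgroups[0])
    m = len(subgroups)
    means = [sum(g) / n for g in subgroups]
    variances = [sum((x - mu) ** 2 for x in g) / (n - 1) for g, mu in zip(subgroups, means)]
    s_pooled = sqrt(sum(variances) / m)                  # S = sqrt(sum(Si^2)/m)
    deviations = [mu - t for mu, t in zip(means, targets)]
    half_width = 3 * s_pooled / sqrt(n)
    return deviations, (-half_width, 0.0, half_width)    # points, (LCL, CL, UCL)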
Example 5.10
The following example has been adapted from Farnum (1992). The data, shown in
Table 5.9, come from different part numbers, each having a different nominal value.
The nominal values, or Ti values, are shown in Column 7. The X i values of the
sample averages and the si values of the sample standard deviations, calculated for
each subgroup, are shown in Columns 8 and 9, respectively. A control chart is to be
prepared to verify if the process that generated the data is in-control.
Solution
We use a DNOM chart. The deviations of the X values from their corresponding
nominal are shown in Table 5.9 under Column 12.
The pooled standard deviation is calculated as:
S = √(ΣSi²/m) = √(0.0000923/20) = 0.00214
[Figure 5.16: the DNOM chart, with the deviations X̄ − T plotted against sample number, CL = 0, and limits at ±0.00287.]
UCL = 3(0.00214)/√5 = 0.00287

LCL = −3(0.00214)/√5 = −0.00287
The deviations are plotted on the chart in Figure 5.16. There is one value of the
DNOM below the lower limit, indicating that the process is not-in-control. The
process also seems to be trending down after the 15th sample. The process is not
producing the measurements at consistently the same level according to this chart.
5.3.4.2 The Standardized DNOM Chart In the above DNOM chart, an assumption
was made that the standard deviations of the measurements in each subgroup (i.e.,
in each part number), were the same and equal to σ. This assumption, however, may
not always be true. Farnum (1992) suggests it is more reasonable to assume that the
coefficient of variation (CV) = σi /Ti is a constant. This latter assumption means that,
for example, if we are measuring the diameters of flanges of different sizes, then the
amount of variability is dependent on the size, with the larger flanges having larger
variability and the smaller flanges having smaller variability. An estimate for the
pooled coefficient of variation CV is obtained as:
CV = √[(1/m) Σi (Si/Ti)²]
where Si /Ti is the estimate of the CV from individual subgroups. Then, according to
Farnum (1992):
(X̄i − Ti)/[(CV)Ti/√n] ≈ N(0, 1)

and

P[−3 ≤ (X̄i − Ti)/((CV)Ti/√n) ≤ 3] = 0.9973

P[−3(CV)Ti/√n ≤ X̄i − Ti ≤ 3(CV)Ti/√n] = 0.9973

P[−3(CV)/√n ≤ (X̄i − Ti)/Ti ≤ 3(CV)/√n] = 0.9973

P[−3(CV)/√n ≤ X̄i/Ti − 1 ≤ 3(CV)/√n] = 0.9973

P[1 − 3(CV)/√n ≤ X̄i/Ti ≤ 1 + 3(CV)/√n] = 0.9973
Thus, if we plot the statistic X i /Ti, which is the ratio of the individual sample aver-
age to the corresponding target, the 3-sigma control limits will be:
UCL = 1 + 3(CV)/√n
CL = 1
LCL = 1 − 3(CV)/√n
where (CV) is the pooled coefficient of variation as defined above.
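A corresponding sketch (ours) for the standardized DNOM chart, which plots X̄i/Ti against limits 1 ± 3(CV)/√n; the function name and arguments are illustrative.

from math import sqrt

def standardized_dnom_chart(subgroups, targets):
    """Ratios Xbar_i/T_i and 3-sigma limits based on the pooled coefficient of variation."""
    n = len(subgroups[0])
    m = len(subgroups)
    means = [sum(g) / n for g in subgroups]
    stdevs = [sqrt(sum((x - mu) ** 2 for x in g) / (n - 1)) for g, mu in zip(subgroups, means)]
    cv = sqrt(sum((s / t) ** 2 for s, t in zip(stdevs, targets)) / m)   # pooled CV
    ratios = [mu / t for mu, t in zip(means, targets)]
    return ratios, (1 - 3 * cv / sqrt(n), 1.0, 1 + 3 * cv / sqrt(n))    # points, (LCL, CL, UCL)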
For the data in Table 5.9, the pooled CV is calculated from Column 13 of the table as:
(CV) = √(0.05002/20) = √0.0025 = 0.05
Then,
UCL = 1 + 3(0.05)/√5 = 1.0671

LCL = 1 − 3(0.05)/√5 = 0.9329
The control chart is shown in Figure 5.17.
[Standardized DNOM chart: X̄/T plotted against sample number, with CL = 1.0, UCL = 1.0671, LCL = 0.9329.]
Figure 5.17 Standardized DNOM chart for the data in Table 5.9.
None of the values are outside the limits in the standardized DNOM chart, indi-
cating that the process is in-control. A possible explanation for the difference in the
results of the two types of DNOM charts for the same data may be that the data
in Table 5.9 do not strictly follow the assumption that the standard deviation of the
measurement remains constant from one part number to another.
This can be verified by plotting the sample standard deviations of the samples
against corresponding target values as in Figure 5.18. We find that the assumption of
equal standard deviation is not quite true in this case; the graph shows an increasing
trend in the value of Si with an increase in target value. In this case, it may be more
appropriate to use the standardized DNOM chart.
Therefore, it may make sense to verify, before using a DNOM chart, if the assump-
tion of constant standard deviation is true. If the standard deviation is constant over
[Scatter plot of the subgroup standard deviations Si (0.000 to 0.005) against their target values Ti (0.00 to 0.10), showing Si increasing with Ti.]
Figure 5.18 Plot of the standard deviation of the readings from different part numbers in Table 5.9.
the sizes, a simple DNOM chart can be used; otherwise, the standardized DNOM
chart should be used.
Process capability was defined in Chapter 4 as the ability of a process to meet the
customer-given specifications. This has to be assessed after the process is brought in-
control. The capability indices are used to quantify this capability of a process. The
indices Cp and Cpk—the basic indices that are popular in industry—were defined and
their use was explained in Chapter 4. In this chapter, we introduce another such index
Cpm, which is claimed by its proponents to have some superior properties compared to
those of Cp and Cpk.
The C pm index was first introduced by Chan, Cheng, and Spiring (1988) and is
defined as:
Cpm = (USL − LSL)/(6σ′)

where σ′ = √[Σ(Xi − T)²/n], T is the target for the process, and USL and LSL are
the upper and lower specification limits, respectively. This definition of Cpm closely
resembles that of Cp but the difference, an important difference, is in the quantity used
in the denominator.
Figure 5.19 shows the difference between the usual process standard deviation σ and the σ′ used in the definition of Cpm. [Figure 5.19: two distributions marked with LSL, process center, target, and USL; σ measures variability about the mean, σ′ measures variability about the target.] Whereas the usual process standard deviation σ measures the variability of the individual values from the process mean, σ′ measures
the variability of the individual values from the target. It can be shown that:
Cpm = Cp / √(1 + V²),
where V = (T − μ)/σ, and μ and σ are the process mean and process standard deviation,
respectively (see Exercise 5.14 and the solution manual for the proof).
Example 5.11
Calculate C p, C pk and C pm for the process with the following data: USL = 30, LSL =
10, T = 20, process mean μ = 22, and process standard deviation σ = 3.
Solution
V = (20 − 22)/3 = −2/3

Cp = (30 − 10)/(6 × 3) = 20/18 = 1.11

Cpk = 8/9 = 0.88

Cpm = 1.11/√(1 + (2/3)²) = 0.93
We see that in this example the value of Cpm falls between the values of Cp and Cpk, but this need not always be the case. The following comparison of the three indices illustrates the differences among them.
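The three indices are easy to compute together; the short sketch below (ours, not from the text) reproduces the numbers of Example 5.11 up to rounding.

from math import sqrt

def capability_indices(usl, lsl, target, mu, sigma):
    """Point values of Cp, Cpk, and Cpm."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    v = (target - mu) / sigma
    cpm = cp / sqrt(1 + v ** 2)
    return cp, cpk, cpm

# capability_indices(30, 10, 20, 22, 3) -> roughly (1.11, 0.89, 0.92)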
The major advantage claimed for the C pm index is that it does not depend on the
availability of the usual two-sided tolerance limits, which are dubbed the “goal-
post” limits (Bothe 1997). The goal-post limits imply that no loss is incurred
as long as the characteristic falls within these limits, whereas according to the
Taguchi loss function the loss is zero only when the characteristic is on target
and the loss increases quadratically when the values fall away from the target.
Construction of the C pm index follows this loss function philosophy. The value
of C pm becomes smaller and smaller as the process mean moves farther from the
target, and maximizing the value of the C pm amounts to minimizing the distance
of the process mean from the target, and so minimizing the loss function. The
value of the Cpm also increases when the process variability, that is, the variability about
the mean, decreases. So, either a decrease in variability or a decrease in the distance between the process center and the target will increase the value of Cpm.
A good comparison of the three indices is provided by Kotz and Johnson (1993).
Their comparison is in the context of using the indices where the usual bilateral (goal-
post) tolerance is applicable. If the process is centered—that is, the process center μ is
equal to the target T, which is at the midpoint of the specification limits—then Cp =
Cpk = Cpm. If the process is not centered, then Cpm and Cpk will be different from Cp,
and both will be smaller than Cp. The Cpk and the Cpm both evaluate the lack of center-
ing of a process in addition to the variability, although they follow different approaches.
The first thing to note is that Cpm will never have a negative value, whereas Cpk can
assume negative values if the process center is outside the specification limits. Second,
Cpk is calculated from the specification limits, and there is an implied assumption that
the target is at the center of the specifications. On the other hand, the calculation
of Cpm does not use the specification limits but instead uses a specified value for the
target. If the target happens to be at the center of the specifications, as it happens to
be in the majority of cases, then the Cpm and Cpk are comparable. Both indices will
decrease in value when the process variability increases or when the process mean moves
away from the target. When the target does not fall at the center of the specifications,
these two indices behave totally differently.
An example (from Boyles 1992) that assumes the usual two-sided specification
with the target at the center of the specifications is shown in Figure 5.20. In this fig-
ure, all three processes (A, B, and C) have the same specifications and target but have
different process means and standard deviations.
Looking at Process A, when the process mean is at the center of the specification,
which is also the target, the values of C p, C pk, and C pm are all the same. In Processes
B and C, when the process standard deviation decreases, the C p values increase, but
[Figure 5.20: Processes A, B, and C share the specifications LSL = 35, USL = 65 and the target 50, with process means at 50, 57.5, and 61.25, respectively.]
As mentioned in Chapter 4, the value of C p or Cpk calculated from the data using the
following formulas is only an estimate, or a point estimate, for the true population
value of the index. (The "hats" on Ĉp, μ̂, and σ̂ indicate that they are estimates.)

Ĉp = (USL − LSL)/(6σ̂)

Ĉpk = min[(USL − μ̂), (μ̂ − LSL)]/(3σ̂)
The estimate will have a sampling error; that is, for the same population, different
samples will not give exactly the same value for the index. If the estimate is made
from a large size sample (n ≥ 100), however, the research has shown that the error
can be considered negligible. Then, for all practical purposes, we can use the estimate
as the population value. The simulation study reported in Krishnamoorthi, Koritala,
and Jurs (2009) supports this statement. If, for some reason, the estimate must be
made from a smaller sample, the point estimate will not be reliable and we need to
create confidence intervals (CIs) to estimate the true value of the index. The following
formulas, taken from Kotz and Johnson (1993), provide the confidence intervals and
confidence bounds for Cp:
100(1 − α)% CI for Cp:  [ √(χ²n−1, 1−α/2 / (n − 1)) Ĉp ,  √(χ²n−1, α/2 / (n − 1)) Ĉp ]
100(1 − α)% upper confidence bound for Cp:  √(χ²n−1, α / (n − 1)) Ĉp

100(1 − α)% lower confidence bound for Cp:  √(χ²n−1, 1−α / (n − 1)) Ĉp
In the formulas above, Ĉp is the point estimate for Cp, and χ²ν, α is such that P(χ²ν > χ²ν, α) = α.
Example 5.12
A sample of 15 gear blanks gave an average thickness of 1.245 in. and a standard
deviation of 0.006. The specification for the thickness is 1.25 ± 0.015 in.
Calculate an estimate for C p.
Calculate a 95% CI for C p.
Calculate a 95% lower bound for C p.
Solution
Ĉp = (1.265 − 1.235)/(6 × 0.006) = 0.83
For calculating a 95% CI, α/2 = 0.025 and (n − 1) = 14. From the χ² tables, Table A.3 in the Appendix, χ²14, 0.025 = 26.12 and χ²14, 0.975 = 5.63. Thus, the 95% CI for
C p is:
[ √(5.63/14) × 0.83 ,  √(26.12/14) × 0.83 ] = [0.526, 1.134]
For calculating a 95% lower bound, α = 0.05, 1 − α = 0.95, and χ²14, 0.95 = 6.57. Therefore, the 95% lower bound is:
√(6.57/14) × 0.83 = 0.57
We can make the statement that the C p for this process is not less than 0.57 at
95% confidence.
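The interval of Example 5.12 can be reproduced with scipy (a sketch of ours, not from the text); note that scipy.stats.chi2.ppf returns lower-tail quantiles, so the upper-α point χ²ν, α of the text corresponds to chi2.ppf(1 − α, ν).

from math import sqrt
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, alpha=0.05):
    """Two-sided 100(1 - alpha)% confidence interval for Cp."""
    df = n - 1
    lower = cp_hat * sqrt(chi2.ppf(alpha / 2, df) / df)       # the chi^2_{n-1, 1-alpha/2} point
    upper = cp_hat * sqrt(chi2.ppf(1 - alpha / 2, df) / df)   # the chi^2_{n-1, alpha/2} point
    return lower, upper

# cp_confidence_interval(0.83, 15) -> approximately (0.53, 1.13), as in Example 5.12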
The CI formula for the Cpk index is a long expression and is not provided here. The
expression becomes long because, in estimating Cpk, two parameters—the mean and
the standard deviation of the process—are estimated. These expressions for CIs are
also known to be very conservative because of assumptions and approximations made
to err on the safe side. For example, a simulation study by Ghandour (2004) estimated
that the above formula for the CI for Cp gives intervals that are seven times wider than
those that are empirically derived.
5.4.4 Motorola’s 6σ Capability
In the last chapter, and again in this chapter, we have studied how the capability indi-
ces Cp, Cpk, and Cpm are used to quantify the capabilities of processes, or their ability
to produce products that meet customers’ specifications. We will now discuss another
way of measuring process capability, in number of sigmas, which was originally pro-
posed by the Motorola Corporation, the electronics and communication manufacturer
headquartered in Schaumburg, Illinois. They introduced this method to measure the
capability of processes as part of their huge effort to reduce variability and improve
the capability of processes. They made a strategic decision in 1986 that they would
improve every one of their processes, whether it produces products or services, to have
6σ capability. According to their definition, a process is said to have 6σ capability if
the defects produced in the process are less than 3.4 defects per million opportunities
(DPMO). This definition for process capability, or process quality, provides a com-
mon yardstick whether quality is checked by measuring a characteristic of a product,
counting the number of blemishes on a surface, or by tallying the number of mistakes
made in delivering a service.
The genesis for the 6σ capability, however, is in the normal distribution used to model
many measurable characteristics and the areas lying under different regions of the nor-
mal curve. Figure 5.21 shows the percentages of a normal population lying within dif-
ferent regions of the curve. We can see that the percentage under the curve between
(μ − 6σ) and (μ + 6σ)—that is, the proportion of a normal population lying between 6σ
distance on either side of the mean—is 99.9999998%. This means that if a process is nor-
mally distributed and the specifications it is expected to meet are located at ± 6σ distance
from the center of the process, only 2 out of 10⁹ (or 0.002 ppm) units in such a population
would be outside the specification. Such a process is said to have 6σ (6-sigma) capability.
Notice that, in a process with 6σ capability (a 6σ process), assuming the process is
centered, the distance between the process center and one of the specification limits,
(USL-LSL)/2, called the half-spec, is equal to 6σ (see Figure 5.22a). Then, in a 6σ
process, the (half-spec)/σ = 6. In another process, if (half-spec)/σ = 5, then it will be
called the 5σ-process and so on.
[Normal curve marked from μ − 6σ to μ + 6σ, with 68.26% of the population lying within ±1σ and 95.44% within ±2σ.]
Figure 5.21 Proportions of a normal population under different regions of the curve.
[Processes between LSL and USL with capabilities of 6σ (3.4 ppm out of spec), 5σ (230 ppm), 4σ (6,200 ppm), 3σ (66,800 ppm), 2σ (308,700 ppm), and 1σ (697,700 ppm).]
Figure 5.22 (Continued) (b) Processes with capabilities designated by number of sigmas. (Out-of-spec quantities
calculated when process center is 1.5σ away from target.)
Coming back to the 6σ process, most processes do not have their mean exactly at
the center of the specification because processes tend to drift off center. Therefore,
Motorola provides for the fact that processes could be off as far as 1.5σ from the speci-
fication center, or target. A process with 6-sigma capability and its mean 1.5σ distance
off of the target will produce 99.99966% of the products within specification. That is,
the 6-sigma process will produce only 3.4 defects per million (DPM) of total units even
in the worst condition when the process center is 1.5σ distance away from the target.
Thus, a 6-sigma process with its center on target produces only 0.002 DPM and
will produce, at most, 3.4 DPM at the worst possible condition of the process center
being away from target by 1.5σ distance. Such a process was designated as the bench-
mark for minimum variability.
For the sake of uniformity, any process that produces 3.4 DPM or less, whether the
process produces a characteristic that can be measured or an attribute characteristic
that can be counted, is said to have 6-sigma capability. In the case of processes pro-
viding a service, the quality is measured by counting the number of occasions that a
“defective” service is provided out of all the opportunities for providing such a service.
Processes that are producing higher levels of defects will have smaller sigma capa-
bilities. Figure 5.22b shows process conditions with different sigma capabilities and
the proportions of defects produced in each case when the process center is off of the
specification center by 1.5σ. Table 5.10 summarizes the relationship between sigma
capabilities and the proportion of defectives produced.
The percentage yield and DPMO figures in Table 5.10 can be easily computed by
finding the probabilities under the normal curve inside and outside the specifications,
respectively, given the process center is 1.5σ away from the specification center. For
example, the percentage yield for a 3-sigma process is obtained as follows. If X ∼ N(μ, σ²) and the process mean has drifted 1.5σ toward the USL, then the USL is 1.5σ above the mean and the LSL is 4.5σ below it, so the yield is P(−4.5 ≤ Z ≤ 1.5) = Φ(1.5) − Φ(−4.5) ≈ 0.9332, or about 93.32%, which corresponds to roughly 66,800 defects per million opportunities.
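These yields and DPMO values can be computed directly from the normal distribution; the sketch below (ours, not from the text) assumes, as the text does, that the process mean is off target by 1.5σ.

from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities when the half-spec equals sigma_level*sigma
    and the process mean is off target by shift*sigma."""
    out_of_spec = norm.cdf(-(sigma_level - shift)) + norm.cdf(-(sigma_level + shift))
    return 1e6 * out_of_spec

# dpmo(3) -> about 66,800;  dpmo(6) -> about 3.4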
[Figure 5.23 plots DPMO on a log scale (1 to beyond 10K) against sigma capability (2 to 7). Common processes such as restaurant bills, doctors' prescription writing, payroll processing, order write-ups, journal vouchers, wire transfers, and airline baggage handling cluster around the average-company level; the purchased-material lot reject rate is about 233 ppm, best-in-class processes reach 3.4 ppm, and the domestic airline flight fatality rate is about 0.43 ppm.]
Figure 5.23 Examples of processes with different sigma capabilities. (From Motorola Corporation. 1992. Utilizing the
Six Steps to Six Sigma. Personal Notebook. SSG 102, Motorola University, Schaumburg, IL. Reprinted with permission of
Motorola Inc.)
The y-axis of the graph shows DPMO, and the x-axis shows the number of sigma
capabilities.
The Motorola figure further shows the average quality level of many common services, such as order write-ups, airline luggage handling, and so on, measured in the number of sigmas. The figure also shows that going from the "average" level
of 4-sigma capability to the level of 6-sigma capability means a quality improve-
ment, or defect reduction, on the order of better than 1000-fold from the original
condition.
The sigma measure is a useful way of measuring the quality produced by processes
when we want to create a baseline, set goals, and monitor progress as improvements
are achieved. Motorola followed a “six steps to 6-sigma” procedure (popularly known
as the “Six Sigma procedure”), which is a step-by-step methodology to improve pro-
cesses to the 6-sigma level of quality. More detail on the 6-sigma process as a system
for continuous improvement is provided in Chapter 9.
Before we leave this section, we want to make clear the terminology we use. We
have used the “6σ” or “6-sigma” to denote the capability or quality of a process. The
term “Six Sigma” is used in Chapter 9 to refer to the procedure for achieving the
6-sigma capability in processes. The term “Six-Sigma” is also used to name the meth-
odology used to create a system where “all” processes will have 6-sigma capability.
When the design of experiments was discussed in Chapter 3 for choosing product and
process parameters, the factorial experiment was introduced but the discussion was
limited to only those designs where the factors had only two levels. The methods for
analyzing data from experimental trials to calculate factor and interaction effects were
explained. A method to test whether the effects were significant using an approximate
95% confidence interval was included.
Another method, the analysis of variance (ANOVA) procedure, is often used for
analyzing data from experiments. Many computer programs use ANOVA to inter-
pret data and present results. The ANOVA procedure is used to decide if a significant
difference exists among the means of populations of outcomes generated by different
treatment combinations in an experiment.
The ANOVA method is explained here with reference to a factorial experiment
with two factors, A and B, with a levels in Factor A and b levels in Factor B. There are n replicates of the trials, so there are n observations in each
treatment combination. The data from the experimental trials are laid out in Table
5.11. This design includes an equal number of observations in the cells. Such designs
with an equal number of observations in each cell are called “balanced” designs.
ȳ… = (1/a) Σi ȳi.. = (1/b) Σj ȳ.j. is the overall average.

[Table 5.11 lays out the data for the two-factor experiment: the cell for level i of Factor A and level j of Factor B contains the n observations yij1, …, yijn with cell average ȳij.; the row averages are ȳi.., the column averages are ȳ.j., and the overall average is ȳ… .]
An observation in any cell in Table 5.11 can be considered as generated by the fol-
lowing model:

yijk = μ + αi + βj + (αβ)ij + εijk
where μ is the overall mean, αi is the effect caused by level i of Factor A, βj is the effect
caused by level j of Factor B, (αβ)ij is the effect of the interaction between Factors A
and B, and εijk is a random error introduced at each cell. These errors are assumed to
come from a normal distribution, N(0, σ2), independently, at each cell.
This model is known as the fixed-effect model, because the levels in the factors
are assumed to be the only possible levels in the factors. There could be a situation
in which the levels chosen in the design are random selections from several possible
levels available for the factors. In such situations, the factor and interaction effects
become random variables, and the analysis method becomes different. For models of
this latter kind, see Montgomery (2001b).
In the above model, we can substitute the overall mean and the factor effects with
their estimates from data as:
μ̂ = ȳ…
α̂i = (ȳi.. − ȳ…)
β̂j = (ȳ.j. − ȳ…)

The estimate for the interaction effect, (αβ̂)ij, is the remainder in the cell average after accounting for the overall mean and the factor effects:

(αβ̂)ij = {ȳij. − (ȳi.. − ȳ…) − (ȳ.j. − ȳ…) − ȳ…} = (ȳij. − ȳi.. − ȳ.j. + ȳ…)
The estimate for the error is the difference between the individual values and the
cell average:

ε̂ijk = (yijk − ȳij.)
Thus, the observation in a cell can be represented as being comprised of the com-
ponents as:

yijk = ȳ… + (ȳi.. − ȳ…) + (ȳ.j. − ȳ…) + (ȳij. − ȳi.. − ȳ.j. + ȳ…) + (yijk − ȳij.)
By squaring both sides and summing over all observations in all cells, it can be
shown, because sums of cross-products such as Σi Σj Σk (ȳi.. − ȳ…)(ȳ.j. − ȳ…) are
all equal to zero, that the following equation involving the sum of squares (SS) is true:
Σi Σj Σk (yijk − ȳ…)² = Σi Σj Σk (ȳi.. − ȳ…)² + Σi Σj Σk (ȳ.j. − ȳ…)²
                      + Σi Σj Σk (ȳij. − ȳi.. − ȳ.j. + ȳ…)²
                      + Σi Σj Σk (yijk − ȳij.)²

where each triple sum runs over i = 1, …, a; j = 1, …, b; and k = 1, …, n.
Therefore,
Σi Σj Σk (yijk − ȳ…)² = nb Σi (ȳi.. − ȳ…)² + na Σj (ȳ.j. − ȳ…)²
                      + n Σi Σj (ȳij. − ȳi.. − ȳ.j. + ȳ…)²
                      + Σi Σj Σk (yijk − ȳij.)²
We can write:
SST = SSA + SSB + SSAB + SSE
Each of the sums of squares (SS) in the above equation has associated with it a
certain number of degrees of freedom (df ) equal to the number of independent
squared terms that are summed to produce the sum of squares. Take, for example,
SSE = Σi Σj Σk (yijk − ȳij.)². Although abn squared terms go into this sum
of squares, there are only ab(n − 1) independent terms in this sum: in each cell, even though n terms are summed, only (n − 1) of them are independent, because all the terms are calculated using the cell average ȳij., which itself is an estimate
calculated out of the same cell observations. Therefore, there is only a total of ab(n − 1)
independent terms in the error sum of squares. We say that the SSE has ab(n − 1)
degrees of freedom. Similarly, the SSA, SSB, and SSAB each have (a − 1), (b − 1), and
(a − 1)(b − 1) degrees of freedom, respectively. The sums of squares and the associated
degrees of freedom are listed in Table 5.12, which is called an ANOVA table.
The mean square (MS) column in the ANOVA table gives the sum of squares
divided by the corresponding degrees of freedom. The ANOVA table, in effect, shows
the total variability in the data divided into contributions coming from different
sources. The mean squares represent the standardized contributions of variability by
different sources, standardized for the number of degrees of freedom within each
source.
The mean square error (MSE) represents the variability in the data solely due to
“experimental error,” or variability not explained by the factors in the experiment,
which can be considered as coming from natural random noise. If the variability from
a factor is much larger than the variability from experimental error, we can then con-
clude that factor to be significant. Thus, the ratio of each of the mean squares to
the MSE is taken, and if it is “large” for any source, we then conclude that source
to be significant. It can be proved, assuming that the populations of observations
generated by the treatment combinations are all normally distributed, that the ratio
MS (Source)/MSE has the Fv1,v2 distribution, where ν1 and ν2 are the degrees of
freedom for the numerator and denominator, respectively, of the F distribution (Hogg
and Ledolter 1987). If the computed value of the F statistic for a source is larger than
the critical value Fα,ν1,ν2, then that source is significant; otherwise, that source is not significant. The critical values of the F distribution for α = 0.01 and 0.05 are available in Table A.6 in the Appendix.

TABLE 5.12 ANOVA Table
SOURCE           DF                SS                                         MS                       F
Factor A         (a − 1)           nb Σi (ȳi.. − ȳ…)²                         SSA/(a − 1)              MSA/MSE
Factor B         (b − 1)           na Σj (ȳ.j. − ȳ…)²                         SSB/(b − 1)              MSB/MSE
Interaction AB   (a − 1)(b − 1)    n Σi Σj (ȳij. − ȳi.. − ȳ.j. + ȳ…)²         SSAB/[(a − 1)(b − 1)]    MSAB/MSE
Error            ab(n − 1)         Σi Σj Σk (yijk − ȳij.)²                    SSE/[ab(n − 1)]
Total            abn − 1           Σi Σj Σk (yijk − ȳ…)²
This is the basic principle of the ANOVA method for determining the significance of a
source. We will illustrate the use of this method with an example. First, however, we note that there are expressions for computing the sums of squares that are equivalent to those given in Table 5.12 but are computationally more efficient. The following equivalent expressions for the sums of squares can easily be proved using algebra:
SST = Σi Σj Σk (yijk − ȳ…)² = Σi Σj Σk yijk² − abn(ȳ…)²

SSA = bn Σi (ȳi.. − ȳ…)² = bn Σi (ȳi..)² − abn(ȳ…)²

SSB = an Σj (ȳ.j. − ȳ…)² = an Σj (ȳ.j.)² − abn(ȳ…)²

SSAB = n Σi Σj (ȳij. − ȳi.. − ȳ.j. + ȳ…)² = n Σi Σj (ȳij.)² − abn(ȳ…)² − SSA − SSB

SSE = Σi Σj Σk (yijk − ȳij.)² = SST − SSA − SSB − SSAB
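A compact numpy sketch (ours, not from the text) of the balanced two-factor computation, with the data arranged as an a × b × n array; the function name is illustrative.

import numpy as np

def two_way_anova_ss(y):
    """Sums of squares for a balanced two-factor factorial; y has shape (a, b, n)."""
    a, b, n = y.shape
    grand = y.mean()
    ss_t = ((y - grand) ** 2).sum()
    ss_a = n * b * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_b = n * a * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
    cell_means = y.mean(axis=2)
    ss_ab = n * ((cell_means - grand) ** 2).sum() - ss_a - ss_b
    ss_e = ss_t - ss_a - ss_b - ss_ab
    return ss_a, ss_b, ss_ab, ss_e, ss_t

The mean squares and F ratios then follow by dividing these sums of squares by the degrees of freedom (a − 1), (b − 1), (a − 1)(b − 1), and ab(n − 1), as in Table 5.12.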
Example 5.13
Analyze the data from the 22 experiment for designing the lawnmower product
parameters in Example 3.7 of Chapter 3 using the ANOVA method, and draw
conclusions.
Solution
The data from Example 3.7 are reproduced in Table 5.13, and the main and interaction effects are recalculated using the formula: Effect = (Contrast)/2^(k−1) = (Contrast)/2.
The contrasts and effects are shown in the rows at the bottom of the table of
contrast coefficients. The data are rearranged in Table 5.14 to facilitate calculation
of the sums of squares, and we get:
TABLE 5.14 Data from a 2² Experiment for Designing Lawnmower Product Parameters
                                    Factor B (Deck Height)
                                    5 IN.                  7 IN.                  AVERAGE
Factor A (blade angle)   12°        72, 74 (ȳ11. = 73)     59, 61 (ȳ12. = 60)     ȳ1.. = 66.5
The calculated sums of squares are entered in the ANOVA table (Table 5.15).
From the completed ANOVA table, we see that Factor A is not significant, whereas
Factor B and Interaction AB are significant. This is the same conclusion reached
using the approximate 95% CI for the effects in Example 3.7.
There is also a formula to calculate the sums of squares directly from the table of
contrast coefficients Table 5.13:
where n is the number of observations in each cell and the number of contrast coef-
ficients is the number of signs (+ or −) under each factor.
For the lawnmower example, the sums of squares calculated using the above
formula are seen to agree with those in the ANOVA table.
TABLE 5.15 ANOVA Table for the 2² Experiment for Lawnmower Design
SOURCE           SS      DF    MS      F                   F0.05,ν1,ν2        SIGNIFICANT?
Factor A 2 1 2 2/1.5 = 1.33 F0.05,1,4 = 7.71 No
Factor B 242 1 242 242/1.5 = 161.3 F0.05,1,4 = 7.71 Yes
Interaction AB 1,152 1 1152 1152/1.5 = 768 F0.05,1,4 = 7.71 Yes
Error 6 4 1.5
Total 1,402
When the idea of using designed experiments was introduced in Chapter 3, the two
most important designs, the 22 and the 23, were explained with examples. The objec-
tive then was to emphasize the need for experimentation to select the product and
process parameters, and to explain the fundamentals of designed experiments and
related terminology using simple designs. Some additional topics on designed experi-
ments will now be discussed to make some additional designs available for occasions
when the simpler ones are not adequate. The designs discussed below are useful when
several (or many) factors are influencing a response and the ones that are most impor-
tant in terms of their effect on the response have to be culled out. These designs are
called the “screening designs” because they are used to screen out less influential fac-
tors and select the important ones. The important ones will possibly be included in
another experiment, following the screening experiment, to determine the best com-
bination of their levels to yield the desired results in the response. Specifically, we will
discuss the 2k designs, involving k factors, k > 3, each at two levels. We will start with
an example of a 24 design.
We want to point out that although we are concentrating here on the designs in
which factors are considered at only two levels, on some occasions these two lev-
els may not be adequate. The two-level designs will be appropriate if the response
changes between the levels of a factor linearly. If the response is nonlinear, with respect
to the factor levels, then the two-level designs will not be adequate. In these situa-
tions, more than two levels must be considered for a factor, and designs such as 2 × 3
factorials (two factors at three levels each) or 3k (k factors, each at three levels) designs
must be considered. Interested readers should refer to books on the design of experi-
ments, such as Montgomery (2001b), listed in the references at the end of this chapter.
5.5.3 The 2⁴ Design
In the 2⁴ design, there are four factors of interest, each being considered at two levels. For example, in the 2³ design that was used for process design (Case Study 3.1), if "Iron Temp" is included as the fourth factor, a 2⁴ factorial design will result, which can be represented graphically as shown in Figure 5.24. The design has a total of 16
treatment combinations, as indicated by the dark dots at the corners.
The major problem with a 2ᵏ design is that when k increases, the number of trials
needed increases exponentially—even for a single replication of the experiment. If
multiple replications are needed to estimate the experimental error, the number of
trials becomes impractical. For example, if k = 5 and two replicates are needed, the
required number of trials is 64, which is an enormous number considering the time
and other resource constraints for real processes. Therefore, statisticians have devised
methods to eliminate the need for a second replicate to estimate the error. They have
also devised fractional factorial designs, which call for running only a fraction of the
total number of trials needed in a full factorial, with the fraction being chosen in such
[Figure 5.24: the 2⁴ design drawn as two cubes, one for each level (Low, High) of Iron Temp, with axes Oven Temp, Vent, and Sand and the 16 treatment combinations numbered 1 through 16 at the corners.]
a way that most useful information from the experimental results can be extracted
while only some noncritical information is sacrificed.
In an effort to avoid too many trials, statisticians try to make do with only one trial at
each treatment combination when k is large (k > 3). When there are no replicates, the
SSE is zero. To test for the significance of effects, the sum of squares from higher-order
interactions, higher than two-factor interactions, are pooled together and used as the
error sum of squares. This is done assuming that interactions higher than two-factor
interactions do not exist, so the sums of squares of higher-order interactions are, in
fact, experimental error. This seems to be a somewhat artificial way of obtaining the
estimate for experimental error, and if any higher-order interactions do exist, then
the results of the significance tests will become questionable. The approach preferred
when multiple replicates are not available in a 2k experiment is to use the normal prob-
ability plot (Hogg and Ledolter 1987).
If none of the factor effects is significant, the observations from the treatment com-
binations should all come from a single normal distribution with a common mean and variance.
Then, the factor effects, which are differences of the averages of normally distributed
observations, should also all be normally distributed. If a normal probability plot (as
described in Chapter 2) is made of the factor effects, then all the factor effects should
plot on a straight line. If an effect is significant, it will plot as an outlier and can
then be identified as such.
Example 5.14
[Table 5.16 (not reproduced here) shows the data and the calculation of effects for the 2^4 factorial experiment; its bottom row lists the calculated effects: 1.36, 2.34, 0.71, 1.94, 1.04, −1.04, 1.94, −1.16, 0.51, −0.71, −4.71, 4.21, 1.04, −1.99, −1.24 (Ave. = 1.68).]
Source: Data from an exercise in Chapter 15 of Walpole et al. 2002. Probability and Statistics for Engineers and Scientists. 7th ed. Upper Saddle River, NJ: Prentice Hall. Used with permission of Pearson Education Inc.
Figure 5.25 Normal probability plot of effects of the 2^4 factorial experiment shown in Table 5.16.
Solution
The effects of the main factors and interactions of the 2^4 experiment are calculated
in Table 5.16 and shown in the bottom row. The effects are calculated as:
Effect = Contrast / 2^(k−1) = Contrast / 8
A normal probability plot of the effects was made using Minitab software, and
the resulting graph is shown in Figure 5.25. The normal probability plot shows that
the effect due to Interaction ABC is an outlier and is therefore considered to be
significant. No other effect appears to be significant.
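For readers who want to reproduce this kind of check without Minitab, a sketch in Python is shown below. It uses the fifteen effect values listed above from Table 5.16 (the assignment of the values to specific factors and interactions is not repeated here); scipy and matplotlib are our choices for the plotting step and are not part of the book's example.

# A sketch of the normal probability plot of effects used in Example 5.14.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

effects = np.array([1.36, 2.34, 0.71, 1.94, 1.04, -1.04, 1.94, -1.16,
                    0.51, -0.71, -4.71, 4.21, 1.04, -1.99, -1.24])

# probplot orders the effects and pairs them with normal quantiles; effects
# lying off the straight line are judged to be significant.
(osm, osr), (slope, intercept, r) = stats.probplot(effects, dist="norm")
plt.plot(osm, osr, "o")                      # ordered effects vs. quantiles
plt.plot(osm, slope * osm + intercept, "-")  # reference line
plt.xlabel("Normal quantiles")
plt.ylabel("Effect")
plt.title("Normal probability plot of effects")
plt.show()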
As already mentioned, in this approach we perform only a fraction of the number of tri-
als needed for a full factorial. Take, for example, the full 2^3 factorial design, which has
eight treatment combinations. Suppose that only four trials (one-half of the number of
trials in the full factorial) can be performed in this experiment because of resource con-
straints. The question then arises: Which four of the treatment combinations should be
chosen to obtain the best possible information on factor and interaction effects?
One choice is the four treatment combinations represented by the four corners
marked with large dots on the cube in Figure 5.26. This intuitively seems to be a good
“representative” selection of four treatment combinations out of the eight. Also, this
selection will result in a full 2^2 factorial design when collapsed on any one of the three
factors. This is an advantage, because when one of the factors is found not to be sig-
nificant, the design can be collapsed on that factor, the data at the corners aggregated,
and the resulting design analyzed as a 2^2 design. Figure 5.26 shows the 2^3 factorial
design collapsed on one factor, Factor A.
[Figure 5.26: The 2^3 design cube with corners (1), a, b, ab, c, ac, bc, and abc; the four treatment combinations forming the one-half fraction are marked, and the design is also shown collapsed on Factor A.]
The above choice of one-half fraction of the 2^3 design is designated as 2^(3−1). Note
that there is another half-fraction consisting of the four treatment combinations not
included in the first half-fraction. The first half-fraction is sometimes referred to as
the “principal fraction” of the 2^3 factorial design, and the second is called the “alter-
nate fraction.” The two half-fractions are shown in tabular form in Table 5.17a and
5.17b. Either of the fractional designs can be run when a one-half fraction is desired;
in terms of the information generated, the two are equal.
In the above selection of one-half fraction of a 2^3 design, we selected the treatment
combinations to be included in the fractional design guided by intuition. When the
number of factors becomes large, we need a formal procedure to select the treatment
combinations to make up the fractional design. Described below is a general algo-
rithm for generating fractional designs of 2^k factorials. We will illustrate the procedure
using it to generate the half-fraction of the 2^3 design, and we will indicate how it
extends to larger designs.
5.5.5.1 Generating the One-Half Fraction With reference to Table 5.17a, when we have
three factors and want a design with only four runs, we first write down the design
for a 2^2 factorial design (which has four runs) using the (−) and (+) signs under A and
B of the design columns, as shown. Then, we generate the column for Interaction AB
as the product of Columns A and B, and we assign Factor C to this column. In other
words, we make C = AB. We are replacing, or superimposing, Factor C onto the inter-
action effect AB. This gives the one-half fraction of the 2^3 factorial design.
In this design, for example, Run 1 will be made with Factors A and B at the low
level and C at the high level, and this run will be designated with the code c, according
to the coding convention. The other runs will all be coded accordingly, as shown in
the table. The other half of the factorial is generated in a similar fashion, except in this
case, Factor C is equated to −AB. The fraction indicated in Figure 5.26 with circled
treatment combinations can be seen to be the same as that shown in Table 5.17a.
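The construction just described can be sketched in a few lines of code. Under the convention above, the sketch writes down the 2^2 base design in A and B, sets C = AB for the principal fraction (the alternate fraction would use C = −AB), and prints the run codes, which come out as c, a, b, and abc.

# A sketch of generating the one-half fraction of the 2^3 design by C = AB.
import numpy as np

A = np.array([-1, 1, -1, 1])   # base 2^2 design in A and B (four runs)
B = np.array([-1, -1, 1, 1])
C = A * B                      # assign factor C to the AB interaction column

# Run codes: a letter appears when that factor is at its high (+) level;
# "(1)" would denote the run with every factor at the low level.
for a, b, c in zip(A, B, C):
    code = "".join(l for l, s in zip("abc", (a, b, c)) if s == 1) or "(1)"
    print(f"A={a:+d}  B={b:+d}  C={c:+d}  ->  {code}")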
5.5.5.2 Calculating the Effects To calculate the effects, we must first complete the cal-
culation columns of the above tables showing the columns of signs for the other inter-
action terms. The effects of the factors and interactions can then be calculated as:
Effect = Contrast / (N/2)
where N is the number of runs in the fraction (here N = 4, so the divisor is 2).
Thus, in the case of the principal fraction, the effect of Factor A is given by (a −
b − c + abc)/2. Notice, however, that this is also the effect of Interaction BC, because
the Columns A and BC are identical. In fact, the effect calculated is the sum of the
effect of Factor A and the Interaction effect BC. There is no way that the effect of
Factor A can be separated from the effect of Interaction BC in this design, and the
main effect of Factor A is said to be confounded with the Interaction BC. We write
A = BC. Similarly, it can be seen that B = AC and C = AB. Each of the main fac-
tors is confounded with a two-factor interaction. In this one-half fractional design,
however, the main factors are not confounded among themselves.
In the above fractional designs, the effects that are confounded with each other are
said to be the “alias” of each other. Thus, A is the alias of BC, B the alias of AC, and C
is the alias of AB. The basic relationship C = AB, which was used to generate the 2^(3−1)
design, is called the “generator” of the fractional design. Another equivalent way of
expressing this relationship that defined the one-half fraction is to multiply both sides
of the generator by C to obtain CC = ABC. Here, A, B, and C represent the column
vectors in the design matrix, and AA = BB = CC = I, where I is the identity vector
with (+) sign for each element. Therefore, C = AB is equivalent to I = ABC. The latter
expression is in a more generalized form and can be adopted for generating fractional
factorials of any 2^k design. Once this generator is identified, it is easy to obtain the
confounding pattern as seen below:
Generator : I = ABC
Multiply both sides of the generator by A, and obtain IA = A^2BC, which is equiva-
lent to A = BC. Multiply both sides of the generator by B, and obtain IB = AB^2C,
which is equivalent to B = AC. Multiply both sides of the generator by C, and obtain
IC = ABC^2, which is equivalent to C = AB. (A column multiplied by itself equals the
Identity vector.)
For the alternate fraction:
Generator : I = −ABC
Multiply both sides of the generator by A, and obtain IA = −A^2BC, which is equiv-
alent to A = −BC. Multiply both sides of the generator by B, and obtain IB = −AB^2C,
which is equivalent to B = −AC. Multiply both sides of the generator by C, and obtain
IC = −ABC^2, which is equivalent to C = −AB.
The above discussion was meant to explain the algorithm used to generate frac-
tional factorials. The key to the algorithm is the “generator,” which helps in identify-
ing the aliases or confounded factors.
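The alias algebra described above is easy to automate. The short sketch below multiplies the generator word into each effect and cancels repeated letters (since AA = BB = CC = I); it reproduces the confounding pattern A = BC, B = AC, and C = AB for the generator I = ABC, and also shows the aliases of the two-factor interactions.

# A sketch of finding aliases from a design generator by "word" multiplication.
def multiply(word1, word2):
    """Multiply two effect words; letters appearing twice cancel to I."""
    letters = set(word1) ^ set(word2)       # symmetric difference
    return "".join(sorted(letters)) or "I"

generator = "ABC"                           # from the defining relation I = ABC
for effect in ["A", "B", "C", "AB", "AC", "BC"]:
    print(f"{effect} = {multiply(effect, generator)}")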
5.5.6 Resolution of a Design
The resolution of a fractional factorial design describes its confounding pattern. The
2^(3−1) designs generated above are said to be of Resolution III, because one-factor main
effects are confounded with two-factor interactions, giving (1 + 2) = 3. Only when
higher-order interactions are confounded among themselves or with lower-order
interactions or main effects will the resolution number be large.
When lower-order interactions are confounded among themselves or with main effects,
the resolution number will be small. As we would not like to see the main effects or
two-factor interactions confounded among themselves, we would not prefer designs
with small resolution numbers. When the resolution of a design is large, which means
a higher-order interaction is confounded with a two-factor interaction or a main effect,
a calculated effect can be taken to be that of the two-factor interaction or of the main
effect, since the higher-order interactions can be assumed to be negligible.
In the two 2^(3−1) designs above, we have the main factors confounded with the two-factor
interactions. This is not good, as we would want to be able to identify the main effects and
two-factor interactions separately. For this reason, fractional designs are chosen only for 2^k
factorials with k > 3. The three-factor experiments are usually run as full factorials unless
the experimenter is concerned only about the main effects and not about the two-factor
interactions. The 2^3 factorial was chosen as an example in the above discussion because it
was convenient for explaining the fractioning procedure and the associated terminology.
In the next example, we will generate the 2^(4−1) design, a one-half fraction of the 2^4 factorial.
Example 5.15
Create a 2^(4−1) design that will have only eight runs, or half of the 16 runs needed in
the full 2^4 factorial.
Solution
As in the previous example, we first generate the design for the 2^3 factorial, which has
eight runs. Then, we assign the fourth factor to the column for the highest-order
interaction. The highest-order interaction is ABC, so we make D = ABC.
Then, we write out the ABC interaction column as shown in Table 5.18 and
make it Column D. Now the design is complete, and the codes for the responses
are assigned following the usual convention. For example, the response of the treat-
ment combination where factors A and D are at the higher levels is coded as ad, the
response from the treatment combination in which factors B and C are at high levels
is coded as bc, and so on.
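A short sketch of this construction follows. It writes the eight-run 2^3 design in A, B, and C, assigns D to the ABC column, and prints the run codes; the output lists the treatments (1), ad, bd, ab, cd, ac, bc, and abcd, matching the coding convention described above.

# A sketch of building the 2^(4-1) half fraction of Example 5.15 with D = ABC.
import numpy as np

A = np.tile([-1, 1], 4)                  # A alternates fastest
B = np.tile(np.repeat([-1, 1], 2), 2)
C = np.repeat([-1, 1], 4)
D = A * B * C                            # generator: D = ABC

for a, b, c, d in zip(A, B, C, D):
    code = "".join(l for l, s in zip("abcd", (a, b, c, d)) if s == 1) or "(1)"
    print(f"A={a:+d} B={b:+d} C={c:+d} D={d:+d} -> {code}")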
The defining relationship of the 2^(4−1) design is D = ABC, and the generator is
I = ABCD. The confounding pattern will be found as:
AI = A^2BCD ⇒ A = BCD
Similarly, B = ACD, C = ABD, and D = ABC; and the two-factor interactions are
confounded in pairs: AB = CD, AC = BD, and AD = BC.
[Figure 5.27: The 2^(4−1) design shown on two 2^3 cubes in A, B, and C, one for D at the low (−) level and one for D at the high (+) level; the eight treatment combinations in the fraction are marked with dark dots.]
The treatment combinations in the 2^(4−1) design are shown marked with a dark
dot on the graph of Figure 5.27. This design is of Resolution IV and is designated
as 2_IV^(4−1), because the main effects are confounded with three-factor interactions,
giving (1 + 3) = 4, and the two-factor interactions are confounded among them-
selves, giving (2 + 2) = 4. In determining the resolution number, we use the smallest
of the sums of the numerals designating the orders of the confounded effects.
Thus, the resolution number helps in understanding the confounding patterns in
a design. For example, we can understand that, in a design of Resolution III, the
main effects are not confounded among themselves. In fact, the main effects will not
be confounded among themselves if the resolution is at least III. Similarly, the main
effects will not be confounded with two-factor interactions if the resolution is at least
IV. This is how the designation of designs by resolution numbers helps the experi-
menter understand the merit of a design.
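The resolution can also be read off directly from the complete defining relation: it equals the length of the shortest "word" in that relation. The sketch below, which builds on the word-multiplication idea used earlier for aliases, computes this for the 2^(4−1) design above (I = ABCD, Resolution IV) and for the 2^(6−3) design discussed later (generators D = AB, E = AC, F = BC, i.e., words ABD, ACE, and BCF, Resolution III).

# A sketch: design resolution = length of the shortest word in the complete
# defining relation (all products of the generator words).
from itertools import combinations

def defining_words(generators):
    """All nontrivial words obtained by multiplying subsets of the generators."""
    words = set()
    for r in range(1, len(generators) + 1):
        for combo in combinations(generators, r):
            letters = set()
            for word in combo:
                letters ^= set(word)        # repeated letters cancel (AA = I)
            if letters:
                words.add("".join(sorted(letters)))
    return words

print(min(len(w) for w in defining_words(["ABCD"])))               # 4 -> Resolution IV
print(min(len(w) for w in defining_words(["ABD", "ACE", "BCF"])))  # 3 -> Resolution III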
There are two alternate one-half fractions available when we want to choose a
one-half fraction of a full factorial. There will be four alternate one-fourth fractions
when we want a one-fourth fraction of a full factorial. When the number of factors
k increases and even smaller fractions—such as one-eighth and one-sixteenth frac-
tions—are sought, there will be multiple alternate fractions available from which to
choose. There will then be design choices with different resolution numbers. The frac-
tional design with the largest resolution will be the most desirable.
Choosing fractions smaller than one-half can be done following the same general
logic employed in choosing the one-half fraction. However, when a one-fourth frac-
tion is needed, for example, from a 2^5 factorial design, two main factors will have to
be confounded with two higher-order interactions to form the generators. The
interactions to be confounded have to be chosen judiciously in order to obtain the frac-
tion of maximum resolution. [DeVor, Chang, and Sutherland (1992) provide a good
explanation of the procedure for selecting the generator and the defining relationship
for selecting fractional factorials.] Fortunately, tables of fractional designs for a given
number of factors, and a given number of runs allowed, are available wherein the
authors have provided information on the fractional factorials regarding their design
generators and resolution numbers. See, for example, the ready-to-use designs avail-
able in Montgomery (2013, p. 608). Taguchi’s orthogonal arrays are also good exam-
ples of ready-made fractional factorials. Most statistical software packages present
alternate design options once the number of factors is specified along with the number
of runs the experimenter wants to run.
Table 5.19 is an example of such design options given by Minitab.
To see how to make use of this table, suppose an experimenter wants to design
an experiment for six factors but can afford only eight runs. The experimenter
needs one-eighth of the 2^6 factorial (i.e., a 2^(6−3) design). Table 5.19 shows that such
a design is a Resolution III design. Drilling down further using the “Help” routine
in Minitab, we will find that such a design can be created using design generators
D = ±AB, E = ±AC, and F = ±BC. Note that there are eight alternative one-eighth
fractions available. For example, D = +AB, E = +AC, and F = +BC gives one; D =
−AB, E = −AC, and F = −BC gives another; D = +AB, E = +AC, F = −BC gives
the third; and so on. All the designs will be of Resolution III.
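As a quick illustration of the eight alternative fractions, the sketch below loops over the sign choices in D = ±AB, E = ±AC, and F = ±BC and writes down each eight-run 2^(6−3) design; this is only a sketch with generic −1/+1 columns, not Minitab output.

# A sketch of the eight alternative one-eighth fractions of the 2^6 factorial
# obtained from D = ±AB, E = ±AC, F = ±BC (all of Resolution III).
from itertools import product
import numpy as np

A = np.tile([-1, 1], 4)
B = np.tile(np.repeat([-1, 1], 2), 2)
C = np.repeat([-1, 1], 4)

for sd, se, sf in product([1, -1], repeat=3):
    D, E, F = sd * A * B, se * A * C, sf * B * C
    design = np.column_stack([A, B, C, D, E, F])   # one 8-run 2^(6-3) design
    signs = "".join("+" if s > 0 else "-" for s in (sd, se, sf))
    print(f"D={signs[0]}AB, E={signs[1]}AC, F={signs[2]}BC -> design matrix {design.shape}")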
The objective of the discussion on fractional factorials is to introduce to the student
the terminology and to explain the basic method of generating the fractional factori-
als with desirable properties. Readers interested in the topic should refer to books
such as DeVor et al. (1992) and Montgomery (2001b). We believe, however, that readers
who understand the concepts explained here will have gained enough knowledge to
use the designs available in many computer software packages with confidence. They
should be able to run experiments, analyze data, and interpret results, knowing how
the designs have been created and how the data are analyzed. They would also know
the strengths and weaknesses of the information derived from them. We end this dis-
cussion on 2^k designs with a case study to show how data from a fractional factorial
design are analyzed.
Table 5.19 Design options for 2^(k−p) factorials and their resolutions (as given by Minitab)
Runs  Available designs (resolution in parentheses)
4     2^2 (full), 2^(3−1) (III)
8     2^3 (full), 2^(4−1) (IV), 2^(5−2) (III), 2^(6−3) (III), 2^(7−4) (III)
16    2^4 (full), 2^(5−1) (V), 2^(6−2) (IV), 2^(7−3) (IV), 2^(8−4) (IV), 2^(9−5) (III), 2^(10−6) (III), 2^(11−7) (III), 2^(12−8) (III), 2^(13−9) (III), 2^(14−10) (III), 2^(15−11) (III)
32    2^5 (full), 2^(6−1) (VI), 2^(7−2) (IV), 2^(8−3) (IV), 2^(9−4) (IV), 2^(10−5) (IV), 2^(11−6) (IV), 2^(12−7) (IV), 2^(13−8) (IV), 2^(14−9) (IV), 2^(15−10) (IV)
64    2^6 (full), 2^(7−1) (VII), 2^(8−2) (V), 2^(9−3) (IV), 2^(10−4) (IV), 2^(11−5) (IV), 2^(12−6) (IV), 2^(13−7) (IV), 2^(14−8) (IV), 2^(15−9) (IV)
128   2^7 (full), 2^(8−1) (VIII), 2^(9−2) (VI), 2^(10−3) (V), 2^(11−4) (V), 2^(12−5) (IV), 2^(13−6) (IV), 2^(14−7) (IV), 2^(15−8) (IV)
[Case Study 5.1: factors and levels studied in the 2^(4−1) experiment for improving the yield from a chemical process]
FACTOR                     LOW LEVEL    HIGH LEVEL
A: RX Temperature          130°F        150°F
B: Acid recipe             990 lb.      1050 lb.
C: ORP set point           0 mV         30 mV
D: Slurry concentration    23%          25%
[Figure 5.28: A 2^(4−1) design for improving the yield from a chemical process. The eight observed yields (2120, 2345, 2346, 2429, 2632, 2736, 3311, and 3706 lb.) are marked at the corners of two 2^3 cubes in A, B, and C, one for each level (−, +) of D.]
Figure 5.29 Normal probability plot of effects from a 2^(4−1) design (Case Study 5.1).
The analysis of data from the above experiment has provided some useful informa-
tion. Often this is enough for deciding on process improvement choices for obtaining
better quality and/or productivity. Further analysis could be carried out by estimating
the error standard deviation and performing significance tests, but we have not included them
here. Readers are referred to books on experimental design if details on such further
analysis are needed.
5.6 Exercise
5.6.1 Practice Problems
5.1 The height of 3-in. paper cubes produced in a printing shop is known to be
normally distributed with mean μ = 3.12 in. and standard deviation σ = 0.15. A
sample of nine cubes is taken from this process, and the average is calculated.
a. What is the probability that this sample average will exceed 3.42 in.?
b. How would the answer be different if the process distribution is not quite
normal but is instead a little skewed to the left, although with the same
mean and standard deviation? Why?
5.2 The height of paper cubes is being controlled using X̄- and R-control charts
with n = 5. The current CL of the X̄-chart is at 3.1 in., and that of the
R-chart is at 0.08 in.
a. Calculate the 3.5-sigma limits for the X̄-chart.
b. Calculate the 3.5-sigma limits for the R-chart.
c. What is the probability of a false alarm in the 3.5-sigma X̄-chart?
5.3 Draw the OC curve of an X̄-chart with a sample size of 9 and 3-sigma
control limits. Choose values of k = 0.5, 1.0, 1.5, and 2.0.
5.4 Draw the OC curve of an X̄-chart with a sample size of 4 and 2-sigma
control limits. Choose values of k = 0.5, 1.0, 1.5, and 2.0, and compare it with
the OC curve from Exercise 5.3.
5.5 A process is to be controlled at fraction defectives of 0.05. Calculate the 2.5-
sigma control limits for the fraction defectives chart to use with this process.
Use n = 50.
5.6 Calculate the OC curve of the 2.5-sigma P-chart to control the fraction
defectives at 0.05. Use p values of 0.01, 0.03, 0.05, 0.07, and 0.09.
5.7 A control chart is used to control a process by counting the number of defects
discovered per engine assembly. Each engine assembly has the same number
of opportunities for defects, and the control chart is designed to control the
defects per assembly at an average of nine. Calculate the 2.5-sigma control
limits for this chart.
5.8 Calculate the OC curve of the 2.5-sigma C-chart to control the number of
defects per engine assembly at nine. Use c values of 3, 5, 7, 9, 10, 15, and 20.
5.9 The data in the table below represent the weight in grams of 15 candy bars
weighed together as a sample. Two such samples (of 15 bars) were taken at
15-minute intervals and weighed to control the weight of candies produced
on a production line.
a. Calculate the control limits for X- and R-charts to maintain current
control of the process at the current levels of average and standard
deviation.
TIME WEIGHT OF WEIGHT OF TIME WEIGHT OF WEIGHT OF TIME WEIGHT OF WEIGHT OF
PERIOD SAMPLE 1 SAMPLE 2 PERIOD SAMPLE 1 SAMPLE 2 PERIOD SAMPLE 1 SAMPLE 2
1 893.9 894.5 17 880.1 880.6 33 897.0 899.0
2 883.0 893.0 18 891.0 890.0 34 891.0 892.0
3 892.0 893.8 19 892.0 891.0 35 892.0 891.0
4 890.2 891.4 20 890.0 889.0 36 890.0 891.0
5 891.0 893.0 21 890.0 890.0 37 889.0 890.0
6 889.0 881.0 22 889.0 891.0 38 886.0 889.0
7 890.0 891.0 23 888.0 889.0 39 891.8 890.9
8 895.0 893.0 24 890.0 890.0 40 892.4 892.2
9 891.0 890.0 25 889.0 889.0 41 890.0 897.0
10 895.0 892.0 26 889.0 890.0 42 889.0 889.0
11 894.0 890.0 27 891.0 891.0 43 891.0 891.0
12 895.0 896.0 28 891.0 890.0 44 890.0 891.0
13 893.0 891.0 29 891.0 890.0 45 890.0 891.0
14 890.0 898.4 30 900.0 901.0 46 890.0 891.0
15 890.8 896.1 31 900.0 901.0 47 890.0 890.0
16 876.2 878.4 32 891.0 893.0 48 891.0 890.0
b. Calculate the control limits for the process if the process is to be con-
trolled at an average weight of 882 g (for 15 bars). Use the current vari-
ability in the process to compute limits. Check if the process is in-control
by drawing the graphs of the control charts.
c. Calculate the control limits for the process if it is to be controlled at an
average of 882 g and a standard deviation of 3 g. Draw the graphs of
control charts, and check if the process is in-control.
5.10 Use only the first sample weight from each time period from the table in
Problem 5.9 and use it as the single observation from each time period to
prepare:
a. The chart for individuals and the MR chart
b. The MA and MR charts with n = 3
c. The EWMA chart with λ = 0.2 and λ = 0.6
If you are using computer software, verify the control limit calculations
in each case.
5.11 The following data on 15 samples come from a job shop with short produc-
tion runs.
a. Use a DNOM chart, and check if the process is in-control.
b. Use a standardized DNOM control chart and check if the process is
in-control.
5.12 Calculate the capability index Cpm for the process in Problem 4.1. The
lower and upper specification limits are 24 and 26 g, respectively, with
the target being at the center of the specification. Use R̄/d2 to estimate
the process standard deviation. Make sure the estimates for the process mean
and process standard deviation are made from a process that is in-control.
5.13 Suppose the specifications for the weight of ravioli packages in Problem 4.3
are 31.5 and 32.3 oz., respectively, and the target is at the center of the
specification. Find the values of Cpm. Make sure the estimates for process
parameters are made from a process that is in-control.
5.14 Show that Cpm = Cp / √(1 + V²), where V = (T − µ)/σ.
5.15 In Motorola’s scheme of evaluating process capability, if a process is said to
have 2.2σ capability, what would be the proportion outside specifications in
ppm?
5.16 In Motorola’s scheme of evaluating process capability, if a process has 30,000
ppm out of specification, what is the capability of the process in number of
sigmas?
5.17 An experiment is conducted so that an engineer can gain insight about
the influence of the following factors: Platen temperature – A; time of
application – B; thickness of sheet – C; and pressure applied – D on the
chicken scratches produced on laminations of art work. A one-half fraction
of a 2^4 factorial experiment is used, with the defining contrast being I =
ABCD. The data are given in the table below. Analyze the data, and assume
all interactions of three factors and above are negligible. Use α = 0.05.
A B C D RESPONSE
– – – – 7
+ – – + 7
– + – + 8
+ + – – 6
– – + + 9
+ – + – 7
– + + – 11
+ + + + 7
5.6.2 Mini-Projects
Mini-Project 5.1 A printing shop that produces advertising specialties produces paper
cubes of various sizes, of which the 3.5 in. cube is the most popular. The cubes are cut
from a stack of paper on cutting presses. The two sides of the cube are determined by
the distance of stops on the press from the cutting knife and remain fairly constant;
however, the height of the cube varies depending on the number of sheets included in
a “lift” by the operator. The lift height varies within an operator and between opera-
tors. The difficulty is in judging, without taking much time, what thickness of lift
will give the correct height when it comes under the knife and is pressed and cut.
The humidity in the atmosphere also contributes to this difficulty because the paper
swells with the increase in humidity, thereby making it more difficult to make the
correct judgment. The operators tend to err on the safe side by lifting a thicker stack
of paper than is necessary.
The company management believes that the cubes are being made much taller than
the target, thus giving away excess paper and causing a loss to the company. They have
received advice from a consultant that they could install a paper-counting machine,
which will give the correct lift containing exactly the same number of sheets each time
a lift is made. However, this will entail a huge capital investment. To see if the capital
investment would be justifiable, the company management wants to assess the current
loss in paper because of the variability of the cube heights from the target.
Data were collected by measuring the heights of 20 subgroups of five cubes and are
provided in the table below. Estimate the loss incurred because of the cubes being too
tall. A cube that is exactly 3.5 in. in height weighs 1.2 lb. The company produces three
million cubes per year, and the cost of paper is $64 per hundred-weight (100 lb.).
Note that the current population of cube heights has a distribution (assume this to
be normal) with an average and standard deviation, and the target (ideal) population
of the cubes is also a distribution with an average of 3.5 in. and a standard deviation
yet to be determined. (Every cube cannot be made to measure exactly 3.5 inches in
height.) The target standard deviation should be less than the current standard devia-
tion, especially if the current process is affected by some assignable causes. You must
check if the process is under the influence of any assignable causes and decide what
would be the best standard deviation of the process if the assignable causes can be
found and eliminated.
Estimate the current loss in paper because of the cubes being too tall. You may first
have to determine the attainable variability before estimating the loss. If any of the
necessary information is missing, make suitable assumptions and state them clearly.
Mini-Project 5.2 The data in Columns 2 and 5 of Table 5.8, being the process data
from a normal distribution, were generated using the Minitab random number gen-
erator. The first 20 observations have μ = 10 and σ = 1.0, and the second set of 20
observations in Column 2 come from the process with the mean shifted by one standard
deviation (1.0), and those in Column 5 come from a process with the mean shifted by
1.5 standard deviations. In each case, the standard deviation
remains unchanged. Create another set of data of this kind, and draw the EWMA
charts with λ = 0.2 and λ = 0.6. See how these charts react to changes in the process.
Also, compare the performance of the EWMA chart with the chart for individuals
and the MA and MR charts.
References
Bothe, D. R. 1997. Measuring Process Capability. New York: McGraw-Hill.
Boyles, R. A. 1992. “Cpm for Asymmetrical Tolerances.” Technical Report. Portland, OR:
Precision Castparts Corporation.
Chan, L. K., S. W. Cheng, and F. A. Spiring. 1988. “A New Measure of Process Capability,
Cpm.” Journal of Quality Technology 20: 160–175.
Crowder, S. V. 1989. “Design of Exponentially Weighted Moving Average Schemes.” Journal of
Quality Technology 21 (3): 155–162.
DeVor, R. E., T. Chang, and J. W. Sutherland. 1992. Statistical Quality Design and Control. New
York: Macmillan.
Duncan, A. J. March 1951. “Operating Characteristics of R-Charts.” Industrial Quality Control,
Milwaukee: American Society for Quality. 40–41.
Duncan, A. J. 1974. Quality Control and Industrial Statistics. 4th ed. Homewood, IL: Irwin.
Farnum, N. M. 1992. “Control Charts for Short Runs: Nonconstant Process and Measurement
Error.” Journal of Quality Technology 24: 138–144.
Ghandour, A. 2004. “A Study of Variability in Capability Indices with Varying Sample Size.”
Unpublished research report (K. S. Krishnamoorthi, advisor). IMET Department,
Bradley University, Peoria, IL, 2004.
Hillier, F. S. 1969. “X̄- and R-Chart Control Limits Based on a Small Number of Subgroups.”
Journal of Quality Technology 1 (1): 17–26.
Hines, W. W., and D. C. Montgomery. 1990. Probability and Statistics in Engineering and
Management Science. New York: John Wiley.
Hogg, R. V., and J. Ledolter. 1987. Engineering Statistics. New York: Macmillan.
Kotz, S., and N. L. Johnson. 1993. Process Capability Indices. London: Chapman & Hall.
Krishnamoorthi, K. S., V. P. Koritala, and C. Jurs. June 2009. “Sampling Variability in
Capability Indices.” Proceedings of IE Research Conference—Abstract. Miami, FL.
Lucas, J. M., and M. S. Saccucci. 1990. “Exponentially Weighted Moving Average Control
Schemes: Properties and Enhancements.” Technometrics 32 (1): 1–12.
Montgomery, D. C. 2001a. Introduction to Quality Control. 4th ed. New York: John Wiley.
Montgomery, D. C. 2001b. Design and Analysis of Experiments. 5th ed. New York: John Wiley.
Montgomery, D. C. 2013. Introduction to Quality Control. 7th ed. New York: John Wiley.
Motorola Corporation. 1992. Utilizing the Six Steps to Six Sigma. Personal Notebook. SSG 102,
Motorola University, Schaumburg, IL.
Roberts, S. W. 1959. “Control Chart Tests Based on Geometric Moving Averages.”
Technometrics 1 (3): 239–250.
Vardeman, S. B., and J. M. Jobe. 1999. Statistical Quality Assurance Methods for Engineers. New
York: John Wiley.
Walpole, R. E., R. H. Myers, S. L. Myers, and K. Ye. 2002. Probability and Statistics for Engineers
and Scientists. 7th ed. Upper Saddle River, NJ: Prentice Hall.
6
Managing for Quality
This chapter covers two major topics relating to managing an organization for produc-
ing quality results. The first deals with managing an organization’s human resources,
and the second deals with planning for quality, making quality one of the param-
eters for strategic planning of an organization, along with financial and marketing
metrics.
For an organization pursuing quality and customer satisfaction, success greatly depends
on how the workforce contributes to this effort. “If total quality does not occur at the
workforce level, it will not occur at all” (Evans and Lindsay 1996). The contribution
from the workforce depends on how it is recruited, trained, organized, and motivated.
This chapter discusses the issues involved and the approaches taken to optimize efforts
in developing, organizing, and managing human resources for achieving excellence in
product quality, process efficiency, and business success.
Dr. Deming recognized the importance of the contributions that people make in
an organization toward producing quality products and achieving customer satisfac-
tion. Nine of the 14 points he recommended to organizations for improving quality,
productivity, and competitive position were related to people. The points he made in
this regard include (Deming 1986):
• Institute training on the job
• Adopt and institute a new form of leadership
• Drive out fear
• Break down barriers between staff areas
• Eliminate slogans, exhortations, and targets for the workforce
• Eliminate numerical quotas for the workforce and numerical goals for
management
• Remove barriers that rob people of pride in workmanship
• Eliminate the annual rating or merit system
• Institute a vigorous program of education and self-improvement for everyone
These points are discussed in detail in Chapter 9. For now, note that Dr. Deming
considered people’s contributions to be paramount in achieving quality results in an
6.1.2 Organizations
6.1.2.2 Organizational Culture The culture of an organization can be seen in how the
members of the organization make decisions, how they treat one another within the
organization, and how they treat their contacts outside the organization. The culture
of an organization is largely shaped by its leadership—through the values they believe
in, their communication style, and how forcefully they influence the other members of
the organization. Although a common culture is discernible among the members of a
large organization, it is not unusual to observe islands of cultures within an organiza-
tion that may be different, in degrees, from the overall culture of the larger organiza-
tion. Such islands are products of individual local leaders or managers who influence
their subordinates by the power of their personality and convictions.
When Dr. Deming said: “It is time to adopt a new religion in America” (Deming
1983, 19), he meant that many American organizations needed to change their cul-
ture. This strong statement reflects the frustration he felt with the prevailing attitude
of people—and especially of management—towards doing quality work. He cited the
example of a nametag presented to a friend at the guard gate of a large chemical com-
pany when they were visiting that company. Everything on the nametag was correct
except for the name and date. Dr. Deming was riled by the indifference of organiza-
tions to the errors in the work performed. He was outraged by the “anything goes”
attitude. He suggested that in the new culture, everyone should get into the habit of
doing everything right the first time—that should be the principal hallmark of the
quality culture.
Furthermore, in an organization where quality culture prevails, everyone from
the top executive to the operator on the floor is focused on knowing the customer’s
needs and how those needs should be satisfied. The senior executives are all com-
mitted to producing quality, to actively participating in the quality-related activities,
and to encouraging their subordinates to do so. Decision making at all levels will be
6.1.3 Quality Leadership
6.1.3.1 Characteristics of a Good Leader Good leaders have vision: they think and act for
the future, not just for the present. They can envision what their organization should
be like in order to meet the needs of the customer. They see the big picture, fill out
the details, draw the blueprints, and communicate them effectively throughout their
organizations. They also anticipate problems and devise solutions to forestall them.
Leaders have good intuition, and they follow it. This is the faculty that helps in
making decisions when there is not enough information to analyze and arrive at those
decisions logically. Good intuition helps leaders to take risks in the face of uncertainty
and arrive at good conclusions.
Good leaders are good learners. They know their strengths and weaknesses and are
not afraid of learning from whatever source that provides them with information they
do not already have. They have an appetite for new knowledge, and they know how
to process that knowledge into useful skills that they can employ in their work. They
are also good listeners and communicators, and they use their learning to educate their
associates.
Good leaders are guided by good sets of values. Values are basic beliefs about how
one should conduct himself or herself in relation to others, other organizations, or the
community. Trust, respect for an individual, openness, honesty, reliability, and com-
mitment to excellence are some of the values that good leaders cherish. They practice
these values in their day-to-day work, and they make them the guiding principles for
their organizations in dealings with employees, customers, and suppliers.
Good leaders are democratic in the sense they believe in consultations and building
consensus before making decisions. They willingly delegate their power and respon-
sibility. They are not afraid of empowering their subordinates to assume authority for
decision making. They recruit the right people, who share in their vision, and create
leadership at all levels of the organization. Empowered employees become motivated to
participate in quality-improvement activities and contribute to customer satisfaction.
Good leaders are also caring people. They are interested in the welfare of their
people, which breeds better morale among the employees; and happy employees make
for happy customers.
Good leaders are ambitious people and set high expectations for their organiza-
tions. They are high achievers, and they expect their organizations to achieve as well.
They are risk takers, and they set high goals for themselves and their organizations
and then provide the resources necessary for achieving such “stretch goals.” Goals
such as a 100-fold reduction of defects per unit of output in every operation within
four years were set at Motorola at the initiative of its leader, Robert Galvin, in
1984. This was a very ambitious goal at that time. Such goal setting was one of the
factors that propelled Motorola to achieving excellent quality and financial results in
the 1990s.
Good leaders show deep commitment to quality, and they actively participate
in quality activities. They serve as role models by participating as members of
quality-improvement teams and being personally involved in quality projects. They
are accessible to all in the organization and they listen to customer calls. They become
part of the review committees that review reports on quality projects and provide
resources and encouragement to project teams. They are present during ceremonies
to reward quality achievements and to recognize team accomplishments. They subject
themselves to evaluation and implement any resulting suggestions for improvement.
In other words, they “walk the talk,” create an atmosphere for quality, and remain
personally involved in the daily activities related to quality.
“The leader of the future will be persons who can lead and follow, be central and
marginal, be hierarchically above and below, be individualistic and a team player, and
above all be a perpetual learner” (Schein 1996).
6.1.4 Customer Focus
Customer focus refers to an organization remaining conscious of the needs of its cus-
tomers and making and delivering products and services to meet those needs.
“Profit in a business comes from repeat customers, customers that boast of your
product and service, and that bring friends with them” (Deming 1986). In other
words, it is not enough if the customers are merely satisfied; they should be delighted
with what they receive. Their needs and expectations should be proactively discovered,
and product and services designed and delivered to them in a manner that makes the
customers “loyal customers.” They will then voluntarily come back for repeat business
and will tell their friends about the product or service with which they are satisfied.
Customer needs and expectations change with time; hence, they must be moni-
tored on a continuous basis. Dr. Deming also said that it is necessary to understand
how the customer uses and misuses the product. Rather than making assumptions
about how a customer uses a product, it is better to find out how they actually use it.
When customer practices and habits are better understood, it is easier to provide a
product that meets their needs.
The value a customer places on a product depends on several aspects in addition to
the product’s quality in terms of possessing the desired attributes. The way in which
customers are received and treated when they make their first inquiry is one of those
aspects. Greeting customers promptly upon arrival and treating them with courtesy
can add to the value that customers place on the product. The prompt and clear expla-
nation and delivery of after-sales and warranty services serves to add further value. A
competitive price and competitive lifetime cost (i.e., the cost of running and main-
taining the product during its life) is yet another value-adding aspect. Customers place
a value on the product that is linked to all these factors, and they develop their loyalty
based on such value.
It is therefore necessary to assess how much value the customer places on a product
relative to the value placed on a competitor’s product. If, in the judgment of the cus-
tomer, such value is not on par with a competitor’s, then steps must be taken to find
opportunities for making improvements to the value and then implementing those
improvements.
Dr. Deming conveyed the idea that product quality alone is not enough to create
loyal customers through the illustration shown in Figure 6.1. He called the illustra-
tion the “three corners of quality.” His point was that a producer should address all
three corners of the triangle to provide value to the customer. The producer should
[Figure 6.1 lists, at two of its corners: “Training of customer. Instructions for use. Training of repairmen. Service. Replacement of defective parts. Availability of parts. Advertising and warranty: What did you lead the customer to expect? What did your competitor lead him to expect?” and “The customer and the way he uses the product. The way he installs it and maintains it. For many products, what the customer will think about your product a year from now, and three years from now, is important.”]
Figure 6.1 The three corners of quality. (Reproduced from Deming, W. E. Out of the Crisis. Cambridge, MA: MIT Center
for Advanced Engineering Study, 1986. With permission from MIT Press.)
facilitate proper customer use of the product by providing after-sales services in its
use and maintenance. This has to be done in a proactive manner. Waiting for the cus-
tomer to complain will make it too late. In this context he made the famous statement,
“[Defective] goods come back, but not the customer” (Deming 1986).
Customer research and receiving customer input before the product or service is
designed is the best approach. Customer research can be conducted using statistical
sampling, which when properly carried out gives reliable results. The survey should
elicit information on what the customer likes and does not like in a product, and what
it is that they like in a competitor’s product—and why.
Moreover, an organization with a customer focus is constantly in communication
with its customers and is aligned with their needs at all levels of the organization.
The organization delivers what it promises, resolves customer complaints fairly and
promptly, and makes it easy for customers to do business with it. The organization
invites customer suggestions and treats customer complaints as an opportunity for
making things better. It continuously improves the processes and products to keep
customers satisfied. The entire workforce of the organization remains committed to
this philosophy and to practicing it in daily activities.
Most customers who buy and use the products are external to the organization and
they are called external customers. There also are customers inside the organization,
called the internal customers, who process the products received from the previous
station and pass them on to the next customer down the line. Almost every station,
operation, and department in an organization is a customer, and it also acts like a
producer and supplier. It is important that each supplier gives the same polite and
considerate treatment to its internal customer as they would give to an external cus-
tomer. Thus, each person, operation, or department has a dual responsibility: Treating
well the next person, operator, or department in the line through meeting all their
needs, and keeping in focus the needs and expectations of the ultimate end-user—the
external customer of the organization. This is the type of customer focus that must be
cultivated within an organization.
6.1.5 Open Communications
“If total quality is the engine, communication is the oil that keeps it running”
(Goetsch and Davis 2015). Communication refers to the exchange of information
among everyone—leaders, managers, workers, suppliers, and customers—in an orga-
nization. Effective communication means that the information conveyed is received,
understood, and acted upon. Open communication means that nothing is withheld
from one person or group and that the information is transmitted voluntarily and
spontaneously. Open communication creates trust, and it motivates people to partici-
pate and be involved in the welfare of the organization. Quality organizations have
a policy of “open-book management,” in which everyone has access to data such as
product cost, material and labor cost, the cost of poor quality, the cost of capital, mar-
ket share, customer satisfaction level, customer complaints, profits, debts, and so on.
Only when such information is shared among all employees will they feel responsible
and be able to participate effectively in decision making and implementing the deci-
sions that are made.
All employees in an organization should know the vision and mission of the
organization. [The mission statement for an organization says what the purpose of
the organization is and who the customers are, and the vision of an organization
states where the organization wants to be at the end of a planning period.] In fact,
they should participate in creating the vision and mission. When employees have
been part of the effort to create these, they will become involved in working for
and accomplishing them. At quality organizations, senior management meet all
of the employees on a regular basis—either monthly or quarterly—and communi-
cate information on organizational performance in accomplishing the vision and
mission.
Open communication helps in removing fear among the employees and in fur-
thering the two-way exchange of information and ideas. Unless fear of reprisal is
removed, employees will not come forward to point out drawbacks in the system,
failures in the production process, or defects in the final product. Many authorities,
including Dr. Juran and Dr. Deming, have estimated that more than 80% of problems
causing defects in processes can be traced to root causes that only management can
correct (Garwood and Hallen 1999). Thus, if the employees are going to point out
the existence of poor machinery, poor material, or poor process, it more than likely is
the result of some management action or inaction. Unless the possibility of reprisal is
removed, the existence of poor conditions in the processes will not be identified, and
suggestions for improvement will not be forthcoming.
6.1.6 Empowerment
Education differs from training in that the objectives of education are to build knowl-
edge of the fundamentals and enhance the ability to think, analyze, and communi-
cate. The objectives of training, on the other hand, are to develop the skills needed
for a specific job or function. For the sake of convenience, the term “training” in the
following discussion includes both education and training.
Necessity of Basic Skills A quality organization needs a quality workforce. The work-
force must have capabilities in the basic skills of reading, writing, and arithmetic. In
addition, they should be capable of logical thinking, and of analyzing and solving
problems. In some industries, knowledge in specialized areas, such as physics and
chemistry, is required. People should be able to communicate in the language used
within the organization so that they can participate in various teams, where people
with different levels of expertise, experience, and capabilities will be participating.
Furthermore, they should be able to take leadership roles to plan, organize, and exe-
cute projects.
Diversity in the Workplace The new workforce dynamics in the United States will
draw an unusual mix of nontraditional workers, including homemakers returning to
work after raising families, minorities, and immigrants, to the workplace. This gener-
ates the need for additional resources to train or retrain them in the basic skills—math,
science, and communications—to function effectively in the modern workplace.
Need to Improve Continuously Dr. Deming claimed that: “Competent men in every
position, if they are doing their best, know all that there is to know about their work
except how to improve it” (Deming 1986). That is a succinct statement of the fact that
the ability to analyze and improve processes is a special skill and one that is not part
of everyone’s education. The new workforce needs to have the ability to continuously
learn new ways to think, analyze, solve problems, and be responsive to the changing
needs of the customer. Supervisors and managers need training on how to be the
coaches and facilitators in the new environment.
We can see from the above discussion that training in the workplace is a necessity,
and a major portion of the responsibility for training falls on employers.
6.1.7.2 Benefits from Training The benefits that come from a trained workforce are
many. Some of the obvious ones are:
• Increased productivity
• Fewer errors in output and improved quality
• Decreased turnover rate and reduced labor cost
• Improved safety and lower cost of insurance
• Multi-skilled employees and better response to change
• Improved communication and better teamwork
• More satisfied employees and improved morale in the workplace
6.1.7.3 Planning for Training Different organizations may have different needs for
training. A manufacturer of heavy machinery may need a different training scheme
from that needed by a software developer. Each has to assess their particular train-
ing needs, plan accordingly, and execute the necessary plan. There must be a strategic
plan for training put together by the executive leadership of the organization, with
support from the human resources function and the quality professionals. The plan
should have a vision and mission statement, and strategies should be formulated with
customer satisfaction in mind. The training plan is usually made by addressing the
different segments of an organization using different syllabi. The following is one such
plan for training in quality methodology.
1. Quality awareness training (for all segments)
2. Executive training (for executive leaders)
3. Management training (for supervisors and managers)
4. Technical training:
a. Basic level (for all technical personnel)
b. Advanced level (for engineers and scientists)
The typical contents for each of the courses can be outlined as follows:
1. Quality Awareness Training:
a. What is quality? What does it mean to us?
b. How does quality affect customers?
c. Who is the customer?
d. What is a process?
e. Variations in processes
f. Introduction to the seven magnificent quality tools
g. Need for quality planning, control, and improvement
h. Quality management systems (ISO 9000, Six Sigma, or Baldrige criteria)
2. Executive Training:
a. Quality awareness (Module 1 above)
b. Quality leadership
c. Strategic planning for quality
d. Customer satisfaction and loyalty
e. Benchmarking
f. Business process quality
g. Supplier partnering
h. Teamwork and empowerment
i. Reward and recognition
3. Management Training:
a. This module will be a mixture of the executive training module shown
above and the technical training module given below.
b. Managers should understand the strategic place of quality in the business,
and be able to use some of the technical tools to participate in improve-
ment projects.
c. They should be trained to be especially sensitive to the human side of the
quality system: People’s needs, their capabilities, the contributions they
can make, and their requirements for education and empowerment.
4. Technical Training (basic level):
a. Brief version of executive training module
b. Total quality system
c. Quality costs
d. Basic statistics and probability
e. Confidence interval and hypothesis testing
f. Regression analysis
g. Correlation analysis
6.1.7.4 Training Methodology Training is best done in modules, each no more than
two hours in length. The two-hour segments seem to be the most suitable to fit into
the work schedules of managers and executives. The two-hour modules also provide a
good break period for the assimilation and absorption of ideas.
Training should be provided in a timely manner, because knowledge stays with
people best when they can use what they learn. For example, a module on strategic
planning for executives should be given when the executives are making or reviewing
the plans. The module on design of experiments (DOE) should be offered to a group
of engineers when they are working on an improvement project that needs a designed
experiment.
Mentoring by an outside expert will always be helpful, especially if the expert has
practical experience in the skills being taught. For example, an expert with experi-
ence in making strategic plans could be of great help to an executive who is making
the strategic plans for the first time. Similarly, an expert in DOE who has success-
fully performed designed experiments could help an engineer in his or her first DOE
project. It is always helpful if the trainer is from the peer group (engineers, scientists,
or managers), or higher, of the people receiving the training. It is also helpful if the
trainer has experience in the line of trade that the majority of the audience belongs
to. For example, if a class is made up of chemical engineers, a chemical engineer or a
chemist who can speak the language of the audience and has expertise in the quality
methods, would be an ideal choice. However, the choice of a trainer from the same
trade as the audience is not an absolute necessity, because many experienced qual-
ity professionals can successfully translate their experience in one field to another in
short order.
6.1.7.5 Finding Resources Training programs will usually be budgeted by the train-
ing department, with the quality department providing technical consultation in the
development of syllabuses and in choosing instructors. Volunteers from within an
organization could serve as instructors if they are suitably qualified. We are reminded
here of the observations Dr. Deming made regarding the qualification of instructors
to teach statistical methods:
American Management have resorted to mass assemblies for crash courses in statistical
methods, employing hacks for teachers, being unable to discriminate between compe-
tence and ignorance … No one should teach the theory and use of control charts without
knowledge of statistical theory through at least the master’s level, supplemented by experi-
ence under a master. I make this statement on the basis of experience, seeing every day the
devastating effects of incompetent teaching and faulty application. (Deming 1986, 131)
6.1.7.6 Evaluating Training Effectiveness The test of training is in the learning that
the participants acquire. This learning usually becomes apparent in the results of
the projects they complete or in the output of the teams in which they participate.
Improvements in product and service quality, customer satisfaction, reduction in
waste, and increases in productivity are some of the outcomes of effective training.
When people have learned improvement tools, they become sensitized to opportuni-
ties where improvements can be made. When people are trained, there will be an
increased number of suggestions for improvement projects, and an increased number
of successfully completed projects. All these can be counted. When employees acquire
improved communication and interpersonal skills, there will be a general improve-
ment of the social atmosphere in team meetings and the workplace in general. There
will be a decreased number of personal conflicts and turf fights, contributing further
to the exchange of information and sharing of ideas for the common goal of the orga-
nization. Learning and knowledge provide satisfaction and fulfillment to individuals,
and the general morale in the organization will improve, which can be measured by
decreased absenteeism, a reduction in complaints, and an increase in volunteers for
teams.
At the individual course levels, feedback should be obtained from the participants
on the relevancy of the course, the relevancy of the topics, the organization of the mate-
rial, and the expertise and ability of the instructor to communicate. These will help
in making changes to improve the effectiveness of the instruction and the instructors.
6.1.8 Teamwork
6.1.8.1 Team Building Team building does not occur spontaneously; a certain effort
is needed to create good teams, and there is a process for team building. The process
includes the following steps.
6.1.8.2 Selecting Team Members Members who have the most potential for contrib-
uting to the mission of a team based on their expertise, experience, and attitude to
teamwork should be included. There must be diversity in all respects—education,
salary grade, expertise (e.g., engineering, marketing), gender, and race—to take
maximum advantage of the contribution that a diverse group can make. The team
size should be between six and 12, with the ideal being eight or nine. The team
should choose a leader; a secretary is either elected by the team or appointed by the leader.
6.1.8.3 Defining the Team Mission A mission statement, written by the team, describes
the purpose of the team and is communicated to all in the organization. It should be
broad enough to include all that is to be accomplished and should have just enough
details to communicate the scope. It should also be simple and understandable to all.
6.1.8.4 Taking Stock of the Team’s Strength A team’s strength should be assessed ini-
tially, and at regular intervals, based on team members’ own perceptions. Strength is
assessed in the following areas:
• Direction of the Team
Whether everyone in the team understands the mission, goals, and time schedule.
• Adequacy of Expertise and Resources
Whether the team has a good knowledge of the processes they are dealing with, whether enough strength exists in their problem-solving skills, and whether they have the authority to spend time and money to meet their goals.
• Personal Characteristics of the Members
Whether all members work as a team with honesty, trust, responsibility, and enthusiasm.
• Accountability
Whether team members understand their responsibilities, how the progress of the team will be measured, and how corrections are to be applied.
6.1.8.5 Building the Team Team-building activities must be planned and implemented
based on the results of the assessment made in the previous step:
• If the weakness is in terms of lack of direction or how it is communicated,
then the team leader should reassess the mission and the goals in consultation
with the team and communicate them in clear terms.
• If the weakness is in expertise and other resources, then the team should
embark on an education and training program. In general, one of the team
members, who is well versed in the methods of problem solving relative to
quality—both on the statistical and the managerial side—will be made the
quality specialist. This person can provide the necessary training to other
members. An outside consultant can be engaged as well. A brief list of basic
tools for quality improvement is given later in this chapter, and the list is elab-
orated in Chapter 8. If there is a shortage of expertise or other resources, the
team should approach the upper management and secure additional resources.
• If the weakness is in personal characteristics or human relations, then the help
of a human relations person (either from within or outside the organization)
should be sought. A discussion on the desirable characteristics of team mem-
bers is given later in this section.
• If the weakness is in the area of accountability, then the leader should identify
the goals, divide the responsibilities, and assign those individual responsi-
bilities clearly. Writing minutes of the meetings and making assignments on
paper will help avoid weaknesses in the accountability area.
6.1.8.6 Basic Training for Quality Teams All team members should be familiar with
the general problem-solving process and the basic tools for quality improvement. The
problem-solving process includes the following steps:
• Defining the problem and the objectives for the project
• Collecting data, and analyzing them for root causes
• Devising solutions to solve the problem
• Selecting the best combination of solutions based on the objectives
• Implementing the solution(s)
• Obtaining feedback and debugging the solutions
There are several versions of this problem-solving methodology. Dr. Deming’s “plan-do-check-act” (PDCA) cycle and Dr. Juran’s “breakthrough sequence” are examples of problem-solving approaches, as is the DMAIC process used in the Six Sigma methodology. These approaches are discussed in detail in Chapter 8. The teams should also
have training in the basic tools for quality improvement, which are:
1. Flowcharting of processes
2. Pareto analysis
3. Cause-and-effect diagram
4. Histogram
5. Control charts
6. Check sheets
7. Scatter plots
These are known as the “magnificent seven tools,” so designated by Dr. Kaoru
Ishikawa, the Japanese professor who is recognized as the father of the quality revolu-
tion in Japan. These tools are described in detail, with examples, in other chapters of this text. Members of a quality-improvement team should be familiar with
these tools so that communication among team members becomes easy.
6.1.8.7 Desirable Characteristics of Team Members
Enthusiasm Enthusiastic individuals are the cheerleaders in a team. They are usu-
ally high-energy people with high productivity. They help the team to overcome road-
blocks, and they get the team going when the going gets tough. A few such members
are always needed for the team to be successful.
Initiative People with initiative do not wait for tasks to be assigned to them; they
offer to take up tasks where they can make contributions. They provide the starting
momentum for the team to get moving.
Resourcefulness Resourceful people are the ones with the ability to find creative
ways of resolving difficult issues. They find ways to make the best use of available
resources. They find ways to get to the destination when others feel they have reached
a dead end. They usually have a high intellectual capability, with an ability to think
on their feet.
Tolerance There will be differences among team members in educational level, gender, age, and race. There will be occasions when people think or act differently because of cultural, intellectual, or educational differences. Unless the team members respect these differences and tolerate diversity,
a team cannot progress toward its goal. People from diverse backgrounds may bring
diverse strengths to the team. Team members should learn to take advantage of the
positives for the sake of the team’s success.
Perseverance Patience and perseverance are the means of success in any endeavor.
There will be team members who are bright, creative, and enthusiastic, but who may
get easily disappointed and depressed when the first failure occurs. This is when peo-
ple with patience and perseverance are needed to keep the team on target and working
until the end is reached.
6.1.8.8 Why a Team? We can easily see from the above discussion of the character-
istics needed in team members why a team can succeed where individuals cannot. It
is hard to find in one person all of the qualities that team members can bring to bear
collectively on a team’s work. When several individuals bring qualities that comple-
ment one another, and when the team works together as one entity with a common
goal, then there are very few problems that cannot be overcome, and very few objec-
tives that cannot be accomplished.
6.1.8.9 Ground Rules for Running a Team Meeting The following set of rules for con-
ducting team meetings is summarized from The Team Handbook (Scholtes 1988) in
which further elaboration of these rules can be found:
• Use agendas that have been prepared and approved in the previous meeting,
and send them to the participants ahead of time. Identify the people respon-
sible for each item, and the time to be allowed for each item indicated.
• Use a facilitator who will keep the focus on the topics on the agenda; intervene
when discussions lack focus, a member dominates, or someone is overlooked;
and bring discussions to a close.
• Take minutes to record decisions made, responsibilities assigned, and the
agenda agreed on for the next meeting. The minutes will be recorded by the
secretary and distributed among team members within a reasonable time after
each meeting.
• Evaluate the meeting to obtain feedback from members on how to improve
the team dynamics and make the meetings more productive.
• Seek consensus so that no one in the team has any serious objection to the deci-
sions made. Consensus decisions are better than majority decisions, because
majority decisions produce winners and losers, which may not be conducive to
the growth of team spirit.
• Avoid interruptions. Scholtes (1988) recommends use of the “100-mile rule,”
according to which no one in the meeting will be called unless the matter is
so important that the disruption would occur even if the meeting was held
100 miles away from the workplace.
6.1.8.10 Making the Teams Work A few suggestions are available from experts on how
to avoid problems that may develop during the working of a team and, if problems do
arise, how to minimize their effects and make progress toward team goals.
Making Team Members Know One Another Introduce members of a team who have
not had prior working relationships through informal introductory chats about their
jobs, families, hobbies, and so on. Begin each meeting, especially the early ones, with
warm-up exercises in which the team members, through informal small talk, warm
to each other before important agenda items are taken up. Help team members visit
one another’s workplaces to make them all familiar with each other’s working condi-
tions and in the process exchange views on the details of the projects that the team is
working on.
Resolving Conflicts Promptly Well-defined directions for the team and ground rules
for the conduct of business, communicated clearly, will prevent serious conflicts.
Disagreements among members are not bad in themselves; in fact, such disagreements
may even be healthy for the team’s work. However, any overbearing or dominant
behavior from members must be discouraged and curbed by the leader and the joint
action of the team.
Providing frequent opportunities for members to express their concerns will help
to resolve serious conflicts, because it will enable the early resolution of differences.
Training team members in managing dissent and expressing it constructively will help
avoid flare-ups.
6.1.8.11 Different Types of Teams Teams acquire their names mainly based on the pur-
pose for which they are constituted. Sometimes, the name reflects the constituents
making up the team.
Process Improvement Teams Process improvement teams are the type most relevant to quality in an organization. They are created to address quality improvement and customer satisfaction.
6.1.8.12 Quality Circles Quality circles are teams of employees mainly working in one
area and reporting to the same supervisor, who have come together voluntarily to solve
problems relating to the quality of a product or service created in that department. The
problems could also be related to safety, environment, or other conditions in the work
area that are of concern to the team. The concept of the quality circle originated in Japan
as a way of utilizing the knowledge and expertise of workers in solving quality problems,
to complement the use of statistical and other technical methods. The Japanese workforce
6.1.9 Motivation Methods
According to expectancy theory, people are motivated by what they expect to receive from the work they perform.
According to reinforcement theory, people are motivated by their experience of what
they received in the past for their level of performance—whether they got a pat on the
back or a slap on the hand.
Dr. Deming’s theory of motivation is a simple one: If people are allowed to do their job
without hindrances from a lack of proper tools or materials, and without being pressured
by a supervisor eager to meet daily schedules with no regard to the quality of the product,
then these people will do a good job. He believed in the theory that people will want to
do a good job if they are allowed to do so. The authors have seen this to be true almost
everywhere they have had opportunity to work. Almost everyone, whether a supervisor,
union worker, nonunion worker, store clerk, maintenance mechanic, or job scheduler,
wants to do his or her work in a manner that will satisfy the person who receives that
work. With few exceptions, no one likes to turn out bad work if he or she can help it.
Incidentally, one of the authors, KSK, wants to add this piece of advice to future
quality engineers. If you establish your credentials by showing that you can help people do their jobs better, you will receive cooperation and advice that will help you discover
new and better methods for doing the job, and for improving quality and productivity.
Some people may be readily convinced about your credentials and sincerity and may
offer their help instantly; others will take time to evaluate you and wait to join the
pursuit of quality. A little bit of perseverance and good communication will almost
always enlist the help of people. In short, most workers want to do quality work and
be part of the quality effort—if they are presented with the right opportunity.
Maybe it is a fact that doing quality work, or being part of it, provides an inner
satisfaction to people. The simple recognition of good work provides encouragement
for people to continue this good work. We should, of course, recognize that a totally
conducive environment, in which people are treated well, reasonably compensated for
their work, and provided with a clean and safe workplace, all contribute to motivating
people to do their best.
6.1.10 Principles of Management
The planning function for quality over a long period of time, in the five to 10-year
time frame, is called “strategic planning” for quality, and is a principal function of
management that can contribute to higher levels of quality accomplishment.
Godfrey (1999) provides a historical perspective of the quality movement in the United
States. He marks 1892 as the year quality activities began, when inspection procedures for telephone equipment in the erstwhile Bell System were created. This practice of inspecting the product at the end of a production line, in order to make certain that it was made according to the specifications of the designers, still exists today.
The idea of monitoring process parameters and controlling them to prevent the pro-
duction of defective products was introduced by Dr. Walter Shewhart around 1924.
The concept of quality assurance through prevention methods was born then and is
still practiced today. Around the 1960s, the importance of customer service quality to
supplement product quality was recognized, and this idea continues to occupy a place
of importance in quality organizations.
Starting in the early 1980s, further emphasis was placed on the quality of services,
which was enhanced by identifying service processes and then monitoring them using
control methods. This was the time when the luxury carmakers (Acura, Lexus, and
Infiniti) paid special attention to after-sales services to attract and retain luxury car buyers.
Service quality received widespread attention throughout the economic spectrum—from
manufacturing to hospitals to hotels and the entertainment industry in the 1980s.
The systems approach to quality gained momentum with the introduction of the ISO
9000 standards in 1987 and the Baldrige Award, also in 1987. The 1990s saw the
introduction of quality at the strategic level, treating it as a parameter for business
planning along with finance and marketing measures.
In the words of Godfrey: “In the past few years, we have observed many companies
starting to integrate quality management into their business planning cycles. This inte-
gration of quality goals with the financial goals has been a major thrust of the leading
companies” (Godfrey 1999). Such an integration of quality goals with financial and
marketing goals at the planning stage—and planning for strategies that would accom-
plish those goals—is the objective of strategic planning for quality. We will see in some
detail below how planning is done and how it is implemented or deployed.
The strategic plan typically comprises a mission, a vision, key strategies, and strategic goals to support the key strategies. A set of values the organization will follow during the pursuit of the vision is also a part of the plan.
A mission statement for an organization is the statement of the purpose for which
the organization exists. This is typically a short paragraph that answers the following questions: Why does the organization exist? What products does it produce? Who are its customers?
A vision statement delineates where the organization wants to be at the end of the
planning horizon. This is usually a short, pithy statement capturing the aspirations of
the organization. The vision will be ambitious yet attainable.
The mission and vision statements are prepared by the executive management, with
inputs from all ranks, and are published throughout the organization and shared with customers and supply partners. Together, they provide the guidelines for decision making
regarding the day-to-day issues in the organization.
The executive management will also create a set of values, or guiding principles,
to provide the framework within which the organization will pursue its mission and
vision. The set of values will typically state what the leadership of the organization
believes their obligations are to all the stakeholders of the organization, including
investors, customers, employees, and the general public. Statements on their ethical
standards, commitment to diversity, responsibility to the environment, and relation-
ship with the community will all be part of the values.
From the mission and vision statements, key strategies are generated; these are the major action plans needed to accomplish the vision. The key strategies are usually broad in scope and small in number (four to six), and they enable the organization to go from where it is to where it wants to be.
To understand where it currently is, the organization should first make an extensive study of its present status by collecting data on several organizational parameters to understand its internal strengths and weaknesses. Similarly, a study must be conducted to learn about external opportunities and threats. This is known as the “gap analysis,” because it explores the gap between where the organization wants to be and where it currently is, across the relevant internal and external parameters.
Figure 6.2 Breakdown of the vision into key strategies, strategic goals, sub-goals, and projects.
The key strategies for the important organizational parameters are selected by the
executive management in close consultation with operational managers, based on the
gaps discovered between current status and the envisioned goals. Each key strategy
should have certain specific, measurable strategic goals. For example, if one of the
key strategies is to reduce the cost of poor quality, then a strategic goal would be
to reduce the warranty charges by 50% for each year during the planning period.
Another example of a strategic goal would be to reduce internal failures by 75% within
the planning period.
The key strategies and goals will next be subdivided into sub-goals, and short-term
tactics or projects will be identified to accomplish the sub-goals. For example, if one of
the key strategies is to reduce the internal failure costs by 75%, then one of the tactics
would be to implement statistical process control on each key characteristic of every
production process. The projects will be assigned to individual functional departments
or cross-functional teams. These will be short-term (eight to 12 months) projects with
clear goals to be accomplished within the specified time schedule. Figure 6.2 shows
how the vision is broken down into key strategies that lead to the identification of
sub-goals and projects.
This is the implementation phase of the strategic plan, during which the vision is
translated into action. The plan should first be distributed to all who would participate
in it, and their feedback should be obtained. The plan should then be finalized after
suitable modifications have been incorporated to answer concerns and respond to sug-
gestions. The plan is then divided into various tactics, or projects, and these are then
assigned to various departments and teams. Project charters are written and given to
individuals or teams along with a time frame for completion. The resources needed
should be identified with the help of the departments and teams, and those resources
should be made available by the executive management.
Performance measures should be created based on the goals established for each
project. Progress of the projects should be monitored, and any corrections needed to the goals and tactics should be made.
Cascella (2002) points out the three common pitfalls encountered during implemen-
tation that lead to poor performance of strategic plans: (1) lack of strategic alignment
at all levels; (2) misallocation of resources; and (3) inadequate operational measures to
monitor the progress of implementation. He suggests the following remedies to avoid
those pitfalls.
The planned strategy must be linked to activities to be performed at the departmental
or group level and communicated to them, so that each group knows what is expected
of them for the successful implementation of the plan. Next, it is also important to
identify the “core” processes in the organization that are critical to achieving the stra-
tegic goals and allocate resources to improve them. Otherwise, when the allocation of
resources is not made on a rational basis, the processes that are strategically important
to the business and their customers do not receive the attention they need, and stra-
tegic goals are therefore not achieved. Then, measurements need to be put in place to
determine the improvements in capabilities of the core departments as the plan imple-
mentation progresses. Such measurements would lead to the identification of under-
performing departments and to the reallocation of available resources to where needed.
Another important aspect during implementation is establishing accountability.
When the strategy is linked to individual departmental activities and is com-
municated to the departmental leaders to enlist their participation, a certain level
of accountability is already established. When they are involved in developing mea-
surement tools and making measurements of the progress made, the departments
derive a sense of participation and responsibility for achieving the goals of the plans.
Accountability can also be driven through the performance evaluation of departments
and their members, and by linking their financial and nonfinancial incentives to their
performance. Yet another way, according to Cascella, is for the business leaders to
act as role models themselves. By participating in process improvements and making
decisions based on data, they can create a healthy climate for continuous improve-
ment and create confidence in the improvement methods. Thus, they can contribute
to achieving planned goals.
Strategic planning for quality has great potential for delivering improved prod-
uct quality, reduced waste, and improved financial performance for an organiza-
tion. Standards for quality system management such as the ISO 9000 standards and
Baldrige Award criteria require that organizations practice strategic planning for
quality in order to achieve improvement in quality and excellence in business results.
6.3 Exercise
6.3.1 Practice Problems
6.1 Write the meaning of each of the following terms, in your own words, in a
paragraph not exceeding five sentences:
a. Quality leadership
b. Customer focus
c. Open communication
d. Participative management
e. Training and empowerment
f. Teamwork
g. Strategic planning
h. Mission, vision, and values
i. Strategies and tactics
j. Strategic plan deployment
6.3.2 Mini-Project
Mini-Project 6.1 Prepare an essay on any one of the topics covered in this chapter, such as empowerment, motivation, or strategic planning. Many more references relating to these topics exist than are reviewed here. Limit the
essay to between 15 and 20 typed pages.
References
Cascella, V. November 2002. “Effective Strategic Planning.” Quality Progress, Milwaukee,
WI: American Society for Quality. 35 (11): 62–67.
Deming, W. E. 1983. Quality, Productivity, and Competitive Position. Cambridge, MA:
MIT—Center for Advanced Engineering Study.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: MIT Center for Advanced
Engineering Study.
Evans, R., and W. M. Lindsay. 1996. The Management and Control of Quality. 4th ed. St.
Paul, MN: South Western Publishing Co.
Garwood, W. R., and G. L. Hallen. 1999. “Human Resources and Quality.” In Juran’s Quality
Handbook. 5th ed. Co-edited by J. M. Juran and A. B. Godfrey. New York: McGraw-
Hill. 15.1–15.29.
Godfrey, A. B. 1999. “Total Quality Management.” In Juran’s Quality Handbook. 5th ed.
Co-edited by J. M. Juran and A. B. Godfrey. New York: McGraw-Hill. 14.1–14.35.
Goetsch, D. L., and S. B. Davis. 2015. Quality Management for Organizational Excellence:
Introduction to Quality. 8th ed. Upper Saddle River, NJ: Pearson.
Gryna, F. M. 1981. Quality Circles. New York: Amacom.
Juran, J. M., and A. B. Godfrey. 1999. Juran’s Quality Handbook. 5th ed. New York:
McGraw-Hill.
Juran, J. M., and F. M. Gryna. 1993. Quality Planning and Analysis. 3rd ed. New York:
McGraw-Hill.
McLean, J. W., and W. Weitzel. 1991. Leadership—Magic, Myth, or Method? New York:
American Management Association.
Oakes, D., and R. T. Westcott, eds. 2001. The Certified Quality Manager Handbook. 2nd ed.
Milwaukee, WI: ASQ Quality Press.
Schein, E. H. 1996. “Leadership and Organizational Culture.” In The Leader of the Future.
Edited by F. Hesselbein, M. Goldsmith, and R. Beckhard. San Francisco, CA: Jossey-
Bass. 59–69.
Scholtes, P. R. 1988. The Team Handbook. Madison, WI: Joiner Associates, Inc.
7
Quality in Procurement
The need for procuring parts and materials of quality cannot be overemphasized—
especially in the context of modern productive organizations procuring ever larger
proportions of the assemblies they build from outside vendors. Statements such as
“you cannot make good product from bad material” and “you are as strong as your
weakest supplier,” which we often hear in production shops, only reinforce the fact
that the parts and materials that come into a production facility should be defect-free.
The modern approach to inventory reduction, which employs a just-in-time produc-
tion philosophy, makes it even more important to receive defect-free supplies, because
in the just-in-time environment, there is no cushion in inventory to make up for the
part or material that may be found defective during assembly.
Several approaches have been adopted by organizations to assure quality in incom-
ing supplies; some involve management methods, and some involve statistical tools.
We will discuss some of these approaches in this chapter. These include:
• Establishing a good customer-supplier relationship
• Choosing and certifying suppliers
• Providing complete specification for supplies
• Auditing the supplier
• Supply chain optimization
• Statistical sampling plans for acceptance
The Customer-Supplier Division of the American Society for Quality (ASQ), a group
of professionals in the procurement business, suggests the following as essentials of
a good supplier relationship in Chapter 1 of their Supplier Management Handbook
(ASQ 2004):
• Personal Behavior: Professional, personal behavior of parties, with mutual
respect for each other.
• Objectivity: A moral commitment, by both parties, beyond the legal contract
requirements, to attain the goal of quality for the end product.
• Product Definition: A clear, unambiguous, and complete definition of the
product requirements furnished by the customer in writing, with a willing-
ness to provide further clarification if and when needed.
• Mutual Understanding: Understanding of each other’s needs from direct, open
communication between the quality functions of the parties to avoid confusion.
• Quality Evaluation: Fair, objective evaluation of the quality of supplied goods
by the customer.
• Product Quality: Honest effort by the supplier to provide materials according
to the needs of the customer, including disclosure of any weaknesses.
• Corrective Action: Good faith effort by the supplier in making corrective action
when supplies are found to be deficient.
• Technical Aid: Willingness on the part of the customer to share technical
expertise with the supplier whenever such an exchange is needed.
• Integrity: Supplier’s willingness to provide facilities and services as needed by
the customer to verify product quality; and customer using those facilities and
services only to the extent agreed to in the contract.
• Rewards: Customer’s use of only qualified suppliers, and offer of reward and
encouragement for good performance by the suppliers.
• Proprietary Information: Protecting each other’s privileged information.
• Reputation Safeguard: Neither party making unsupported or misleading state-
ments about the other and maintaining truthfulness and professionalism in
the relationship.
It is easy to see that these are the essentials on which a healthy relationship can be
built, and that they will contribute to the exchange of quality information and quality
supplies between the supplier and the customer. An organization interested in procur-
ing quality parts and materials would do well to make sure that these elements exist
in their relationship with their suppliers.
During the 1980s, the 1970s, and earlier, organizations in the United States generally believed in cultivating and retaining as many suppliers for an item as possible.
The belief was that the larger the supplier base, the better the competition; and thus,
the cheaper the price to be paid for the procured item. That was also the time when
contracts were issued routinely to the lowest bidder and materials and supplies were
bought mainly on the basis of price, without much regard for quality. Dr. Deming
described the situation in the following words:
A buyer’s job has been, until today, to be on the lookout for lower prices, to find a new
vendor that will offer a lower price. The other vendors of the same material must meet
it.… Economists teach the world that competition in the marketplace gives everyone the
best deal.… This may have been so in days gone by.… It is different today. The purchasing
department must change its focus from lowest initial cost of material purchased to lowest
total cost. (Deming 1986, 32, 33)
Dr. Deming stresses the need for the buyers to understand the quality requirements
of purchased material in the context of where that material is used in the production
process. Otherwise, when bought on the basis of price alone, the material may make
subsequent operations costly, or worse, result in defective products being made and
delivered to the final customer.
According to Dr. Deming, a single supplier must be chosen and cultivated for each
part or material, based on the supplier’s ability to provide quality parts or material and
their willingness to cooperate with the customer in the design and manufacture of the
final product. There are many advantages to choosing a single supplier compared to
having multiple suppliers for each item. These include:
• If customers decide to keep single suppliers for their various purchases, then
there will be healthy competition among the potential suppliers to become the
single supplier.
• One supplier per item will result in a smaller overall inventory for that item,
as otherwise multiple suppliers will create multiple inventories, thus adding to
the overall cost of the item.
• Having one supplier reduces the risk involved in searching and experimenting
with newer suppliers.
• Single suppliers can be included as part of the team for the concurrent design
and development of new products and processes.
Many American businesses have followed Dr. Deming’s advice on having a single
supplier and have benefited from the overall economy and quality improvements that
have resulted from this. Juran and Gryna (1993, 317) made the following observation:
A clear trend has emerged: Organizations are significantly reducing the number of mul-
tiple suppliers. Since about 1980, reductions of 50% to 70% in the supplier base have
become common. This does not necessarily mean going to single source for all purchases;
it does mean a single source for some purchases and fewer multiple suppliers for other
purchases.
For organizations in the United States, the main obstacle to realizing the ideal rec-
ommended by Dr. Deming—to retain only one source for every procured item—is the
possibility of the disruption of supplies due to work stoppages at the supplier or trans-
portation agencies as a result of labor disputes or natural disasters. The long distances
between many suppliers and customers in the United States make this problem more challenging.
The issue of single vs. multiple suppliers has assumed much greater importance since the early 2000s, when supply chains expanded across the globe, the number of possible sources became large, and supply distances became very long. Many variables, such as relative economic conditions and labor costs at the supply sources, currency value differences, and the reliability of shipment over long distances, now enter into the decision to choose a supplier. Many studies have been made on having one supplier vs. several suppliers for a given supply in the global market. The general consensus is that a cost-benefit analysis should be made on a case-by-case basis, taking into account factors such as the cost of supplies, the skill level available at the suppliers, the cost of transportation, the risks involved in supply interruptions, and the political risks involved in not choosing suppliers from the communities where the customers are located.
7.3.2 Choosing a Supplier
From the earlier discussion, it becomes clear that using a single supplier per item can
contribute in many ways toward quality and economy in the manufacture of a product.
It then becomes necessary to choose this supplier with care, paying close attention to
their relevant qualifications. The qualifications they should have include:
1. Management with a quality philosophy enforced through policies and
procedures.
2. Control of design and manufacturing information, with completeness of
drawings, specifications, and test procedures, and a positive recall of obsolete
information.
3. Adequate procurement control to avoid poor quality in their incoming
supplies.
4. Material control to ensure that the materials in storage are protected, verified
periodically, and issued only to authorized users.
5. Use of capable machinery and qualified production personnel.
6. Use of prevention-based process control to avoid the production of defective
material.
7. Use of final inspection methods to verify that the final product meets the
needs of the customer.
8. Use of measuring instruments having sufficient accuracy and precision, and
a program to periodically verify and maintain the accuracy and precision of
these instruments.
9. Collection and processing of information regarding the performance of pro-
cesses; use of this information for improving the processes on a continuous
basis.
10. Adequate resources, manpower, and facilities to supply goods in the quantities
required to meet time schedules.
As will be discussed in a later chapter, these are the features of a good quality system.
In essence, suppliers are chosen based on their ability to create and maintain a good
quality system within their own production facilities.
7.3.3 Certifying a Supplier
One thing a customer can do to assure the quality of supplies is to make certain the
supplier fully understands the requirements for those supplies. Drawings and speci-
fications are meant to convey information about what the customer wants in the sup-
plies. These must be made part of the contract document—the purchase order (PO).
We should realize that many supplied items, including modern, complex, high-
tech hardware and software components, cannot be completely specified on paper.
Some consignments of supplies may meet the specifications fully but may offer dif-
ficulty in assembly or perform poorly when used by the end-user. It therefore is rec-
ommended (Juran and Gryna 1993)—in addition to documentation on specifications,
test requirements, and so on—that there be continuous communication between the
supplier and buyer in order to ensure that the supplier understands where and how the
supplies are used. The objective of both the supplier and the buyer must be to ensure
that the supplies meet the needs not only of the manufacturing processes, but also of
the end-user of the finished product.
It often is necessary to specify in the initial documentation how the supplies will be
checked for conformance to specifications. A customer may accept a consignment on
the basis of certification by the supplier that the items meet the specification. Another
customer may require control chart documentation, showing that the process was in-
control and capable when the goods were produced. Yet another customer may require
a sampling inspection of the consignments using plans from standard sampling tables (e.g., ANSI/ASQ Z1.4-2013), or a customer may require a 100% inspection of the
supplies because of their criticality to the performance of the final product. Whatever
the mode for acceptance, it must be made clear in the original PO.
Specifications can range from a mere statement of a dimension of a part on a
drawing—along with the allowable tolerance (e.g., 2.000 ± 0.001 in.)—to an elaborate
description of the testing procedure and the acceptable range of test results. The use
of national and trade association standards, such as those of the American National
Standards Institute (ANSI) and American Society for Testing Materials (ASTM), is
common practice for specifying material characteristics and acceptable workmanship.
There are, however, many materials for which no commonly accepted standards exist.
It is not uncommon for the buyer to specify the material in terms of its performance
when used, and the supplier then determines its composition or design and how it is
made and tested. There are also proprietary products that cannot be specified except by
the name given to them by the manufacturer (e.g., Pentium III or COREX coating).
Specifying the reliability of parts and subassemblies is another challenge.
Compressors, cooling fans, printed circuit boards, and belts and hoses have require-
ments for the length of time they will perform without failure so that the final product
can meet reliability goals. The part reliability goals should be obtained from the sys-
tem reliability goals through the apportionment exercise, and they should be specified
on the purchase documents in terms of minimum mean-time-to-failure (MTTF) or
maximum failure-rate requirements.
Providing complete specifications for the parts and materials is important—but
it is also challenging. It requires education and training on the part of buyers who
should be able to understand the properties of materials and how they are used in
product design and production processes. This is the reason why increasing numbers
of engineering organizations employ technically qualified people in their purchasing
departments towards the goal of procuring quality supplies.
An initial audit, followed by periodic audits, of a supplier’s facilities to evaluate their quality system is one way of assuring the quality of supplies. The criteria used in the audit are the same as those listed earlier under the criteria for selecting a supplier.
The total cost of a procured item includes the cost of material, parts, shipping and storage, tools, labor, training, inspection,
and testing; the cost of ownership, such as routine testing and maintenance; the cost
of downtime; the cost of energy; the cost of environmental control; and the cost of
disposal when the product life ends. Under the supply chain management model, the
decisions are made based not on the cost of a particular exchange of goods between a
supplier and a customer, but on the total cost of ownership, which includes the costs
incurred by all suppliers and customers during preproduction, production, and use of
the product.
The supplier relationship under supply chain optimization is to be organized and managed as a trilogy, patterned after Juran’s quality trilogy (see Chapter 9), with
the following three components:
• Planning for the supplier relationship
• Control for the supplier relationship
• Improvement for the supplier relationship
All activities under the three phases are to be performed by a cross-functional team
made up of a customer’s purchasing, operations, quality, and financial representatives
collaborating with a similar team of representatives “in mirror functions” of the sup-
plier. This collaboration is especially important during the “improvement” stage.
7.6.2 Planning
During the planning phase for the supplier relationship, a thorough understanding
of the needs of the customer is first obtained. Next, available sources of supplies are
researched through industry databases, company records, and other such avenues.
Data regarding costs of current expenditure as well as the current total cost of own-
ership are generated. Based on these, recommendations for consolidation of needs,
sourcing strategy, and supplier base reduction are made.
7.6.3 Control
Measures for evaluating the performance of the supply chain are defined; these include measures of quality (e.g., percentage rejects), warranty claims, on-time delivery, and financial and environmental effects. The minimum standards for
the performance measures are specified based on customer requirements and bench-
marking on best of class. The cross-functional teams then evaluate current suppliers
based on their performance against the set standards. This evaluation will include an
assessment of the suppliers’ quality system, business plans (e.g., capacity, know-how,
facilities, and financial results), and history of product quality and associated service.
Suppliers who do not measure up to the standards are eliminated, thus reducing the
supplier base.
7.6.4 Improvement
The supply chain performance must be continuously reviewed for additional opportu-
nities to create value. Improvement is expected to occur in a five-step progression.
Impressive results from the use of supply chain management programs have been reported: a 20% to 70% reduction in variability (quality improvement), a 30% to 90% reduction in cycle time, a 15% to 30% reduction in waste from the cost of poor quality, an increase in research and development (R&D) resources by a factor of three or more, and an overall reduction of risk from sharing among the links (Donovan and Maresca 1999).
The MIL-STD-105E tables have since been adopted into a civilian standard, published as ANSI/ASQ Z1.4 (R 2013), which has sampling plans that are identical to those in the military standard. We continue to use the tables from
the military standard as they are available in the public domain. Those who want to
buy a copy of the standard tables for their business use, however, are advised to look
for the ANSI standard, as the MIL STD is not available for sale any more.
Generally, a sampling plan is specified by the size of the sample to be chosen from
a submitted lot, and the rules for accepting or rejecting the lot based on the number of
defectives found in the sample.
The available sampling plans can be categorized into two major categories:
1. Sampling plans for attribute inspection
2. Sampling plans for measurement (or variable) inspection
Attribute inspection is done based on characteristics such as appearance, color, feel,
and taste, and it results in the classification of products into categories such as good/
bad, bright/dark, tight/loose, smooth/rough, and so on. The data from attribute inspec-
tion will be in counts such as 3 of 20 bad, 12% too tight, and so forth. In measurement
inspection (some call this variable inspection), however, a characteristic is actually
measured using an instrument. Measuring the length of shafts, the weight of sugar in
bags, or the strength of bolts will yield measurement data such as 2.2 in., 48.63 lbs.,
45,000 psi, and so on. The sampling plans for attribute inspection will specify the
number of defectives that can be tolerated in a sample of specified size for the lot to be
accepted. They are usually easy to understand and implement. The sampling plans for
measurements, however, usually require the calculation of an average, range, or stan-
dard deviation (or a function of them) before deciding to accept or reject a lot. Thus,
implementation of sampling by measurement is rather complicated and may require
specially trained personnel. However, measurement sampling plans are more efficient
in the sense that they require less sampling compared with attribute plans.
Under each of the above two categories are single sampling plans, double sampling
plans, multiple sampling plans, and continuous sampling plans, based on the number
of samples taken per lot submitted (see Figure 7.2).
Figure 7.2 Categories of sampling plans (shaded plans are covered in this chapter).
The continuous sampling plans are suitable when inspecting products at the
end of a continuously producing line without the need for grouping products into
lots. The double or multiple sampling plans, which require more than one sam-
ple per lot, are more efficient in that they require less amount of sampling com-
pared to the single sampling plans, which require just one sample per lot. Double
and multiple sampling plans will be needed where inspection is expensive, either
from units destroyed during inspection or from time-consuming inspection pro-
cedures. However, single sampling plans are simple and easy to learn and use. We
will restrict our discussions here to single and double sampling plans for attribute
inspection only. Most sampling needs, however, can be met using these plans,
which are the more commonly used plans in industry. It is necessary to understand
how these plans are created, the statistics fundamentals behind them, and their
strengths and weaknesses, in order to use them correctly. Readers interested in
other plans should refer to books such as Bowker and Lieberman (1972), Grant and
Leavenworth (1996), or Schilling (1982).
A single sampling plan (SSP) involves selecting one (random) sample per lot submit-
ted for inspection, inspecting the units in the sample and then determining whether
the lot should be accepted or rejected based on the number of defectives found in
the sample. An SSP is defined by two numbers, n and c, where n is the sample size
and c is the acceptance number. The scheme of an SSP is as follows. Suppose a single
sampling plan has n = 12 and c = 1. This means that a sample of 12 items should be
taken (randomly) from each lot submitted and each item in the sample categorized
as acceptable (i.e., conforming to specifications) or defective (i.e., not conforming to
specifications). If the number of defectives found in the sample is not more than one,
then the lot is accepted; otherwise, the lot is rejected.
7.7.2.1 The Operating Characteristic Curve When we are faced with choosing a single
sampling plan for a given situation, we have numerous alternative plans to choose
from. For example, any of the combinations (10, 0), (12, 0), or (24, 2) describes an SSP, with the first number representing the sample size and the second
representing the acceptance number. The question then arises: which sampling plan is
good for the situation at hand?
The operating characteristic curve (OC curve) is the criterion used to decide which
plan is the most suitable for a given situation. Every sampling plan has an OC curve
that tells us how the plan will accept or reject lots of different quality. The graph in
Figure 7.3 is an example of an OC curve, in which the x-axis represents the qual-
ity of the lot, in proportion defectives p, and the y-axis represents the probability of
acceptance by the sampling plan Pa. For example, the sampling plan whose OC curve
is shown in Figure 7.3 will accept lots with 1% defectives with probability 0.9,
and lots with 8% defectives with probability 0.1. The OC curve tells us how strict, or how lenient, a sampling plan is in accepting good lots and rejecting bad lots. We should know how the OC curves are computed.

Figure 7.3 An example OC curve, with the probability of acceptance Pa on the y-axis and the lot fraction defective p on the x-axis.
For any given sampling plan, the OC curve can be drawn by computing the prob-
abilities of acceptance by the plan of lots of different quality, with the quality of a lot
being denoted by the fraction of defectives in that lot. The method of calculating the
probability of acceptance by a sampling plan is explained below using an example. For
a practical understanding, the probability of acceptance can be interpreted as follows.
If a sampling plan has probability 0.9 of accepting a lot with a certain fraction defec-
tives, it means that if 100 such lots are submitted for inspection, about 90 of them will
be accepted and about 10 will be rejected. A desirable sampling plan will have high
probability (e.g., 0.95 or 0.99) of accepting good lots, and low probability (e.g., 0.10
and 0.05) of accepting bad lots.
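This frequency interpretation can be checked with a small simulation. The following Python sketch is our own illustration (it assumes lots large enough that each sampled item is defective independently with probability p); it estimates the proportion of accepted lots for the plan (20, 2) when lots contain 6% defectives, and the estimate comes out near the exact binomial value of about 0.885.

import random

def estimate_pa(n, c, p, num_lots=100_000, seed=1):
    # Simulate submitting many lots of quality p to the plan (n, c) and
    # return the fraction of lots that the plan accepts.
    rng = random.Random(seed)
    accepted = 0
    for _ in range(num_lots):
        defectives = sum(rng.random() < p for _ in range(n))
        if defectives <= c:
            accepted += 1
    return accepted / num_lots

# For the plan (20, 2) and lots with 6% defectives, roughly 88% of the
# submitted lots are accepted (the exact binomial value is about 0.885).
print(estimate_pa(n=20, c=2, p=0.06))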
7.7.2.2 Calculating the OC Curve of a Single Sampling Plan Suppose that a lot with
p fraction defectives is submitted to a single sampling plan with sample size n and
acceptance number c. The probability of acceptance of this lot with p fraction defec-
tives by the single sampling plan is given by Pa(p) = P(D ≤ c), where D is the number
of defective units in the sample. The random variable D is binomially distributed, with
parameters n and p, and the above probability is given by the binomial sum:
P_a(p) = \sum_{x=0}^{c} \binom{n}{x} p^{x} (1 - p)^{n-x}
Hence, we calculate the Pa corresponding to several chosen p values and plot Pa versus
p to obtain the OC curve.
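In practice this sum can be evaluated with any statistics package. The following is a minimal Python sketch of the calculation (our own illustration, using the scipy library; the grid of p values is arbitrary):

from scipy.stats import binom

def oc_curve(n, c, p_values):
    # Pa(p) = P(D <= c), with D ~ Binomial(n, p); one value per lot quality p.
    return [binom.cdf(c, n, p) for p in p_values]

p_grid = [0.02, 0.04, 0.06, 0.08, 0.10, 0.15, 0.20]
for p, pa in zip(p_grid, oc_curve(20, 2, p_grid)):
    print(f"p = {p:.2f}  Pa = {pa:.3f}")

Note that these exact binomial values differ slightly from the Poisson-approximated values used in Example 7.1 below (for instance, about 0.885 versus 0.879 at p = 0.06 for the plan (20, 2)).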
We want to point out here that in quality control literature (e.g., Duncan 1974),
the OC curve we defined above is referred to as the “Type-B” OC curve, because
the probabilities are calculated using the binomial distribution on the assumption
that sampling is done from “large” lots. If the sampling is not from large lots, hyper-
geometric distribution should be used to calculate the probabilities of acceptance. The
OC curve is then called the “Type-A” OC curve. In our discussion here, we will
assume that the sampling is always from large lots and that all the OC curves calcu-
lated are Type B. A large lot is usually defined as a lot having 30 or more items. The
example below shows how the OC curve is drawn.
Example 7.1
Calculate the OC curves of the three SSPs (20, 2), (20, 1), and (20, 0). Assume that
the samples are taken from a large lot.
Solution
The calculations are shown in Table 7.1a below, and the OC curves are shown in Figure 7.4a.
To explain the calculations, take, for example, the calculation of the probability of
acceptance of 6% lots (p = 0.06) by the plan (20, 2). The probability of acceptance Pa
is, using the Poisson approximation to the binomial, the probability that a Poisson variable with a mean of np = 20 × 0.06 = 1.2 is less than or equal to 2. That is, Pa = P(Po(1.2) ≤ 2). This probability can be read off the
cumulative Poisson table (Table A.5 in the Appendix) as 0.879.
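The table look-up can also be reproduced directly in Python (a sketch using scipy's Poisson distribution; not part of the original text):

from scipy.stats import poisson

# Pa for the plan (20, 2) at p = 0.06, using the Poisson approximation
# with mean np = 20 * 0.06 = 1.2, as read from the cumulative Poisson table.
pa = poisson.cdf(2, 1.2)
print(round(pa, 3))   # 0.879, matching the entry in Table 7.1a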
In this example, we calculated the OC of three different sampling plans with the
same sample size but with different acceptance numbers. Figure 7.4a shows that
for a given sample size, the OC curve of an SSP with a smaller acceptance num-
ber becomes steeper, compared to that of a plan with a larger acceptance number. A
steeper curve indicates that the sampling plan will be more discriminating between
good and bad lots.
The next example shows how the OC of an SSP changes when the sample size is
changed keeping the acceptance number the same.
Table 7.1a Calculating the OC Curves of SSPs with the Same n but with Different cs
p 0.02 0.04 0.06 0.08 0.10 0.15 0.20 0.25 0.30 0.35
np 0.4 0.8 1.2 1.6 2.0 3.0 4.0 5.0 6.0 7.0
Pa(20, 2) 0.992 0.952 0.879 0.783 0.676 0.423 0.238 0.124 0.061 0.029
Pa(20, 1) 0.938 0.808 0.662 0.524 0.406 0.199 0.091 0.04 0.017 0.007
Pa(20, 0) 0.670 0.449 0.301 0.201 0.135 0.049 0.018 0.006 0.002 0.000
Figure 7.4 (a) OC curves of single sampling plans with varying acceptance numbers. (b) OC curves of single sampling
plans with varying sample sizes.
Example 7.2
Draw the OC curves of the SSPs (30, 2) and (40, 2), and compare them with that
of (20, 2).
Solution
The OC curves of SSPs with varying sample sizes with a given acceptance number
are calculated in Table 7.1b and the graphs are shown in Figure 7.4b. They show that increasing the sample size also makes the OC curve steeper (i.e., the plan more
discriminating). We note, however, that increasing the sample size does not produce
as dramatic a change in discrimination as does decreasing the acceptance number.
7.7.2.3 Designing an SSP From the above discussion, we see how the OC curve for a
sampling plan is computed and how it shows the discriminating ability of a sampling
plan. Next, we will see how to select an SSP for a given OC curve.

Table 7.1b Calculating the OC Curves of SSPs with the Same c but with Different ns
p 0.02 0.04 0.06 0.08 0.10 0.15 0.20 0.25 0.30 0.35
np = 20p 0.4 0.8 1.2 1.6 2.0 3.0 4.0 5.0 6.0 7.0
Pa(20, 2) 0.992 0.952 0.879 0.783 0.676 0.423 0.238 0.124 0.061 0.029
np = 30p 0.6 1.2 1.8 2.4 3.0 4.5 6.0 7.5 9.0 10.5
Pa(30, 2) 0.976 0.879 0.730 0.569 0.423 0.173 0.061 0.020 0.006 0.002
np = 40p 0.8 1.6 2.4 3.2 4.0 6.0 8.0 10.0 12.0 14.0
Pa(40, 2) 0.952 0.783 0.569 0.380 0.238 0.061 0.013 0.002 0.001 0.000

Often, a purchaser and a supplier agree on an OC curve to guide them in determining how the supplies
will be inspected before acceptance. The quality engineer is then asked to select the
sampling plan that will have an OC curve “equal” to the OC curve that has been
agreed to. First, we will see how to select a “suitable” OC curve for a given situation;
then, we will see how to select an SSP that will “fit” the chosen OC curve.
Figure 7.5 (a) The ideal OC curve. (b) Specifying a practical OC curve. (c) Designing a single sampling plan for a given
OC curve.
The producer’s risk, denoted α, is a small number, and it represents the risk the producer accepts of their good supplies (supplies at the AQL, or acceptable quality level) being rejected. The LTPD (lot tolerance percent defective) is larger than the AQL, and it represents the proportion of defectives that the customer can hardly tolerate and would like to reject. The consumer’s risk β is also a small number, such as 0.05 or 0.1, which represents the risk the consumer is willing to take that lots they consider to be of poor quality are accepted. These risk parameters, (AQL, 1 – α) and (LTPD, β), must
be agreed to between the buyer and the supplier. These parameters define the two
points on the OC curve that is desired. The single sampling plan with an OC curve
that will pass through the two points (AQL, 1 – α) and (LTPD, β) is then selected
as described below.
7.7.2.5 Choosing a Single Sampling Plan Let us denote AQL by p1 and LTPD by p2 (see
Figure 7.5c). The problem is to determine the values of n and c such that:
P(D \le c \mid p = p_1) = 1 - \alpha
P(D \le c \mid p = p_2) = \beta
where D is the number of defectives in a sample of size n drawn from the lots having
the respective proportion defectives. Assuming a large lot, D is binomially distributed,
and the above probabilities can be written as:
\sum_{x=0}^{c} \binom{n}{x} p_1^{x} (1 - p_1)^{n-x} = 1 - \alpha
\sum_{x=0}^{c} \binom{n}{x} p_2^{x} (1 - p_2)^{n-x} = \beta
Therefore, we have two equations in the two unknowns n and c. These are nonlinear equations, and they
cannot be solved in a closed form. Numerical root-finding methods using a computer
may be needed; however, nomographs are available to facilitate solving these equa-
tions. One such nomograph is given in Figure 7.6a, which is a nomograph of cumu-
lative binomial probabilities. For any given n and c, the graph gives the following
cumulative probability for a given p:
$$P_a = P(D \le c) = \sum_{x=0}^{c} \binom{n}{x}\, p^{x} (1 - p)^{n-x}$$
We can recognize this as the probability of acceptance of lots with p fraction defectives by an SSP defined by (n, c). For example, suppose n = 20, c = 1, and p = 0.02. To
find the above cumulative probability, find the intersection of the lines representing
n = 20 and c = 1 in the main graph. Draw a line through that intersection and the
value of p = 0.02 on the scale on the left-hand side of the graph. The intersection of
this line with the scale on the right-hand side of the graph gives Pa = 0.94. Because
designing the SSP is equivalent to finding the value of (n, c) that is common to two
given values of (p, Pa(p)), the graph can be used to design SSPs, as shown in the fol-
lowing example.
Example 7.3
Design an SSP for the following data: AQL = p1 = 0.02, Pa(p1) = 0.95, LTPD = p2 =
0.08, and Pa(p2) = 0.1.
[Figure 7.6 appears here: Larson's nomograph of the cumulative binomial distribution, with scales for the probability of occurrence p (left), the probability Pa of c or fewer occurrences in n trials (right), the sample size n, and the acceptance number c. Panel (a) shows the nomograph; panel (b) shows the lines drawn when designing an SSP.]
Figure 7.6 (a) Nomograph to select single sampling plans for a given OC curve. (b) Designing an SSP using the nomograph. (Reproduced from Larson, H. R., Industrial Quality Control 23 (6), 270–278, 1966, with permission of American Society for Quality, Milwaukee.)
Solution
Draw a line connecting p1 on the vertical scale on the left-hand side of the graph,
and Pa(p1) on the vertical scale on the right-hand side. Draw another similar line
connecting p2 and Pa(p2). Read the values of n and c at the intersection of the two
lines. For the example, n = 98, and c = 4 (Figure 7.6b).
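Where a computer is at hand, the same two-point design can be carried out without the nomograph by a direct search over c and n. The sketch below is an illustration added here (it assumes SciPy and uses the exact binomial distribution rather than the nomograph); for the data of Example 7.3 it should return essentially the nomograph answer, n = 98 and c = 4.

```python
from scipy.stats import binom

def design_ssp(p1, alpha, p2, beta, n_max=2000):
    """Find a single sampling plan (n, c) whose OC curve passes at or above
    (p1, 1 - alpha) and at or below (p2, beta)."""
    for c in range(0, 50):
        for n in range(c + 1, n_max):
            if binom.cdf(c, n, p2) <= beta:           # consumer's risk met at p2
                if binom.cdf(c, n, p1) >= 1 - alpha:  # producer's risk met at p1
                    return n, c
                break   # a larger n only lowers Pa(p1), so try the next c
    return None

print(design_ssp(p1=0.02, alpha=0.05, p2=0.08, beta=0.10))   # (98, 4)
```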
The availability of standard tables, such as the MIL-STD-105E or its modern ver-
sion ANSI-Z1.4, makes it even easier to choose sampling plans without the use of
nomographs such as the one presented above. Of course, these tables have been pre-
pared using the procedure outlined above.
The MIL-STD-105E tables give plans based on AQL values and a predetermined
(1 − α) value. Similarly, sampling plans known as Dodge-Romig plans will have OC
curves passing through given (LTPD, β) points. We will discuss here only the MIL-
STD-105E plans, after describing the double sampling plan and defining the average
outgoing-quality limit (AOQL). The MIL-STD-105E will provide sampling plans that
will meet most of the needs of a quality engineer.
A double sampling plan (DSP) requires taking, at most, two samples per lot for decid-
ing whether to accept or reject the lot. A DSP is defined by five numbers (compared to
the two that are needed to define an SSP). The scheme of a DSP is described in Figure
7.7. “Double and multiple sampling plans reflect the tendency of many experienced
inspectors to give a questionable lot an additional chance” (Schilling 1982). The five
numbers used to specify a DSP are as follows:
n1—the size of the first sample
c 1—the acceptance number for the first sample
r 1—the rejection number for the first sample
n2—the size of the second sample
c 2—the acceptance number for both samples
Some authors (e.g., Montgomery 2013) describe a DSP using only four numbers instead of five, because they assume some relationship between r 1 and c 2, such as r 1 = c 2. This assumption simplifies the mathematics of designing the sampling plan, but it is not universally accepted. The MIL-STD-105E, for example, does not make such an assumption and uses five numbers to define DSPs. Here, we follow the convention used by Schilling (1982) and use five numbers.
7.7.3.1 Why Use a DSP? The DSP will require, on average, a smaller amount of inspec-
tion than a comparable SSP, comparable in the sense that both have about “equal”
OC curves. We will later quantify the amount of inspection needed by a sampling
plan using the Average Sample Number (ASN), and show how the DSP compares
favorably with the SSP in this respect. Where inspection results in the destruction
[Figure 7.7 appears here: a flowchart of the DSP scheme. If D1 ≤ c1, accept the lot; if D1 ≥ r1, reject the lot; otherwise, take the second sample and accept the lot if D1 + D2 ≤ c2, or reject it otherwise.]
of units or the cost of inspection is high for this and/or other reasons, a double sam-
pling plan will be preferred, although administering a DSP is more difficult and may
require personnel with additional training.
7.7.3.2 The OC Curve of a DSP In the case of the DSP, there is more than one OC
curve. In fact, a DSP has a primary OC curve and two secondary OC curves. The
primary OC curve shows the relationship between the quality of a submitted lot and
the probability of it being accepted by the overall plan; that is, accepted either in the
first or the second sample. One of the secondary OC curves shows the relationship
between lot quality and the probability of acceptance in the first sample; the other
shows the relationship between lot quality and the probability of rejection in the first
sample. Figure 7.8 shows an example of the three OC curves of a DSP.
The two secondary OC curves are simply the OC curves of SSPs with sample size n1,
with acceptance number c1 in one case and (r 1 − 1) in the other case. However, the calcu-
lation of the primary OC curve is a bit more involved and is illustrated using an example.
Example 7.4
[Figure 7.8 appears here: the primary OC curve (probability of acceptance) of a DSP together with its two secondary OC curves, the probability of acceptance in the first sample and the probability of rejection in the first sample.]
Solution
The primary OC curve of the DSP is calculated by identifying all possible events in which acceptance of the lot could occur and then finding the probability that acceptance occurs through any one of the events. Referring to Table 7.1c, there are three possible ways in which acceptance of the lot could occur: through Event A, B, or C. Event A occurs if the number of defectives in the first sample is D1 = 0; there is then no need for the second sample. Event B occurs when the number of defectives in the first sample is D1 = 1 and the number of defectives in the (required) second sample is D2 ≤ 1 (so that the total in both samples is at most 2). Similarly, Event C occurs if the number of defectives in the first sample is D1 = 2 and the number of defectives in the (required) second sample is D2 = 0. The reader can verify that there is no other possible event through which acceptance of the lot can occur.
The calculation of the probabilities of acceptance for a lot with 5% defectives (p = 0.05) is carried out as in Table 7.1c.
The calculations in the above table using cumulative Poisson probabilities are fairly
straightforward except for the following. To find the probability that the number of
defectives in a sample (e.g., the first sample) exactly equals one, we calculate it as the
difference of the two cumulative probabilities from the Poisson tables. For example:
P ( D1 = 1) = P ( D1 ≤ 1) − P ( D1 ≤ 0 )
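The event-by-event calculation generalizes to any DSP: the lot is accepted outright if D1 ≤ c1, and, for each D1 between c1 + 1 and r1 − 1, it is accepted if the second sample keeps the combined count within c2. The short Python sketch below is an illustration added here, using the Poisson approximation as in the text's tables; applied to the DSP of Example 7.5 below at p = 0.01, it reproduces the probability of acceptance of 0.966 obtained in Example 7.6.

```python
from scipy.stats import poisson

def pa_dsp(n1, c1, r1, n2, c2, p):
    """Primary probability of acceptance of a double sampling plan,
    using the Poisson approximation with means n1*p and n2*p."""
    pa = poisson.cdf(c1, n1 * p)              # accepted outright on the first sample
    for d1 in range(c1 + 1, r1):              # second sample needed when c1 < D1 < r1
        p_d1 = poisson.pmf(d1, n1 * p)
        pa += p_d1 * poisson.cdf(c2 - d1, n2 * p)   # combined count stays within c2
    return pa

print(round(pa_dsp(50, 1, 3, 100, 3, 0.01), 3))   # about 0.966
```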
The average sample number of a sampling plan is the average number of units that
must be inspected per lot to reach an accept/reject decision. Suppose that an SSP is
employed in inspecting a certain lot. The ASN for the plan would simply be the sample
size of the plan as long as the inspection is not curtailed (i.e., inspection is not stopped
when the number of defectives exceeds the acceptance number). Thus, the ASN of an
SSP is independent of lot quality. On the other hand, if a DSP is used, some lots will
be accepted or rejected in the first sample, and some may require two samples to reach
a decision—even if all the lots have the same quality. The ASN of a sampling plan
reflects how much inspection is required when using the sampling plan. For the DSP,
it is a function of lot quality and is calculated using the following formula:
P1 = P(lot accepted or rejected in the first sample)
   = P(lot accepted in first sample) + P(lot rejected in first sample)
   = P(D1 ≤ c1) + P(D1 ≥ r1)
   = P(D1 ≤ c1) + 1 − P(D1 ≤ r1 − 1)
where D1 is the number of defectives in the first sample. The ASN is then

ASN = n1P1 + (n1 + n2)(1 − P1)

This formula computes the expected value, or average, of the random variable that represents the number of units inspected per lot of a given quality p in order to reach a decision. The random variable takes two possible values, n1 and (n1 + n2), the former with probability P1 and the latter with probability (1 − P1). The value of the ASN therefore lies between n1 and (n1 + n2). The following example illustrates the calculation of the ASN for a DSP using the formula above.
Example 7.5
Calculate the ASN for the following DSP with n1 = 50, c 1 = 1, r 1 = 3, n2 = 100, and
c 2 = 3 when used to inspect lots of different quality as listed in the table below.
Solution
The ASN calculations are done in the Table 7.1d. In this table,
P1 = P ( D1 ≤ c1 ) + 1 − P ( D1 ≤ r1 − 1) = P ( D1 ≤ 1) + 1 − P ( D1 ≤ 2)
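A computational sketch of this calculation (an illustration added here; the layout of Table 7.1d itself is not reproduced) evaluates the ASN of the plan of Example 7.5 over a range of lot qualities and gives values close to those plotted in Figure 7.9 (about 57.6 at p = 0.01).

```python
from scipy.stats import poisson

def asn_dsp(n1, c1, r1, n2, p):
    """Average sample number of a DSP (no curtailment), Poisson approximation."""
    # Probability that a decision is reached on the first sample alone
    p1 = poisson.cdf(c1, n1 * p) + 1 - poisson.cdf(r1 - 1, n1 * p)
    return n1 * p1 + (n1 + n2) * (1 - p1)

for p in (0.01, 0.02, 0.05, 0.08, 0.10):
    print(p, round(asn_dsp(50, 1, 3, 100, p), 1))
```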
The next example compares the ASN of a DSP with that of an “equivalent” SSP. We
design the equivalent SSP by first finding the probabilities of acceptance of the DSP
at certain chosen AQL and LTPD values, and then finding the SSP that will have the
same (or approximately the same) probabilities for acceptance at these selected AQL
and LTPD points.
Example 7.6
Solution
a. To find Pa(0.01) of the DSP with n1 = 50, c 1 = 1, r 1 = 3, n2 = 100, and c 2 = 3.
EVENT | EVENTS LEADING TO ACCEPTANCE IN FIRST SAMPLE OF SIZE 50 (n1p = (50)(0.01) = 0.5) | EVENTS LEADING TO ACCEPTANCE IN SECOND SAMPLE OF SIZE 100 (n2p = (100)(0.01) = 1.0) | PROB. OF EVENTS USING POISSON APPROXIMATION
A | D1 ≤ c1 = 1 | No second sample | 0.909
B | D1 = 2 | AND D2 ≤ 1 | (0.985 − 0.909) × (0.735) = 0.056
(A or B) | | | 0.966
p1 = 0.01, (1 − α) = 0.966
p2 = 0.08, β = 0.092
The ASNs for the DSP and the SSP are shown together in Figure 7.9. We see that the DSP has a smaller ASN than the SSP for some lot qualities, and a larger ASN for other lot qualities. Note, however, that in the quality range of the AQL, where we would expect the supplier to submit lots, the ASN of the DSP is smaller. This is the advantage of the DSP over the SSP, and it is why the DSP is preferred when inspection is expensive.
The MIL-STD-105E standard provides SSPs, DSPs, and multiple sampling plans, and
it is the most popular source of sampling plans among industrial users. (The multiple
[Figure 7.9 appears here: the ASN (roughly 54 to 77) plotted against lot quality p from 0.01 to 0.12, for the DSP and for the equivalent SSP.]
Figure 7.9 Graph of the ASN for a DSP and an equivalent SSP.
sampling plans, which are not covered in this book, use more than two samples per lot
to arrive at an accept/reject decision. Their design is based on an extension of the logic
used in designing DSPs.) The plans in the military standard are called AQL plans,
because they are all designed such that their OC curves will pass through a chosen
(AQL, 1 − α) point. Also, only an AQL value and the size of the lot to be inspected
are needed to choose a sampling plan for a given situation. The value of (1 − α), or the
probability of acceptance corresponding to a chosen AQL, is between 0.91 and 0.99
for all plans in the military standard.
These sampling plans were created during World War II to help the U.S. War
Department procure quality material for the war effort. The tables, which have
appeared under different names since 1942, were published for the first time in 1950 as
the MIL-STD-105A. The last revision, the MIL-STD-105E, was published in 1989
(U.S. Department of Defense 1989). This revision is still available for free from several
websites. An equivalent standard, ANSI/ASQ Z1.4-2003 (R2013), has been published as an American National Standard by the ASQ, since the Department of Defense discontinued supporting the military standard. The discussion that follows nevertheless uses the MIL-STD as the reference, because the standard remains available and legitimate; the discussion applies fully to the ANSI/ASQ standard as well, since the latter uses the same procedures and tables as the MIL-STD.
The MIL-STD-105E provides for three levels of inspection—normal, tightened,
and reduced. The normal level is the level at which the inspection will be started when
a supplier begins submitting supplies. The level of inspection will be changed, how-
ever, if the performance of the supplier changes. If the quality of the supplies deterio-
rates, then “tightened inspection,” with its stricter acceptance criteria, will be imposed
as a way of pressuring the supplier to improve their quality. If the quality performance
is good, then the level will be changed to “reduced inspection,” which entails a smaller
amount of inspection, as a way of rewarding good performance. Switching rules are
provided by the standard to determine when a supplier’s performance is good or bad.
Figure 7.10 has been drawn to represent graphically the switching rules described in
the standard in words.
It is claimed that the tables with the three levels of inspection and the switching
rules constitute a system for obtaining quality in supplies; they are not just isolated
sampling plans. When properly used, with the inspection results being fed back to
the suppliers, they can be used to improve the processes, prevent the production of
defective units, and achieve improved product quality. The sampling inspection tables
are a valuable source of sampling plans for those situations in which supplies must be
inspected using sampling inspection before they are accepted.
The selection of a sampling plan from the military standard starts with Table 7.2
(Table 7.1 of the military standard), which is called the “sample size code table.” This
table gives the sample size based on lot or batch size. The examples below illustrate the
selection of plans from the tables of the standard.
[Figure 7.10 appears here: a flowchart of the switching rules. Inspection starts at the NORMAL level; it is switched to the TIGHTENED level when 2 out of the preceding 2, 3, 4, or 5 lots are rejected, and to the REDUCED level when the preceding 10 lots have been accepted*. Inspection is DISCONTINUED, until the process is rectified, when 5 consecutive lots are rejected on tightened inspection.]
* In addition, the total number of defectives in the samples from the preceding 10 lots should not exceed the limiting number specified in a special table (Table VIII of the MIL-STD). Furthermore, production should be at a steady rate, and reduced inspection should be considered desirable by the responsible authority.
** In addition, the normal level will be reinstated if a lot is accepted because the number of defectives from all samples is between the accept and reject numbers for double or multiple sampling plans. Also, the normal level will be reinstated if production becomes irregular or the responsible authority considers that conditions warrant that normal inspection be instituted.
Figure 7.10 Switching rules for military standard sampling plans (prepared from the "switching procedures" given in the MIL-STD-105E).
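The switching logic can be expressed compactly in code. The sketch below is an illustration based only on the rules summarized in Figure 7.10; the return from tightened to normal inspection (after 5 consecutive lots are accepted) follows the standard's switching procedures, and the additional conditions in the footnotes above are omitted for brevity.

```python
def next_level(level, history):
    """Simplified sketch of the MIL-STD-105E switching rules.
    `history` is the accept/reject record (True = accepted), most recent last."""
    if level == "normal":
        if history[-5:].count(False) >= 2:               # 2 of the last 5 lots rejected
            return "tightened"
        if len(history) >= 10 and all(history[-10:]):    # 10 consecutive lots accepted (*)
            return "reduced"
    elif level == "tightened":
        if len(history) >= 5 and not any(history[-5:]):  # 5 consecutive lots rejected
            return "discontinue"
        if len(history) >= 5 and all(history[-5:]):      # 5 consecutive lots accepted
            return "normal"
    elif level == "reduced":
        if history and not history[-1]:                  # a rejected lot ends reduced inspection (**)
            return "normal"
    return level

print(next_level("normal", [True, False, True, False, True]))   # -> tightened
```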
7.7.5.1 Selecting a Sampling Plan from MIL-STD-105E A sampling plan from the mili-
tary standard is selected through the following steps:
1. Enter Table 7.2 with lot size, and pick the sample size code letter. Use General
Inspection Level II if no special conditions are specified.
2. Choose the appropriate table based on whether it is single, double or multiple
sampling and whether a normal, tightened, or reduced level is required. The
tables are titled according to level of inspection and whether the plans are
single, double or multiple sampling plans. The SSP and DSP tables are repro-
duced in this text as Tables 7.3 through 7.8.
The military standard gives the OC curves for every plan listed in the tables. The OC curves are provided both as tables and as graphs. This helps in understanding how a chosen plan will perform against lots of various quality. An example page containing OC curves, as graphs and tables, is reproduced in Figure 7.11.
Example 7.7
Choose the SSPs for normal, tightened, and reduced inspection if the lot size is 200
and the AQL is 2.5%.
Solution
From Table 7.2, the code letter for the given lot size is G. The SSP for normal
inspection from Table 7.3 is:
(n, c , r ) : ( 32, 2, 3)
where n, c, and r indicate sample size, acceptance number, and rejection number,
respectively. Similarly, the SSP for tightened inspection from Table 7.4 is (32, 1, 2),
and the plan for reduced inspection from Table 7.5 is (13, 1, 3).
Note that for the normal and tightened SSPs, the rejection number equals the
acceptance number + 1. For the reduced inspection, however, this is not so; there is a
gap between the acceptance number and the rejection number. If the number of defec-
tives in a chosen sample falls between the acceptance and rejection numbers, the lot
will be accepted, but normal inspection will be restored from the next lot. This hap-
pens only when using a “reduced” inspection plan.
Example 7.8
Choose DSPs for normal, tightened, and reduced inspection if the lot size is 200
and the AQL is 2.5%.
[Table 7.3 Single Sampling Plans for Normal Inspection (MIL-STD-105E—Table II-A) appears here. For each sample size code letter (A through R) and AQL, the table gives the sample size n and the acceptance and rejection numbers (Ac, Re). Arrows direct the user to the first plan above or below when no plan is listed for a cell; if the sample size equals or exceeds the lot or batch size, 100% inspection is done.]
[Table 7.4 Single Sampling Plans for Tightened Inspection (MIL-STD-105E—Table II-B) appears here, in the same layout as Table 7.3: sample size n and acceptance/rejection numbers (Ac, Re) by sample size code letter (A through S) and AQL.]
[Table 7.5 Single Sampling Plans for Reduced Inspection (MIL-STD-105E—Table II-C) appears here: sample size n and acceptance/rejection numbers (Ac, Re) by sample size code letter and AQL. Note (†): if the acceptance number has been exceeded but the rejection number has not been reached, accept the lot but reinstate normal inspection.]
[Table 7.6 Double Sampling Plans for Normal Inspection (MIL-STD-105E—Table III-A) appears here. For each sample size code letter and AQL, the table gives the first and second sample sizes, the cumulative sample size, and the acceptance and rejection numbers (Ac, Re) for each sample. An asterisk (*) means: use the corresponding single sampling plan (or, alternatively, the double sampling plan below, where available). Arrows and the 100%-inspection rule apply as in Table 7.3.]
[Table 7.7 Double Sampling Plans for Tightened Inspection (MIL-STD-105E—Table III-B) appears here, in the same layout as Table 7.6.]
[Table 7.8 Double Sampling Plans for Reduced Inspection (MIL-STD-105E—Table III-C) appears here, in the same layout as Table 7.6. Note (†): if, after the second sample, the acceptance number has been exceeded but the rejection number has not been reached, accept the lot and reinstate normal inspection.]
[Figure 7.11 appears here: a sample page of the MIL-STD-105E for sample size code letter A, showing the OC curves as graphs (Table X-A) and as tabulated values (Table X-A-1): for each AQL, the lot quality (in percent nonconforming, or in nonconformities per hundred units) at which Pa equals 99.0, 95.0, 90.0, 75.0, 50.0, 25.0, 10.0, 5.0, and 1.0 percent, for normal and tightened inspection. The binomial distribution is used for the percent-nonconforming computations and the Poisson for nonconformities per hundred units.]
Figure 7.11 Sample OC curves for MIL-STD-105E sampling plans (MIL-STD-105E, Tables X-A and X-A-1).
Solution
Again, the code letter is G. The DSP for normal inspection from Table 7.6 is:
(n1 , c1 , r1 , n2 , c 2 , r2 ) : ( 20, 0, 3, 20, 3, 4 ).
The DSP for tightened inspection from Table 7.7 is (20, 0, 2, 20, 1, 2), and that
for reduced inspection from Table 7.8 is (8, 0, 3, 8, 0, 4).
The notations used in the above example for sample sizes and acceptance and rejec-
tion numbers are as per their definition given earlier. The gap between the acceptance
number and the rejection number noticed in the case of reduced SSP exists in the
reduced DSP as well. If the total number of defectives from both the first and second
sample falls between c 2 and r 2 when using the reduced inspection plan, then the lot
will be accepted, but normal inspection will be restored with the next lot inspected.
[A figure appears here showing the scheme of rectifying inspection: incoming lots of quality p are inspected by the SSP (n, c); a proportion Pa(p) of the lots is accepted as submitted, while the rejected proportion [1 − Pa(p)] is 100% inspected (detailed), leaving those lots with p = 0 (100% good), so that AOQ = p Pa(p).]
The average outgoing quality (AOQ) is the average quality of the lots (average proportion defective in each lot) leaving the rectifying
inspection system. The AOQ is calculated using the formula:
AOQ = p × Pa( p )
where p is the proportion defectives in the incoming lots and Pa(p) is the probability
of acceptance by the sampling plan, which is the proportion of the original number of
lots accepted by the sampling plan.
To see how this formula gives the AOQ, suppose that 1000 lots of quality p are
submitted to the inspection station. Let the probability of acceptance by the sampling
plan of lots of p quality be Pa(p). Then, 1000Pa(p) of the lots will be accepted at the
inspection station and will each have p fraction defectives. In addition, 1000(1–Pa(p))
of the lots will be rejected and detailed, and they will all have zero defectives in them.
Overall, in the 1000 outgoing lots, there will be p × 1000Pa(p) defectives. Thus, the average proportion of defectives per lot in the outgoing lots will be:

AOQ = [p × 1000Pa(p)]/1000 = p × Pa(p)
Example 7.9
Calculate the AOQ when lots of various quality p are inspected, with rectifying inspection, using the SSP with n = 20 and c = 1.
Solution
The calculations are shown in the table below, and a graph of the AOQ versus p is
shown in Figure 7.13. The Pa(p) is the same as the OC function we calculated for
the same SSP in an earlier example, Example 7.1.
p             0.01   0.02   0.04   0.06   0.08   0.10   0.15   0.20   0.25   0.30   0.35
np = 20p      0.2    0.4    0.8    1.2    1.6    2.0    3.0    4.0    5.0    6.0    7.0
Pa(p)         0.982  0.938  0.808  0.662  0.524  0.406  0.199  0.092  0.040  0.017  0.007
AOQ = pPa(p)  0.010  0.019  0.032  0.040  0.042  0.041  0.030  0.018  0.010  0.005  0.003
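A computational sketch (an illustration added here, again using the Poisson approximation) reproduces this table and locates the maximum of the AOQ curve, i.e., the AOQL discussed below.

```python
import numpy as np
from scipy.stats import poisson

def aoq(n, c, p):
    """Average outgoing quality under rectifying inspection: AOQ = p * Pa(p)."""
    return p * poisson.cdf(c, n * p)

ps = np.linspace(0.001, 0.35, 500)
aoqs = [aoq(20, 1, p) for p in ps]
imax = int(np.argmax(aoqs))
print(round(aoqs[imax], 3), "at p =", round(ps[imax], 3))   # about 0.042 near p = 0.08
```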
The graph of AOQ in Figure 7.13 shows the typical behavior of AOQ. The AOQ is
small (good) with small (good) values for incoming quality p. This is because when the
incoming quality is good, a large proportion of lots will be accepted in the first inspec-
tion, and the small proportion of rejected lots will be detailed. Thus, the AOQ will be
good, because a large proportion of the accepted lots have only a small proportion of
defectives. When the incoming quality is bad, that is, when the value of p is large, many lots will be rejected on first inspection and rectified by detailing; as a result, a large proportion
[Figure 7.13 appears here: a graph of AOQ (0.00 to 0.04) versus incoming lot quality p (0.00 to 0.35).]
of lots will have 100% good quality. The AOQ will be good in this case as well. The
AOQ hits a maximum value at a quality level that is in between the two extremes.
This maximum value of AOQ is called the “average outgoing quality limit,” and it is
used as an index of the performance of sampling plans when a rectifying inspection is
used. It represents the worst possible output quality when using a sampling plan with
rectifying inspection.
The MIL-STD-105E provides tables of factors for calculating the AOQL for sam-
pling plans. Instructions are provided in the tables on how to calculate the AOQL for
different sampling plans using the factors given in the tables. An example of an AOQL
table for single sampling, normal, and tightened inspection plans is shown in Table
7.9. To illustrate how to use this table, suppose an SSP is used for lot size = 200 and
AQL = 2.5%, then the code letter will be G, and the plan will have n = 32 and c = 2.
If this plan is used with a rectifying inspection, then the factor for AOQL from the
Table 7.9 is 4.3%. Using the formula given at the bottom of the table, the exact value
of AOQL would be 4.3(1 – (32/200)) = 3.612%.
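In code, the correction at the bottom of Table 7.9 is a one-line adjustment (a sketch; the 4.3 factor is the tabulated value just cited for code letter G and AQL = 2.5%).

```python
def exact_aoql(table_factor, n, lot_size):
    """Exact AOQL (in percent) from a MIL-STD-105E AOQL table factor."""
    return table_factor * (1 - n / lot_size)

print(exact_aoql(4.3, 32, 200))   # 3.612 percent
```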
In this chapter, we discussed some of the plans contained in the MIL-STD-105E.
Remember, this standard contains the same set of plans as the ANSI/ASQ Z1.4,
which is the current version of the standard. There are several other standard tables
that give ready-to-use sampling plans, one of which is the Dodge-Romig inspection
tables. These tables provide inspection plans indexed by LTPD and AOQL. The reader
is referred to Duncan (1974) or Grant and Leavenworth (1996) for details.
7.7.7.1 What Is a Good AQL? Most companies would settle for an AQL value for a
product characteristic based on what has worked for them, both functionally and eco-
nomically, for that product characteristic. Smaller AQL values will result in large sam-
ple sizes and, hence, in increased cost of inspection. Larger AQL values might result
[Table 7.9 Factors for Calculating AOQL Values for Sampling Plans from MIL-STD-105E—an Example (MIL-STD-105E, Table V-A), for single sampling, normal inspection, appears here. It tabulates AOQL factors by sample size code letter and AQL. Note: for the exact AOQL, the tabulated values must be multiplied by (1 − sample size/lot or batch size).]
in lenient plans that could allow more defectives to pass through inspection, causing
losses in the assembly or dissatisfaction to a final customer. The best AQL value is the
trade-off between the cost of inspection and the cost of not inspecting enough, and
mathematical calculations can be made to determine this optimal value. However, in
practice, we may start using a value, such as 1.0%, and then adjust it up or down based
on how the part or assembly performs at the place it is used, with the best value being
selected based on what works satisfactorily.
Different AQL values can be used for different products, or characteristics of the
same product, depending on the criticality of the characteristic. More critical charac-
teristics or products should be inspected with smaller AQL values.
7.7.7.3 A Common Misconception about Sampling Plans The AQL is used as the index for the sampling plans in the MIL-STD-105E. Suppose that a sampling plan chosen based on an AQL of 1.5% is used at an incoming inspection station. This does not mean that all the lots accepted at this inspection station will have 1.5% defectives in them. It only means that lots with less than 1.5% defectives will be readily accepted at this inspection station, and lots with more than 1.5% defectives will not be readily accepted. The average quality of the lots accepted by a plan with AQL = 1.5% can be expected to be better than 1.5% (defectives < 1.5%), especially if switching rules are
applied. The switching rules put psychological pressure on the vendor to supply good
quality. If a rectifying inspection is used, the average outgoing quality will certainly
be smaller than the AQL.
If the inspection results are fed back to the supplier, and the supplier then implements corrective action to repair the process, the sampling plans will also serve the same purpose as control charts.
Some people believe that sampling plans, because they are used after the product
is produced, have no place in modern quality control systems, which should aim for
the continuous reduction of product variability and zero, or near-zero, defective rates.
No one can argue against such goals, or against the use of control charts to achieve
those ends, but we must recognize that there will always be some suppliers that are
unable to prove the quality of their products with evidence from control charts. Until
all vendors can prove the quality of their supplies using control charts and capability
measurements, customers may have to depend on sampling plans to verify quality at
receiving inspection. It is then necessary to understand how the sampling plans work
and how to use them correctly.
7.7.7.5 Variable Sampling Plans The discussion in this chapter has been limited to
attribute sampling plans only. Although they may not be the most efficient plans in
terms of the number of units to be inspected for a given lot, they are simple to use.
When the cost of inspection is high because of the destructive testing of sample units
or an elaborate, time-consuming procedure for inspection, efficient sampling plans
may be required. In such situations, variable sampling plans (sampling plans using
measurements)—which are known to be efficient—may be appropriate. In variable
sampling plans, some characteristic, such as length, strength, or percentage carbon,
will be measured from the sample units. The average and range (or standard devia-
tion) of the measurements will be calculated, and a measure that includes both the
average and range (or standard deviation) will be computed to reflect the quality
of the sample. This computed quality measure will be compared with critical val-
ues for these measures available in standard tables, which have been calculated to
provide the α and β protection levels. One such set of standard tables is available
in MIL-STD-414 (ANSI/ASQ Z1.9-2003 (R2013)). Use of the variable plans is
a bit involved, however, and they may require specially trained personnel to imple-
ment them. The reader is referred to the above military standard or books that dis-
cuss these plans, such as Wadsworth, Stephens, and Godfrey (1986) and Grant and
Leavenworth (1996).
7.8 Exercise
7.2 Draw the OC curve of an SSP with n = 30 and c = 2. Choose p = 0.01, 0.02,
0.05, 0.08, 0.1, 0.15, 0.2, 0.25, and 0.3. (Use the Poisson table or binomial
nomograph.)
7.3 Draw the OC curve of an SSP with n = 60 and c = 1. Choose p = 0.01,
0.02, 0.05, 0.08, 0.1, 0.15, and 0.2. (Use the Poisson table or binomial
nomograph.)
7.4 An SSP is being used at an inspection station with n = 90 and c = 3.
a. If the AQL = 0.03, what is the value of the producer’s risk α?
b. If the LTPD = 0.08, what is the value of the consumer’s risk β?
7.5 An SSP has n = 40 and c = 2. If α = 0.05 and β = 0.05, what are the values
of the AQL and LTPD? Use the binomial nomograph.
7.6 Select an SSP for AQL = 0.015, α = 0.01, LTPD = 0.1, and β = 0.02. Use the
binomial nomograph.
7.7 Select an SSP for AQL = 0.02, α = 0.01, LTPD = 0.2, and β = 0.02. Use the
binomial nomograph.
7.8 Prepare the OC curve of a DSP with n1 = 50, c1 = 1, r1 = 3, n2 = 50, and
c2 = 3. Choose p = 0.01, 0.03, 0.05, 0.07, and 0.1.
7.9 Prepare the OC curve of a DSP with n1 = 20, c1 = 0, r1 = 3, n2 = 20, and
c2 = 3. Choose p = 0.01, 0.05, 0.1, and 0.15.
7.10 Draw the ASN curve of the DSP in Exercise 7.8. Choose an equivalent SSP,
and compare the ASN of both the DSP and the SSP.
7.11 Draw the ASN curve of the DSP in Exercise 7.9. Choose an equivalent SSP,
and compare the ASN of both the DSP and the SSP.
7.12 Select SSPs for normal, reduced, and tightened inspection from MIL-
STD-105E for the following data: Lot size = 200, and AQL = 1.5%.
7.13 Select SSPs for normal, reduced, and tightened inspection from MIL-
STD-105E for the following data: Lot size = 2000, and AQL = 2.5%.
7.14 Select DSPs for normal, reduced, and tightened inspection from MIL-
STD-105E for the following data: Lot size = 200, and AQL = 1.5%.
7.15 Select DSPs for normal, reduced, and tightened inspection from MIL-
STD-105E for the following data: Lot size = 2000, and AQL = 2.5%.
7.16 Calculate the AOQ for the normal inspection plan from Exercise 7.12 at
p = 0.02.
7.17 Calculate the AOQ for the normal inspection plan from Exercise 7.13 at
p = 0.02.
References
ANSI/ASQ Z1.4-2003 (R2013): Sampling Procedures and Tables for Inspection by Attributes. Milwaukee, WI: ASQ.
ANSI/ASQ Z1.9-2003 (R2013): Sampling Procedures and Tables for Inspection by Variables. Milwaukee, WI: ASQ.
ASQ, Customer-Supplier Division. 2004. The Supplier Management Handbook. 6th ed.
Milwaukee, WI: American Society for Quality Control—Quality Press.
Bowker, A. H., and G. J. Lieberman. 1972. Engineering Statistics. 2nd ed. Englewood Cliffs,
NJ: Prentice Hall.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: MIT—Center for Advanced
Engineering Study.
Duncan, A. J. 1974. Quality Control and Industrial Statistics. 4th ed. Homewood, IL: Irwin.
Donovan, J. B., and F. P. Maresca. 1999. “Supplier Relations.” In Juran’s Quality Handbook.
5th ed. Co-edited by J. M. Juran and A. B. Godfrey. New York: McGraw-Hill.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th ed. New York:
McGraw-Hill.
Juran, J. M., and F. M. Gryna. 1993. Quality Planning and Analysis. 3rd ed. New York:
McGraw-Hill.
Larson, H. R. 1966. “A Nomograph of Cumulative Binomial Distribution.” Industrial Quality
Control 23 (6): 270–278.
Montgomery, D. C. 2013. Introduction to Statistical Quality Control. 7th ed. New York: John
Wiley.
Schilling, E. G. 1982. Acceptance Sampling in Quality Control. New York: Marcel Dekker.
U.S. Department of Defense. 1989. Military Standard, Sampling Procedures and Tables for
Inspection by Attributes (MIL-STD-105E). Washington, DC.
Wadsworth, H. M., K. S. Stephens, and A. B. Godfrey. 1986. Modern Methods for Quality
Control and Improvement. New York: John Wiley & Sons.
8
Continuous Improvement of Quality
This chapter is about the problem-solving methodology and the tools used for analyzing problems, discovering their root causes, and then devising solutions to eliminate those problems. Eliminating the problems leads to improvement in the quality of the processes and of the products they produce. Discovering problems, or improvement opportunities, in processes and making process improvements to achieve better product quality and customer satisfaction is a never-ending, continuous activity.
As has already been pointed out in previous chapters, improving product and service
quality on a continuous basis is the key to improving customer satisfaction, increasing
productivity, and remaining competitive in the marketplace. Opportunities always
exist in productive enterprises for making improvements to the quality of products
and services by using the resources already available. We have come across many peo-
ple in the corporate world who tend to believe that problems relating to quality can
be resolved only by investing more capital to buy newer equipment, with all possible
automation and the latest bells and whistles. Dr. Deming used to decry this tendency
toward launching into extensive capital investments in the name of quality improve-
ment. He cited several examples in his writings where improvements in quality and
productivity were accomplished using the same machinery and resources that had
been condemned earlier as being incapable of meeting quality needs. We have seen
this happen in our own experiences as well.
Poor quality, waste, and customer dissatisfaction all result largely from a lack of
understanding of how processes behave and a lack of knowledge regarding how pro-
cess variables and their interactions contribute to the quality of the final product or
service. If proper tools are employed to understand the interrelationships among the
variables underlying a process—and if appropriate levels for these variables are chosen
to provide the desired levels of the product characteristics—then quality and produc-
tivity can often be improved without much additional investment in new machinery.
It does not, however, mean that new machinery will never be needed, just that the
capability of existing machinery is not often fully exploited to achieve needed quality
improvements.
In this chapter, we will discuss the tools that are used in discovering opportunities
for improvement, making the improvements, and maintaining the improved posi-
tions. Some of the tools are simple, some involve a certain level of analysis, and some
require a bit of advanced mathematics. We will discuss here some of the commonly
used tools that are valuable in terms of the impact they make on the improvement
process. Before discussing the improvement tools, however, we will discuss the gen-
eral problem-solving methodology, or the framework for the process of completing
improvement projects.
An opportunity for an improvement project exists wherever there is a problem. A
problem exists wherever there are undesirable symptoms, such as an unusual number of customer complaints, excessive warranty charges, excessive internal failures, a large number of accidents, or excessive worker absenteeism. Wherever the opportunity for a
project is identified, a project should be completed by following a systematic process.
The sequence of steps needed for solving problems successfully has been described
under different names by different authors. We will discuss below two such descrip-
tions: one by Dr. Deming, and the other by Dr. Juran. The DMAIC process adopted by Six Sigma practitioners is also a problem-solving approach; it is discussed in Chapter 9 along with the Six Sigma quality management system.
The PDCA cycle recommended by Dr. Deming has four steps—plan, do, check, and
act—to be performed in that sequence. The PDCA cycle is a road map for making
continuous improvements. According to Dr. Deming (1986), this sequence of steps
was originally recommended by his mentor, Dr. Walter A. Shewhart. When he was
teaching quality methods in Japan, Dr. Deming used to refer to this cycle as the
“Shewhart Cycle.” The Japanese started referring to the cycle as the “Deming Cycle,”
and that name stuck. The method is now known as the Deming Cycle among modern-
day quality professionals. The activities needed in each step are described below:
1. Plan: Document and analyze the current process, gather relevant data to understand the causes and their effects, and propose theories on the root causes. Plan for testing the theories.
2. Do: Test the theories using experiments on a limited scale in order to understand the relationships among the important variables in the process.
3. Check: Check or study the data from the experiment to see if a good understanding of the process and its variables has been obtained. Check if the theories, and the solutions proposed from them, will produce the desired results, which is the removal of the undesirable symptoms to the desired extent. If not, modify the theories and solutions. This step, according to Dr. Deming, produces the knowledge of the process, which is a prerequisite for making any improvement.
4. Act: Implement the modified solutions by specifying the changes, documenting the revised instructions, and standardizing the new process.
The PDCA steps are presented in Figure 8.1, which shows the fourth step leading
into the first step. This emphasizes that the PDCA cycle is an iterative process, with
the end of the first iteration being the beginning of the next, and the iterations con-
tinuing forever. Such continuous iteration of the improvement cycle is necessary for
improving quality, increasing productivity, reducing waste, and enhancing customer
satisfaction, all leading to overall excellence in business performance.
The PDCA cycle describes, in simple terms, the process of solving a problem. It
makes the important point that problem solving is not a one-shot attempt. Although
the procedure does not describe the individual steps of problem solving in any great
detail, this method was the forerunner of many problem-solving sequences created
later by many others. This was also the process that the Japanese used with tremen-
dous success (they referred to it by the name “policy deployment”) to solve many
problems, from product rejection on a production line to product design for sat-
isfying a customer need to strategic plans to transform organizations into quality
organizations.
One important point Dr. Deming makes in this context is that “any step in
the Shewhart (Deming) Cycle may need guidance of statistical methodology for
economy, speed, and protection from faulty conclusions from failure to test and
measure the effects of interactions” (Deming 1986, 89). He believed that learning
and using statistical methods is critical to understanding the variations in popula-
tions, and in drawing proper conclusions about populations from sample observa-
tions in the presence of this variation. He was an engineer turned statistician and
emphasized the need for statistical ability in people engaged in problem solving in the quality field.
[Figure 8.1 appears here: the PDCA cycle drawn as a circle, 1. Plan, 2. Do, 3. Check, and 4. Act, with the Act step leading back into Plan.]
6. Institute controls to hold the gains: Create new work instructions, and train people in the new methods. Make changes irreversible, and create control points downstream to check if the new method is being properly used.
Several variations of the problem-solving sequence can be found in the literature under different names (e.g., Crosby's, Taguchi's, DMAIC, etc.). All, however, stress the point that problem solving, or project completion, must be done by a cross-
functional team in a systematic, organized manner in order to reach a certain level
of success. When not done in an organized manner—when people follow the “fire-
ready-aim” (phrase borrowed from Carl Saunders of Caterpillar University) sequence
in making changes—they end up solving the wrong problems or achieving inadequate
results, or simply fail to make any impact on the problem. A quality improvement
team should adopt one such problem-solving methodology and train all team mem-
bers in its use.
The problem-solving methodology can be summarized based on a study of the
sequences suggested by several authors in the following generic steps. The tools used
in each of the steps are listed under each step. Several of these tools have already been
covered in earlier chapters. Those that have not will be discussed in the remaining
part of this chapter. Figure 8.2 captures the essence of the continuous improvement
process.
[Figure 8.2 appears here: a flowchart of the continuous improvement process. Projects are identified (using Pareto analysis, quality costs, and benchmarking), and a project statement with goals and objectives is prepared and approved; the process is then experimented on, analyzed, and understood (using flow charts, C&E diagrams, DOE, control charts, histograms, scatter plots, and regression); if the goals are not met, the cycle repeats; when they are met, controls are established (e.g., sample reviews and control charts), and the team moves on to the next project (Project 2, Project 3, ...).]
the new method is supposed to work. This would also give an opportunity for team
members to see if the solution they generated works as intended. More often than
not, solutions do not produce all the intended results during the first implementation.
Modifications will then have to be made to the solutions to obtain the desired results.
Validation involves making a full-day or full-shift run using production workers at
production capacity and under production conditions. The capability of the processes
to hold the parameters at the specified levels, and to produce the product characteris-
tics within specifications, must be established.
The tools used in this step include:
Engineering drawings, process charts, work instructions—to specify the new solution
Capability studies—to verify if the process is capable of meeting customer’s
specifications
As noted above, several tools are needed during the problem-solving process. Those
that have not been covered elsewhere in this book are discussed below.
8.3.1 Cause-and-Effect Diagram
The cause-and-effect (C&E) diagram is the first level of dissecting the process to
discover the root causes. This is also known as “Ishikawa’s fishbone diagram,” as it is
named after the Japanese professor Kaoru Ishikawa (1915–1989), who first used it to
investigate the causes of quality problems. This is a method that helps a team to think
together on paper.
The method consists in systematically identifying all the sources that might con-
tribute to the undesirable symptom(s) under investigation. This is generally done in
a brainstorming session among a team of people who are knowledgeable about the
[Figure 8.3 appears here: the general format of a C&E (fishbone) diagram, with the undesirable symptom at the head and the major cause categories as the major stems.]
problem. The C&E diagram not only helps in the investigation of the causes, but also
serves as a means of recording the problem-solving process, including the brainstorm-
ing sessions. To follow a uniform format, it is suggested (Ishikawa 1985) that the
causes be investigated and recorded under the major categories shown as major stems,
as shown in the diagram in Figure 8.3.
The C&E diagram is a communication tool, and to avoid confusion, a uniform
format should be used in making the diagram. There are situations in which this for-
mat cannot be strictly followed. For example, this format may not be suitable when
investigating a problem related to a service. Then, the format would have to be suitably
modified to fit the problem situation without drastically changing the main structure.
Case Study 8.1 illustrates how the C&E diagram is made.
[Figure 8.4 appears here: a C&E diagram for bond failure, with causes recorded under major stems, including surface condition (highly smooth, nondegreased, or nonreactive surface; material causing inhibition of curing; irregular maintenance), measurements (wrong measurement of curing temperature, humidity, and thickness), materials (bad refrigeration and bad storage of cut strips), methods (improper baking, improper sanding), manpower (lack of training, inadequate supervision, poor attitude), and environment (high temperature, high/low humidity).]
Figure 8.4 Example of a C&E diagram for bond failure. (From Ramachandran, B., and P. Xiaolan, "A Quality Control Study of a Bond Curing Process." Unpublished project report, IME 522, IMET Department, Bradley University, Peoria, IL, 1999.)
After creating the C&E diagram, the team votes to select the most important
causes (usually three or four) that need to be investigated further with experimenta-
tion and analysis. Suppose the four top-ranking causes are to be picked. A simple vot-
ing procedure would be for each team member to pick the top four causes according
to his or her understanding of the problem situation, and rank them from one to four,
one signifying the most important. For each cause, the rank given by the members
will be totaled. The cause with the smallest total will be ranked first; the cause with
the next smallest total will be ranked second, and so on. The causes that are ranked
at the top will then be studied in further detail to discover solutions and implement
remedies.
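A small sketch of this tally (an illustration added here; the cause names and ranks are invented for the example) shows how the rank totals identify the top-ranked causes.

```python
from collections import defaultdict

# Each member ranks his or her top four causes (1 = most important).
votes = {
    "member 1": {"curing temperature": 1, "surface preparation": 2, "humidity": 3, "storage": 4},
    "member 2": {"surface preparation": 1, "curing temperature": 2, "storage": 3, "humidity": 4},
    "member 3": {"curing temperature": 1, "humidity": 2, "surface preparation": 3, "storage": 4},
}

totals = defaultdict(int)
for ranking in votes.values():
    for cause, rank in ranking.items():
        totals[cause] += rank

# The cause with the smallest total is ranked first, and so on.
for cause, total in sorted(totals.items(), key=lambda item: item[1]):
    print(total, cause)
```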
8.3.2 Brainstorming
The term “brainstorming” has already been used earlier in this chapter, and there
seemed no need to explain its meaning. However, a few remarks about brainstorming
are appropriate.
Brainstorming involves exercising the brains of the members of a team, who pos-
sess an intimate knowledge of the process under study. The objective is, with their
help, to list all of the causes that contribute to the problem on hand and find possible
ways of solving it. A certain set of conventions is usually adopted in those sessions to
maximize the information generated:
A facilitator, who is not part of the team, acts as a moderator and helps the team
keep their focus on the goals. (If an independent moderator is not available, one
of the team members who can act with independence can act as the facilitator.)
All ideas generated are recorded on a C&E diagram. No idea is rejected at the
initial stages of idea generation, because negative reaction to suggestions
might stymie creativity.
The facilitator goes around the table for ideas, giving each member an opportu-
nity to make his or her contributions. This avoids domination of the sessions
by a few aggressive individuals.
At the end of a brainstorming session, the ideas are pruned through consensus, to
exclude those that are not relevant or may not be feasible under the given con-
straints. The resulting diagram is presented to the team for voting and a certain
number (three or four) of top candidate causes are chosen for further investigation.
8.3.3 Benchmarking
Benchmarking means comparing one's own products and processes against those of the best-in-class performers and setting improvement goals based on what the best achieve. Studying how the best-in-class obtain their results prompts people to think differently and to look for ways to make improvements that they had not thought of before. Moreover, customers' expectations are driven by the best-in-class performers. Therefore, basing one's goals on the performance of the best among one's competitors amounts to proactively anticipating customer expectations and taking steps to meet them.
The principle of benchmarking, although simple, needs to have a formal structure
to obtain uniformity in approach when used by different people in different parts of
a large organization. This formal structure was provided by the pioneering efforts of
managers at Xerox (Camp 1989). The structured approach is contained in the 10-step
process, summarized below from Camp and DeToro (1999). The process of bench-
marking follows the problem-solving sequence, wherein the objective is “to find and
follow the best-in-class.” The benchmarking process is implemented by a team.
The team gathers from the benchmarking partner information on the choice of goals and how they were accomplished, the lessons learned, and the roadblocks to be avoided. The questions should be worded discreetly and should not ask for information that the partner would rather not release. Some legal and
ethical issues regarding intellectual property rights may arise while studying exter-
nal benchmarking partners. The International Benchmarking Clearing House (Camp
1989) has developed a code of conduct to follow while studying a partner for bench-
marking. The enquiries should not be too intrusive and should be sensitive to the
confidentiality needs of the partners (ask not for anything that thou shall not want to
be asked). A report is written on the findings of the study, which forms the basis for
further analysis and recommendations.
The results of the benchmarking study include the procedures followed, the data gathered, the analysis, the recommendations, and the impact on the business results. All these should be presented in a written report as well as in
an oral presentation. If the team had been keeping executives updated on the progress
of their work and received periodic feedback from them, then there will be no big
surprises at the presentation. This makes it easier for recommendations to be accepted.
The best-in-class performance of this year may not remain the best next year. Therefore, the benchmarks must be reviewed and recalibrated periodically. An annual review would be reasonable for most benchmarks and would keep the recalibration effort manageable. If measures are subject to change because of rapid changes in technology or the market, however, more frequent review of the measures may be necessary. By the same token, some measures may remain robust for a long time and may not call for frequent reviews. The review frequency has to be decided case by case. Benchmarking has the advantage, as mentioned before, of setting goals based on external input rather than being governed by internal historical performance. It also faces the criticism that it does not encourage innovation, since copying others' performance is its underlying principle.
8.3.4 Pareto Analysis
The Pareto analysis is used when there are several opportunities from which one must be chosen. For example, one project may have to be chosen as the first one from several project opportunities, or one defect category may have to be addressed first from among many defect categories. The Pareto analysis is used in these situations because it separates the vital few opportunities from the trivial many.
The method is based on a distribution proposed by the Italian researcher Vilfredo Pareto (1848–1923) to describe how a small proportion of people in the free Western societies controlled a large proportion of the total wealth. Dr. Joseph Juran saw a similar phenomenon in the quality area, where a small number of causes are responsible for a large proportion of losses. He adopted the Pareto distribution to describe the "maldistribution" of quality losses among causes, and he formalized the method to separate the "vital few" causes from the "trivial many." (He used the terms vital few and useful many when referring to cause variables, such as customers or suppliers.) Such separation helps in prioritizing improvement opportunities and in concentrating on the vital few causes rather than spreading effort among the many trivial causes. A quality engineer will find this method useful in many situations when he or she must decide which project to select or which product defect to address first.
The Pareto analysis consists of obtaining data on the frequency at which the differ-
ent causes have been occurring, in recent history, in creating the problem. The causes
are then rank-ordered based on the frequency of occurrence, with the cause having
the highest frequency being ranked first. Next, a diagram is made with percentage
frequency on the y-axis and the causes on the x-axis arranged in ascending order of
rank (descending order of frequency). Case Study 8.2 illustrates the method of making
a Pareto analysis.
The Minitab software calculates the percentage frequency of occurrence for each
of the causes and then rank orders them based on the percentage frequency. It also
computes the cumulative percentage frequency of the causes, and graphs the cumu-
lative frequency. The percentage frequency and cumulative percentage frequencies
are shown in the table below the graph. The cumulative frequency table and graph help in identifying, directly from the graph, the most important causes. For example, in the case study, "curing humidity" and "curing temperature" can be read from the graph as the two dominant causes, accounting for more than 60% of all bond failures.
The Pareto analysis thus helps in prioritizing project opportunities so that projects
can be chosen based on their order of importance.
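The computation Minitab performs here is straightforward to reproduce. The Python sketch below uses the bond-failure counts from Figure 8.5, sorts the causes by frequency, and computes the percentage and cumulative percentage columns. Only the two cause names confirmed in the text are used literally; the remaining labels are placeholders for the other categories.

```python
# Bond-failure counts from Figure 8.5. The first two cause names are confirmed
# in the text; "Cause C" through "Cause G" are placeholders for the rest.
counts = {
    "Curing humidity": 87,
    "Curing temperature": 59,
    "Cause C": 27,
    "Cause D": 21,
    "Cause E": 17,
    "Cause F": 12,
    "Cause G": 12,
    "Others": 3,
}

total = sum(counts.values())
cumulative = 0.0
print(f"{'Cause':<20}{'Count':>8}{'%':>8}{'Cum %':>8}")
for cause, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    percent = 100.0 * count / total
    cumulative += percent
    print(f"{cause:<20}{count:>8}{percent:>8.1f}{cumulative:>8.1f}")
```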
[Figure 8.5 is a Pareto chart of the bond-failure causes, with counts on the left axis and cumulative percentage on the right axis; the two leading categories are curing humidity and curing temperature. The table beneath the chart reads:]
Count           87     59     27     21     17     12     12      3
%             36.6   24.8   11.3    8.8    7.1    5.0    5.0    1.3
Cumulative %  36.6   61.3   72.7   81.5   88.7   93.7   98.7  100.0
Figure 8.5 Pareto diagram for bond failure causes. (From Ramachandran, B., and P. Xiaolan, “A Quality Control
Study of a Bond Curing Process.” Unpublished project report, IME 522, IMET Department, Bradley University, Peoria,
IL, 1999.)
8.3.5 Histogram
The histogram provides the means of observing the variability and centering of a pop-
ulation. The method of drawing the histogram and examples of its use were discussed
in Chapter 2. Two examples were also shown in Chapter 2 regarding the use of the histogram in analyzing problems to find their root causes. Many quality problems are known to result from excessive variability and/or poor centering (EV/PC) in process parameters or quality characteristics. The EV/PC in mating parts results in assemblies with inadequate or excessive clearance and causes early wear of the parts and early failure of machines. The EV/PC of in-process dimensions causes assembly difficulties in subsequent stations. The EV/PC in the weight of packaged material results in an inability to meet specifications or in giving away too much material for free. In the load vs. strength study in Chapter 3, in the context of reducing "accidents" during the useful life of a product, we saw how excess variability in strength and/or load leads to more chance for accidents. In general, EV/PC in process parameters is among the most common causes of poor quality in finished products. All these problems
simply require plotting the histogram of the variable in question to understand the
nature of the variability and the location of the process center with respect to the
spec center. This usually leads to discovering and eliminating the root causes of the
problem.
In the above example of bond failures in Case Study 8.2, it may be reasonable to
assume that the choice of specification for humidity by the process designers was cor-
rect. It should be more than mere coincidence that so many humidity readings were
out of specification when there were so many cases of bond failure. However, a caution is in order: we should not assume that specifications for process variables are always chosen correctly, especially when failures are occurring. Sometimes we may
have to question the specifications and conduct experiments to determine the correct
specifications that would give defect-free products.
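As a sketch of this kind of check, the Python fragment below builds a histogram of humidity readings with numpy and reports the fraction of observations falling outside the specification limits. The sample data and the limits of 30 and 50 are made-up values used only to illustrate the computation.

```python
import numpy as np

# Made-up humidity readings and specification limits, for illustration only.
rng = np.random.default_rng(1)
humidity = rng.normal(loc=44.0, scale=7.0, size=100)
lsl, usl = 30.0, 50.0

# Histogram counts and bin edges over the observed range.
counts, edges = np.histogram(humidity, bins=10)
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:5.1f} - {hi:5.1f} : {'*' * int(count)}")

# Fraction of readings outside the specification limits.
outside = np.mean((humidity < lsl) | (humidity > usl))
print(f"Mean = {humidity.mean():.1f}, fraction outside spec = {outside:.2f}")
```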
[The histogram is plotted against the LSL and USL, with the horizontal axis running from 20 to 60.]
Figure 8.6 Histogram of curing humidity in bond failures. (From Ramachandran, B., and P. Xiaolan, “A Quality
Control Study of a Bond Curing Process.” Unpublished project report, IME 522, IMET Department, Bradley University,
Peoria, IL, 1999.)
Before this study, the production people had not known that humidity was such an important variable or that it was so far off-target. Now that the data were available, however, it was easy for them to see the facts
of the case. It was not hard to convince the production people about the need
for controlling the humidity. Then, they were advised on how to accomplish the
control using the control charts.
8.3.6 Control Charts
Control charts were described in detail in Chapters 4 and 5 as the method for keeping
track of a process variable or a product characteristic at a consistent level. These charts
can be used not only to monitor a process for consistency but also for troubleshoot-
ing and root cause analysis in quality improvement projects. In Dr. Juran’s language,
control charts can be used not only to watch for “sporadic deviations” but also to dis-
cover causes for some “chronic deviations” in processes. Thus, they can be employed as
problem solving tools for process improvement.
When control charts are used on a process—say, to monitor a process variable—it
is customary to maintain an event log as a record of all those things happening in the
process or its environment, such as a change of tool, material, humidity, or operator.
This is done either by writing remarks on the chart, if it is maintained manually, or
by writing it in the event journal, which most computer software packages provide.
So, when the control chart shows that something has occurred making the process go
out of control, the operator, or an improvement team, tries to relate the signal on the
control chart with the happenings both in and around the process. More than likely,
this will lead to the discovery of relationships between causes and effects and to the resolution of problems. Case Study 8.3 illustrates the idea.
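For readers who want to reproduce such charts without Minitab, the sketch below computes X-bar and R-chart limits from subgroup data using the standard factors A2 and D4 for subgroups of size 5. The humidity subgroups themselves are simulated here, since the raw case-study data are not reproduced in the text.

```python
import numpy as np

# Simulated humidity subgroups (20 subgroups of size 5); the real case-study
# data are not listed in the text, so these values are for illustration only.
rng = np.random.default_rng(7)
subgroups = rng.normal(loc=43.6, scale=4.5, size=(20, 5))

xbar = subgroups.mean(axis=1)                          # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1) # subgroup ranges

xbarbar, rbar = xbar.mean(), ranges.mean()
A2, D3, D4 = 0.577, 0.0, 2.114                          # control chart factors for n = 5

print(f"X-bar chart: CL = {xbarbar:.2f}, "
      f"UCL = {xbarbar + A2 * rbar:.2f}, LCL = {xbarbar - A2 * rbar:.2f}")
print(f"R chart:     CL = {rbar:.2f}, "
      f"UCL = {D4 * rbar:.2f}, LCL = {D3 * rbar:.2f}")

# Flag subgroups whose mean falls outside the X-bar limits.
out = np.where((xbar > xbarbar + A2 * rbar) | (xbar < xbarbar - A2 * rbar))[0]
print("Out-of-control subgroups (0-indexed):", out)
```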
[Figure 8.7 shows X-bar and R control charts for the curing-chamber humidity. On the X-bar chart, UCL = 54.33, mean = 43.65, and LCL = 32.98, with several subgroup means flagged out of control; on the R chart, UCL = 26.86, R-bar = 10.43, and LCL = 0, with some ranges also out of control.]
Figure 8.7 Control charts for humidity of the curing chamber in the bond-failure problem. (From Ramachandran,
B., and P. Xiaolan, “A Quality Control Study of a Bond Curing Process.” Unpublished project report, IME 522, IMET
Department, Bradley University, Peoria, IL, 1999.)
The study of what happened in and around the curing chamber when the data were collected revealed the following information.
The controller of the humidity generator that was supposed to maintain the
humidity at the set point in the chamber was acting erratically. It was supposed
to be supplied with dry gas that picked up the right amount of humidity while
exiting from the generator. The gas that was supplied, however, was not dry.
Indeed, it was very wet, causing swings in the behavior of the humidity genera-
tor. Also, the generator was supposed to be provided with distilled water per
the operating instructions, which was not being done. The water supply had
been connected to regular service water. The controller was replaced, and the
operating personnel were advised to follow the manufacturer’s instructions for
maintaining the generator. They also were advised to continue using the control
charts to watch the humidity of the chamber.
The other important variable—the temperature of the curing chamber—was
studied as well. Data on temperature had also been collected while gathering
data on humidity. When the chamber temperature was analyzed using X-bar and R-charts, the temperature was found to be in control at the prescribed level. This
helped in deciding that the lack of humidity control was possibly a reason for the
bond failures. The student team was not able to report the final results from the
client after the recommendations were implemented, but it was not difficult to
see how such monitoring and control of humidity would make a difference in the
performance of the process.
8.3.7 Scatter Plots
A scatter plot is a graph of paired observations of two variables, plotted one against the other; the pattern of the plotted points reveals whether, and in what manner, the two variables are related. Figure 8.8 shows four typical patterns.
Figure 8.8 Examples of scatter plots. (a) Positively correlated. (b) Negatively correlated. (c) Nonlinearly correlated.
(d) No correlation.
[A scatter plot from the case study shows y values between about 0.3 and 0.5 plotted against x values from 10 to 70.]
In the case study, humidity was not a variable that could be controlled, because air conditioning the entire building was not feasible. The knowledge of this relationship, however, helped in understanding what to expect on a rainy day, when the air is humid. There were other variables
in the equation, such as drying time and hot-air temperature, which could be
adjusted to compensate for the loss in dry-ability due to increase in humidity. We
needed to know the relationship among these variables so that when the humid-
ity increased we would know how much adjustment should be made in the other
variables in order to obtain consistent dry-ability.
In Case Study 8.5, all we could see from the scatter plot was that X and Y have a relationship. We could possibly say that an increase in X causes a decrease in Y. If
we want a quantitative relationship to predict values of Y corresponding to values of
X, then we need to use regression analysis, which provides quantitative measures to
express the relationship between variables.
8.3.8 Regression Analysis
8.3.8.1 Simple Linear Regression In simple linear regression, we try to establish the rela-
tionship between the response Y and an independent variable X by fitting a straight line
to define that relationship. We hypothesize that the relationship is defined by the line:
y = α + βx
where α and β are the parameters of the line. Then, we collect a certain number, say n, of observations on the pair (x, y) and fit a line to the data. That is, we estimate the coefficients a and b of the line:
y = a + bx
such that the line passes as close to “all” the observed (x, y) values as possible. This is
done by choosing the values of a and b such that the sum of the squared deviations
of the observed values of y from the values of y predicted by the line, at the different values of x, is minimized. Figure 8.10 shows an example of a line fitted to (xi, yi), i = 1, …, 5.
[Figure 8.10 shows five data points (x1, y1) through (x5, y5) and a fitted line ŷ = a + bx; the fitted values ŷi are read from the line at each xi.]
For any value xi, the line provides an estimate for yi as ŷi = a + bxi. The actual observation yi corresponding to xi may not be the same as this estimate, and the difference between yi and ŷi, or ei = yi − ŷi, is called the residual at xi. The sum of the squares of these residuals,

SS(residual) = Σ(yi − ŷi)²,

the sum being taken over i = 1 to n, represents how much the fitted line is off from the observed values. If the fitted line passed through all the observed (xi, yi) values, this quantity would be equal to zero. Fitting the line to minimize the SS(residual) is one way of making the line pass as close to "all" the observed values (xi, yi) as possible. Such a line, ŷ = a + bx, drawn to minimize the sum of squared deviations, is called the least squares line for the given data and is an "estimate" of the hypothesized line y = α + βx, where a and b are estimates for α and β, respectively.
The SS(residual) can be written as SS(residual) = Σ(yi − a − bxi)². Taking the partial derivatives of the SS(residual) with respect to a and b, equating them to zero, and then solving the resulting set of equations for a and b, the values of a and b that minimize SS(residual) can be found. It can be shown (see Walpole et al. 2002) that the values of a and b that minimize SS(residual) are given by:

b = [nΣxiyi − (Σxi)(Σyi)] / [nΣxi² − (Σxi)²],   a = ȳ − b x̄
These are called the coefficients of regression. Thus, we can find a line that provides a "good fit" to a set of data by calculating the values of these regression coefficients that define the least squares line.
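A minimal sketch of these formulas in Python follows; it computes b and a directly from the sums, using a small made-up data set (any (x, y) pairs could be substituted).

```python
# Least-squares estimates of the slope b and intercept a from (x, y) data.
# The data points below are made up purely to demonstrate the formulas.
xs = [10.0, 20.0, 30.0, 40.0, 50.0]
ys = [0.48, 0.45, 0.41, 0.40, 0.35]

n = len(xs)
sum_x, sum_y = sum(xs), sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = sum_y / n - b * (sum_x / n)          # a = y-bar - b * x-bar

print(f"fitted line: y = {a:.4f} + {b:.5f} x")
```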
For any given value xi, the value yi will not be the same each time it is observed,
because yi is an observation of a random variable. The value yi, we assume, comes from
a normal distribution with mean = E(Y|xi) and variance = σ², where E(Y|xi) is the average of yi at the given xi and σ² is the variance, which is assumed to be constant for
all values of xi. So, the hypothesis we made above is equivalent to saying that the value
of yi corresponding to an xi comes from a normal distribution, the mean of which,
E(Y|xi), is located on a straight line such that:
E(Y|xi) = α + βxi
[A figure illustrates this assumption: at each xi, the observations of Y come from a normal distribution whose mean E(Y|xi) lies on the straight line α + βxi.]
8.3.8.2 Model Adequacy Model adequacy is measured using a quantity called the coefficient of determination, which is denoted by R². The quantity R² represents the proportion of total variability in the observations of Y that is explained by the regression line.
The total variability in the observations is given by

SS(total) = Σ(yi − ȳ)²

the variability explained by the regression line is given by

SS(regression) = Σ(ŷi − ȳ)²

and the variability left unexplained by the regression is

SS(residual) = Σ(yi − ŷi)²

with all sums taken over i = 1 to n. These quantities are related by the identity

Σ(yi − ȳ)² = Σ(yi − ŷi)² + Σ(ŷi − ȳ)²

that is, SS(total) = SS(residual) + SS(regression). The coefficient of determination is then computed as

R² = SS(regression)/SS(total) = 1 − SS(residual)/SS(total)
If the value of R² for a model is large, close to 1.0, the regression line is able to explain a major portion, almost all, of the variability in the values of y, so the fitted line is a good representation of the relationship between X and Y. If the value of R² is small, it may mean that the straight-line model does not fully represent the relationship between the variables: maybe the relationship is not linear, or maybe there are other independent variables at play. Figure 8.12a shows examples of data sets and corresponding fitted lines, which have varying values of R².
If the linear model does not adequately represent the relationship, other curvilinear models may have to be explored. Or, we may have to conclude that the values of Y are dependent on more than the single independent variable X. We then will seek the other variables that may help in explaining the behavior of Y. Thus, R² plays an important role in the effort to find the relationship between two variables X and Y.
[Figure 8.12a shows three example data sets with fitted lines: one where R² is approximately 1.00, one where R² is less than 1.00, and one where R² is much less than 1.00.]
Figure 8.12 (a) Data sets and fitted lines with different R² values. (Continued)
8.3.8.3 Test of Significance The coefficients a and b, as obtained from the formulas above, are the intercept and slope, respectively, of the fitted straight line representing the relationship between X and Y. A question must be answered now: does a straight line really explain the relationship, as opposed to there being no such relationship? This question can be rephrased: Is β significantly different from zero? (see Figure 8.12b).
The question has to be answered using a test of significance with the following
hypotheses:
H 0: β = 0
H 1: β ≠ 0
[Figure 8.12b contrasts two scatter plots: one in which β > 0 and a linear relationship exists, and one in which β = 0 and no linear relationship exists.]
An estimate of σ² is needed for the test; it is given by:

S² = Σ(yi − ŷi)²/(n − 2) = Σ(yi − a − bxi)²/(n − 2) = [Σyi² − aΣyi − bΣxiyi]/(n − 2)
The equivalence of the numerators in the above expressions for S2 can be proved
using algebra. The denominator in the above expressions, called the degrees of freedom
(df), is obtained as the sample size n minus the number of parameters estimated from
the sample data. For the simple regression, two parameters, α and β, are estimated
from the sample data and, therefore, df = (n − 2). The term S² is called the mean square error.
Using the estimate for σ², an estimate of the standard error (s.e.) of b can be shown to be:

s.e.(b) = S / √[Σ(xi − x̄)²]

and the statistic b/s.e.(b) follows a t-distribution with (n − 2) degrees of freedom (see Mendenhall and Sincich 1988).
Using the above statistic as the test statistic, the critical region is chosen as defined
below to reject the null hypothesis H0: β = 0 against the alternate hypothesis H1: β ≠ 0.
Critical Region: If the absolute value of the observed value of the test statistic
|tobs| > tα/2,n–2 , then reject H 0. Otherwise, do not reject H 0.
If H0 is rejected, it means β ≠ 0 and there is a straight-line relationship between
X and Y. If not, there is no such relationship. The following example shows how the
various quantities are calculated from data and how a conclusion about the relationship between the two variables is reached.
Example 8.1
A study of a drying oven was made to find the effect of atmospheric humidity on the drying capability of the oven, as represented by the dry-ability index. Data collected on dry-ability (Y) and humidity (X) are shown in the table below, which
includes some calculations to facilitate computation of the regression coefficients.
Find out if a linear relationship exists between humidity and dry-ability.
From the data in the table, n = 20, Σxi = 785, Σyi = 7.59, Σxiyi = 284.46, Σxi² = 36,167, and Σyi² = 2.9801.
Solution
b = [20(284.46) − (785)(7.59)] / [20(36,167) − (785)²] = −268.95/107,115 = −0.0025

a = 7.59/20 − (−0.0025)(785/20) = 0.478

s² = SS(residual)/(n − 2) = [Σyi² − aΣyi − bΣxiyi]/(n − 2)
   = [2.9801 − (0.478)(7.59) − (−0.0025)(284.46)]/18 = 0.063/18 = 0.0035

s = 0.059

R² = 1 − SS(residual)/SS(total)

SS(total) = Σyi² − (Σyi)²/n = 2.9801 − (7.59)²/20 = 0.0997

SS(residual) = 0.063

R² = 1 − 0.063/0.0997 = 0.368

s.e.(b) = s/√[Σxi² − (Σxi)²/n] = 0.059/√[36,167 − (785)²/20] = 0.059/73.18 = 0.00081

H0: β = 0
H1: β ≠ 0

Test statistic: b/s.e.(b), which follows the t distribution with n − 2 = 18 degrees of freedom.

The observed value of the test statistic is tobs = −0.0025/0.00081 = −3.09. Since |tobs| = 3.09 is greater than t0.025,18 = 2.101, H0 is rejected, and we conclude that β is significantly different from zero.
Figure 8.13 (a) Critical region (CR) for the test of significance of β. (b) Minitab output from regression.
The Minitab output also reports the R² and adjusted R² values, which are computed as:

R² = 1 − SS(residual)/SS(total)

Adjusted R² = 1 − [SS(residual)/SS(total)] × [(n − 1)/(n − p)]

where p is the number of parameters estimated, which equals the number of independent variables in the regression plus one (for the intercept). For a simple linear regression, p = 2. Therefore, for the above example,

Adjusted R² = 1 − (0.068/0.0997) × (19/18) = 0.28
[A regression plot shows the fitted line for dry-ability (Y, about 0.3 to 0.5) against humidity (X, 20 to 70).]
The regression output also gives the P value corresponding to the observed t-values.
In this case, since the P-value corresponding to X is smaller than 0.05, the chosen α,
we reject H0: β = 0 and conclude that X and Y have a significant linear relationship.
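The hand calculations of Example 8.1 can be checked with a few lines of Python working directly from the sums given in the example; small differences from the hand-computed values (for example, in R² and the t statistic) are due to the rounding of a and b in the hand calculation.

```python
from math import sqrt

# Summary statistics given in Example 8.1.
n = 20
sum_x, sum_y = 785.0, 7.59
sum_xy, sum_x2, sum_y2 = 284.46, 36167.0, 2.9801

# Least-squares coefficients.
sxx = sum_x2 - sum_x**2 / n
sxy = sum_xy - sum_x * sum_y / n
b = sxy / sxx
a = sum_y / n - b * (sum_x / n)

# Sums of squares, R-squared, and the t statistic for H0: beta = 0.
ss_total = sum_y2 - sum_y**2 / n
ss_residual = ss_total - b * sxy
s = sqrt(ss_residual / (n - 2))
r_squared = 1 - ss_residual / ss_total
se_b = s / sqrt(sxx)
t_obs = b / se_b

print(f"b = {b:.5f}, a = {a:.4f}")
print(f"R^2 = {r_squared:.3f}, s = {s:.4f}")
print(f"t = {t_obs:.2f}  (compare with t_0.025,18 = 2.101)")
```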
8.3.8.4 Multiple Linear Regression When there is reason to believe that a dependent variable Y is influenced by several independent variables, say, X1, X2, and X3, then we use multiple linear regression. The regression model will be
y = α + β1x1 + β2x2 + β3x3 + ε
The relationship will be estimated by
y = a + b1x1 + b2x2 + b3x3
Formulas for a, b1, b2, and b3 can be developed the same way as for the simple regression
to minimize the sum of squared deviations. The coefficient of determination R², which
represents the proportion of the total variation in Y explained by the regression, is used for
determining how adequately a linear model fits the data. The mathematics of deriving the
formulas and the arithmetic for calculating the coefficients become complex; however, the
availability of computer programs makes the work easier. We will use an example to show
how the multiple linear regression is used to find the relationship of a response to several
independent variables. We will use the Minitab software to do the calculations.
Example 8.2
A study of a drying oven was made to find the effect of the variables atmospheric humid-
ity, hot-air temperature, and conveyor speed on drying capability of the oven as repre-
sented by the dry-ability index. The data gathered on the variables and the dry-ability
index are given in Table 8.3. Determine if there is a linear relationship between dry-
ability and the three variables. Use α = 0.05. Knowledge of the relationship will help in
adjusting the parameters of the oven to obtain optimal drying performance.
Solution
Figure 8.15 shows the output from Minitab of the regression performed with the
three independent variables. The computer output shows the estimates of the coef-
ficients a, b1, b2, and b3 along with the standard error in these estimates. For each
variable, the output shows the value of the observed t statistic along with the P value
for the observed value of t. If the P value for any variable is less than α = 0.05, then
that variable has significant impact on the response.
Each of the t values has (n − p) degrees of freedom (df), where p is the number
of parameters estimated, which is equal to the number of independent variables plus
one. For the example, df for each of the observed t values = [20 – (3 + 1)] = 16. The
critical value of t is t 0.025,16 = 2.12. If the |tobs| > 2.12 for any factor, reject H 0: βi = 0
for that factor. In this example, all three factors are significant, which means they
all have a significant impact on the dry-ability of the oven.
This information can be used to adjust the oven parameters to obtain the desired oven performance.
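Table 8.3 is not reproduced here, so the sketch below fits the three-variable model on a small synthetic data set using only numpy's least-squares solver; the numbers are invented and serve only to show the mechanics of estimating a, b1, b2, and b3 and computing R².

```python
import numpy as np

# Synthetic observations (invented for illustration): humidity x1,
# hot-air temperature x2, conveyor speed x3, and dry-ability y.
x1 = np.array([20, 30, 40, 50, 60, 25, 35, 45, 55, 65], dtype=float)
x2 = np.array([300, 310, 305, 320, 315, 300, 325, 310, 330, 320], dtype=float)
x3 = np.array([5.0, 5.5, 6.0, 5.0, 6.5, 5.5, 6.0, 5.0, 6.5, 6.0])
y  = np.array([0.48, 0.46, 0.42, 0.43, 0.38, 0.47, 0.45, 0.41, 0.40, 0.36])

# Design matrix with a leading column of ones for the intercept a.
X = np.column_stack([np.ones_like(x1), x1, x2, x3])
coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
a, b1, b2, b3 = coeffs

# Coefficient of determination from the fitted values.
y_hat = X @ coeffs
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"a={a:.4f}, b1={b1:.5f}, b2={b2:.5f}, b3={b3:.4f}, R^2={1 - ss_res/ss_tot:.3f}")
```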
When a scatter plot suggests that the relationship between Y and X is curvilinear rather than linear, a polynomial model can be fitted, for example:
y = α + β1x + β2x²
When there are several independent variables, it is advisable to make scatter plots
of the response variable with the independent variables individually and get a pre-
liminary understanding of the nature of the relationships. This will help in deciding
what relationships should be tried. When there are, for example, two independent
variables, models with an interaction term of the form
y = α + β1x1 + β2x2 + β3x1x2
can be tried. The following example shows the use of nonlinear regression to investi-
gate the relationship of variables.
Example 8.3
For the oven study in Example 8.2, try to fit a model that includes one interaction term.
Solution
We propose that there is interaction between air temperature and humidity based
on the belief that the effect of changing air temperature and humidity simultane-
ously is much more profound on dry-ability than the sum of the individual effects.
We propose the model
y = α + β1x1 + β2x2 + β3x3 + β4x1x2
The results of the nonlinear regression made using Minitab are reproduced in
Figure 8.16. We see that the interaction term is also significant. We also see that the value of R² has increased compared to that in the previous example, showing that inclusion of the interaction term has improved the fit of the model to the data.
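In code, an interaction term is added simply by appending a column equal to the product x1·x2 to the design matrix, and comparing R² with and without that column shows whether the interaction improves the fit. The sketch below reuses the same invented data as the previous sketch, not the book's Table 8.3.

```python
import numpy as np

# Invented data (as in the previous sketch): humidity x1, air temperature x2,
# conveyor speed x3, and dry-ability y.
x1 = np.array([20, 30, 40, 50, 60, 25, 35, 45, 55, 65], dtype=float)
x2 = np.array([300, 310, 305, 320, 315, 300, 325, 310, 330, 320], dtype=float)
x3 = np.array([5.0, 5.5, 6.0, 5.0, 6.5, 5.5, 6.0, 5.0, 6.5, 6.0])
y  = np.array([0.48, 0.46, 0.42, 0.43, 0.38, 0.47, 0.45, 0.41, 0.40, 0.36])

def fit_r2(X, y):
    """Least-squares fit; returns the coefficients and R-squared."""
    coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return coeffs, 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

ones = np.ones_like(x1)
_, r2_plain = fit_r2(np.column_stack([ones, x1, x2, x3]), y)
coeffs_int, r2_int = fit_r2(np.column_stack([ones, x1, x2, x3, x1 * x2]), y)
print(f"R^2 without interaction = {r2_plain:.3f}, with interaction = {r2_int:.3f}")
print(f"interaction coefficient b4 = {coeffs_int[4]:.6f}")
```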
Regression analysis is a very powerful tool for understanding the relationships among
variables in a process, which can be put to use for great advantage in improving pro-
cess and product quality.
8.3.9 Correlation Analysis
Correlation analysis is another method to study the linear relationship between random
variables. The correlation coefficient of two random variables X and Y is defined as
ρXY = cov(X, Y)/√[V(X)V(Y)] ≡ E[(X − μX)(Y − μY)]/√[V(X)V(Y)] ≡ σXY/(σXσY)
The empirical analog of this theoretical quantity, obtained from n sample observations of (x, y), is given by the Pearson product-moment coefficient of correlation, defined as
rXY = Σ(xi − x̄)(yi − ȳ) / √[Σ(xi − x̄)² · Σ(yi − ȳ)²]

where the sums are taken over the n sample observations, i = 1 to n.
It can be shown that ρ and r lie in the interval [−1, 1] (see Hines and Montgomery 1990).
When the value of r is near –1, we say the two variables have a strong negative cor-
relation, which means that when the value of one variable increases, the value of the
other decreases linearly, and vice versa. When the value of r is near +1, we say the
two variables have a strong positive correlation, which means that increasing values of
one variable produce increasing values of the other in a linear fashion and vice versa.
When the value of r is near zero, we say the variables have no correlation, which
means they have no linear relationship (see Figure 8.17).
[Figure 8.17 shows three example scatter plots: (a) negatively correlated, (b) no correlation, and (c) positively correlated.]
8.3.9.1 Significance in Correlation For normally distributed variables, Table 8.4 gives
the 95% critical values of r for selected sample sizes. If the absolute value of the cal-
culated r from sample observations exceeds the quantity in the table for a given sam-
ple size, then we conclude there is a significant correlation between the variables.
Otherwise, we conclude there is no correlation.
The formula for the correlation coefficient can be rewritten in a form that is con-
venient for computation. The numerator and denominator of the expression can be
shown to equal the following two expressions, which are computationally simpler.
For the numerator,

Σ(xi − x̄)(yi − ȳ) = Σxiyi − (Σxi)(Σyi)/n

and for the denominator,

√[Σ(xi − x̄)² · Σ(yi − ȳ)²] = √{[Σxi² − (Σxi)²/n] [Σyi² − (Σyi)²/n]}

with all sums taken over i = 1 to n.
Example 8.4
Calculate the correlation coefficient between dry-ability and humidity from the
data given in Table 8.3, and check if the correlation is significant.
Solution
Table 8.5 shows the calculation of sums and sums of squares of the variables that
facilitate computation of the correlation coefficient.
To calculate rX,Y:

Numerator = 284.46 − (785)(7.59)/20 = −13.4475

Denominator = √[36,167 − (785)²/20] × √[2.9801 − (7.59)²/20] = (73.183)(0.3157)

rX,Y = −13.4475/[(73.183)(0.3157)] = −0.582
Referring to Table 8.4, we see that the critical value for r when n = 20 is 0.42.
Because the absolute value of the observed value of r is greater than r-critical, we
conclude there is a significant (negative) correlation between X1 and Y at α = 0.05. In
physical terms, the dry-ability of the oven does increase with a decrease in humidity
in the atmosphere.
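The sketch below repeats this computation in Python from the sums given for the example; the significance check against the critical value of 0.42 for n = 20 (from Table 8.4) is included.

```python
from math import sqrt

# Sums from the dry-ability (Y) versus humidity (X) data, n = 20.
n = 20
sum_x, sum_y = 785.0, 7.59
sum_xy, sum_x2, sum_y2 = 284.46, 36167.0, 2.9801

numerator = sum_xy - sum_x * sum_y / n
denominator = sqrt(sum_x2 - sum_x**2 / n) * sqrt(sum_y2 - sum_y**2 / n)
r = numerator / denominator

r_critical = 0.42          # 95% critical value for n = 20, from Table 8.4
print(f"r = {r:.3f}; significant correlation: {abs(r) > r_critical}")
```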
The results from Minitab on the correlation coefficient are shown in Figure 8.18 and are seen to agree with the calculated figures. Using the P-value would lead to the same conclusion as that arrived at using Table 8.4.
8.4 Lean Manufacturing
The term "lean manufacturing" refers to the production system created by the Toyota Motor Corporation to deliver products of the right quality, in the right quantity, and at the right price, to meet the needs of the customer. Taiichi Ohno (1912–1990), the Toyota engineer and creative genius who became a vice president of manufacturing at Toyota, is mainly credited with creating the system, which is also known as the Toyota Production System. He received assistance from Dr. Shigeo Shingo (1909–1990), an industrial engineer and author of the Single Minute Exchange of Die (SMED) procedure, who helped in strengthening parts of the system. Eiji Toyoda (1913–2013), the man who transformed the Toyota Motor Company his family had founded into a global powerhouse, was searching for a model production system to adopt in order to improve the Toyota production facilities he had been given charge of managing. At that time, in the late 1940s, the Toyota Motor Company was producing cars at the rate of about 200 per year, while the Ford Motor Co. at the Rouge Plant near Detroit was making 7000 cars per day. The system he witnessed at the Rouge plant during a visit in 1950 offered a model he could adopt, but he saw several drawbacks in it. He saw those drawbacks as opportunities for improvement and, with help from Ohno, built a new system that sought to eliminate all possible waste in production and maximize the value-adding functions.
Seven types of waste (muda) were identified for elimination: defective units produced, units overproduced, products or in-process material sitting in inventory, raw material or product transported unnecessarily over long distances, wasted motions of workers, waiting time of workers, and unnecessary operations performed.
What resulted from continuously improving the system over two decades in the
1950s and 1960s is now referred to as the Toyota Production System (TPS) or lean
manufacturing system. A brief discussion of the TPS is undertaken here to show its
relationship to the traditional quality engineering methodology. Many of its components are indeed continuous improvement approaches to improve quality, eliminate waste, streamline operations, and reduce overall costs, thus enabling quality products to be delivered to the customer at the lowest price.
The major components of the TPS and their interrelations are depicted in Figure 8.19 and are further elaborated below. The figure indicates only the dominant relationships among the component functions; there exist many subtle relationships, which the reader will come to recognize, that cannot be expressed in any broad-brush portrayal of the system. Any reader with a background in industrial engineering would notice that many of the components of the TPS are tools industrial engineers have traditionally used as part of methods engineering to make workplaces more efficient and more productive. However, the idea of just-in-time production was not part of the traditional IE discipline, and the tools of methods engineering have been refined, simplified, and packaged into one coherent system in the TPS, which has proven its capability for achieving quality, productivity, and cost reduction.
Incidentally, methods engineering, methods analysis, or motion analysis, as it is variously called, is the process of analyzing how work is accomplished in a workplace by breaking down the work into fundamental motions, or therbligs as they are called, and creating an optimal sequence of motions to accomplish the same work without wasteful motions. The same approach can be used to identify wasteful movements in the larger context of a factory, to improve the flow of material and products among workplaces. The concept of methods analysis was introduced by Frank and Lillian Gilbreth in the 1910s and is well documented in books such as Niebel's Methods, Standards, & Work Design (Freivalds and Niebel 2009) and Motion and Time Study: Design and Measurement of Work (Barnes 1980).
The objective of presenting the Lean System here is only to provide an overview of
the system and is not meant to provide exhaustive guidelines on implementing a lean
system in a production shop. Books by Pascal (2002), MacInnes (2002), and Black and
Hunter (2003) can be considered for this latter purpose.
The Lean System can be considered to have three major functional modules:
Quality control
Quantity control
Waste and cost control
There are a few minor modules which either contribute to the three major ones
listed above or directly to the final objective as shown in Figure 8.19. Each of these
modules is discussed below in some detail.
8.4.1 Quality Control
The objective of this module is to deliver to the customer a product (or service) that is
designed and produced such that it will meet their needs and delight them in its use.
If the product produced does not meet the needs, whether due to poor design or to poor production methods, it will have to be discarded and is therefore a waste that causes losses to the producer. The methods described in other chapters of this book are the appropriate methodologies for achieving this goal: finding what the customer wants, translating those needs into product features, choosing the targets and limits of variability for the key product features, designing a process to accomplish the product characteristics within the chosen limits of variability, making the product using control procedures to ensure the product is produced according to the design, packaging and delivering the product to the customer, and helping the customer in its proper installation and use. In addition, the TPS places special emphasis on preventing, at all costs, a defective unit from being passed on to the customer.
Discovering the causes for the defectives or process errors (we had called these
assignable causes or special causes in Chapter 4) and implementing error-prevention
methods is the process employed religiously for this purpose. Mistake-proofing or fool-proofing methods, called poka-yoke, that will not allow the errors to creep back into the process are employed to prevent defectives from ever being produced. Further, the TPS uses 100% final inspection of the product so that not a single defective unit will be passed on to the customer. This is true whether the customer is the end user outside the producing organization or an internal customer who uses the "product" in the next stage of processing or assembly. Defect-free production is further supported by a practice, indeed a culture, in which any member of the production team will stop a production process if he or she sees the process producing defective units. This
practice, called jidoka, ensures delivery of 100% quality products to the customer and
is considered (Wilson 2010) one of the pillars of the Toyota Production System.
8.4.2 Quantity Control
Quantity control refers to producing only the amount of product the customer has
requested, not more, not less. This type of “lean” production is in contrast to pro-
ducing large quantities of products and stocking them in inventory in anticipation
of customer demand. Such storage of products in inventory results in losses which
include cost of space and shelves, cost of handling in stocking and retrieving, cost of
security and insurance, and the interest cost of the capital invested in the products
being stored. Such "mass" production had been in vogue primarily because of the large costs involved in "setting up" production lines. When an assembly line has to be set up, for example, for changing over from one model of a product to another, or when the tooling in a machine has to be changed from one set-up to another to make a different part number, it involves labor time and, more importantly, lost production time on the machinery. The traditional approach had been to produce products in batches for each set-up so as to distribute the set-up cost over the larger number of units in a batch. The size of the batch is decided to minimize the total cost of the
inventory operation and is called the economic lot size. Such production in large batches
resulted in large inventories because products that are not immediately used by cus-
tomers need to be stocked. Recognizing that long set-up times were the reason for large-batch production, the TPS developed methods to reduce set-up times, again using the steps of methods engineering. They succeeded in reducing set-up times to a fraction of what they had been before improvement, which enabled production in small batches, or in only as much quantity as needed by a customer's order. Even a batch size as small as one became possible and economical. The Single Minute Exchange of Die, a procedure perfected by Shigeo Shingo to minimize set-up times in changing dies (one of the major set-up operations in the production of automobile body parts), was a major driver in making one-piece production (per set-up) economical (Black and Hunter 2003, Chapter 6).
Next, the TPS addressed the problem of overproduction resulting from lack of cor-
relation between the number produced and the number demanded by the customer.
The conventional wisdom was to anticipate or project the demand based on historical
demand and “push” the projected quantities through the production system. This led
to large inventories waiting for the customer to make the request for the product. The
authors of the TPS invented the "pull" method, where the product is made only in the quantity pulled by the customer, leaving nothing (or almost nothing) for storage. The pull system makes use of the kanban, which literally means a visual card.
A kanban is a card that is attached to a part with the part’s ID number written on
it. It may contain additional information such as name of supplier if it is a supplied
part, location of stores if it is an item from storage, where it is needed, and so on.
Every part that is made or received from a supplier gets a kanban and resides in a small
storage location. When a part is withdrawn by a customer, the kanban is detached
from the part and stacked in a kanban retrieval pouch located in a very visible spot on
the assembly floor. Accumulation of the kanbans in this pouch indicates to processes
upstream that more of this part is to be produced or procured in the quantity indicated
by the number of kanbans that have accumulated. The kanbans could be transmitted
to the upstream production processes or procurement sources at set time intervals or
at the accumulation of a predetermined number of units of product. Mechanisms are
put in place to announce the accumulation of predetermined number of kanbans or
time for review of their numbers so that production or procurement process can be
triggered. When a kanban is sent to the producer, it is a Production Kanban; when it is detached at the time of withdrawal by the customer, it is a Withdrawal Kanban. For the kanban system to work well, some rules are followed:
• No new part will be made or procured unless there is a (withdrawal) kanban for it.
• No part can be withdrawn unless a (withdrawal) kanban is created.
• The customer withdraws only the quantity needed.
• The producer makes only the quantity indicated by the withdrawal kanbans.
• Never ship defective items.
• Level the production plan when the kanban system creates a widely varying production plan.
Leveling implies that the demand placed on the system does not vary widely from
day to day both in numbers of products as well in product mix (model variation).
Leveling is accomplished by taking the demand for the product over a longer period
of time, say, a week, or a month, and dividing evenly among the days. This, though a
bit of a compromise on the purely pull system, is considered necessary for the smooth
functioning of the system.
The fundamental objective of the system is to make sure that no production is made
beyond what the customer needs. The kanban system, also referred to (by Americans) as the take-one-make-one system, helps in accomplishing this.
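The toy Python sketch below illustrates the pull logic described above: withdrawal kanbans accumulate as the customer pulls parts, and a production order for exactly that quantity is triggered whenever the accumulated kanbans reach a review threshold. The part number, threshold, and withdrawal pattern are all invented for illustration, and the final line shows the simple leveling calculation of dividing a period's demand evenly among its days.

```python
# Toy illustration of kanban accumulation triggering replenishment.
# Part ID, review threshold, and daily withdrawals are invented values.
PART_ID = "P-1234"
REVIEW_THRESHOLD = 5          # produce when this many withdrawal kanbans accumulate

withdrawals_per_day = [2, 1, 3, 0, 4, 2, 1]   # units pulled by the customer each day
accumulated_kanbans = 0

for day, pulled in enumerate(withdrawals_per_day, start=1):
    accumulated_kanbans += pulled             # one withdrawal kanban per unit pulled
    if accumulated_kanbans >= REVIEW_THRESHOLD:
        # Production (or procurement) is triggered for exactly the accumulated quantity.
        print(f"Day {day}: produce {accumulated_kanbans} units of {PART_ID}")
        accumulated_kanbans = 0
    else:
        print(f"Day {day}: {accumulated_kanbans} kanban(s) accumulated, no production yet")

# Leveling: spread the period's total demand evenly over the production days.
weekly_demand = sum(withdrawals_per_day)
print(f"Leveled plan: {weekly_demand / len(withdrawals_per_day):.1f} units per day")
```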
Total productive maintenance (TPM), another element supporting the lean system, aims at keeping equipment available and capable. Companies implementing it have reported reductions in equipment breakdown from 5000 per month to 50 per month, and the profits realized through use of TPM have been recorded as 10 times the cost of implementing the program (Suzuki 1992). The website TPMonline.com reports several success stories. For
example, the National Steel and Shipbuilding Co. (NASSCO), San Diego, CA, a full
service shipyard that designs and builds ships, implemented TPM in their production
shops in 1997 and increased equipment uptime in some instances from 74% to 99%.
Stabilizing a process means making the process behave consistently. Standardization refers to providing standard operating procedures for each of the operations, preferably using clear graphics, so that the process can be repeated the same way each time it is performed. Standardization of operations helps in making a process stable. Standardization, the TPS cautions, is never meant to leave the process in the same condition forever (Pascal 2002); it is the step needed to make the process ready for further improvement. Standardization also helps in passing on the current expertise to the next generation of workers. Before a process is standardized, it must be improved in several dimensions. Of course, it should be improved to make defect-free products, made only in the quantities the customer needs, and it must be subjected to value-stream analysis to eliminate non-value-adding functions before it is standardized. There are also a few other tools used to enhance the stability of a process; visual management using the "5Ss" and proper layout of machinery to minimize unwanted motions are the important ones.
8.4.6 Visual Management
Visual management involves making the visual appearances of the workplace clear,
symmetric, and uncluttered, so that any deviations will be detected easily just by look-
ing at the process. The 5Ss recommended are: Sort, Set-in-order, Shine, Standardize,
and Sustain.
Sorting simply means: Keep in the workplace only the items that are needed by
removing from the workplace those that are not needed in the daily operation. The
unwanted items, such as redundant parts, tools, jigs, old machinery, tables, chairs,
shelves, obsolete computer screens, and so on, accumulate over time and take up shelf
and floor space, and present safety hazards. Periodic housecleaning will get rid of the clutter and make the workplace clean. How often the clearing should be done, by whom, and how to decide which items are unwanted can be determined by a workplace team depending on what suits their workplace; once determined, these rules should be documented and strictly followed.
Set-in-order means: Organize the machines, tools, staging areas, and parts-shelves
in a layout that will minimize the movement of the job and the worker. This can be
done by making a diagram of the movement of incoming material going through
the process operations to the final finished part location. Such a diagram, known as
the flow diagram in industrial engineering, will enable identification of zig-zagging
and backtracking within the workplace. By rearranging the machines with respect
to each other and other features of the workplace, these unwanted motions can be
avoided leading to a more rational layout. The flow diagram for a workplace can be
made on a grid paper where the items of machinery are shown located at their current
locations with the distances between them drawn to scale. The path followed by the
material as it goes through the conversion process is drawn on the paper with arrow-
heads indicating direction of travel. Such a diagram of a workplace, if it has not been
improved already, will usually present a picture of unorganized loops of travel paths
(thus earning the name "spaghetti diagram"). The total distance of travel is used as a metric of how well or how poorly the workplace is organized. A critical eye will see
in this diagram ways to untangle the loops and streamline the paths by rearranging
the machinery so as to minimize the total distance travelled by the product. These dia-
grams and their analysis are best made by a team of people working in the workplace.
At the end of such an analysis, the team will come up with the machinery laid out in such a way that the product moves smoothly toward its final end point. The U-shaped cell, one of the configurations of machinery layout, has been
found to be the most suitable layout for workplaces, not only to facilitate smooth flow
of the product, but also to minimize the number of workers needed to provide loading,
unloading, and other services the machinery would need. A caution may be in order:
The U-shaped cells are capital intensive because they require dedicated machinery and
should not be employed unless warranted by production volume.
The same approach used to smooth out and minimize the travel path of parts and
subassemblies within a workplace can be used to analyze flow of material between
workplaces in the larger context of the factory. A team of people working in the
various workplaces will be able to identify unnecessary travel and create a layout to
minimize travel distances, thus yielding an efficient layout for the entire production
facility. U-shaped cells producing component parts, linked in the sequence needed in
the assembly of the final product are known to produce the best layout for the total
production facility (see Black and Hunter 2003, Chapter 5). In situations where many
products are flowing through a plant, creating complex travel sequences, special com-
puter tools and simulation programs available to an industrial engineer may have to
be used.
Shine means: To clean the place of the dirt, oil, and grit that make the workplace look dull. Cleaning will also disclose, in a timely manner, any leak that may spring in a machine, leading to fixes that may save the machine from an unscheduled breakdown. A clean workplace is also safer, more environment-friendly, and better for the health of workers. Proper lighting can also be included under this function to enhance the visibility and safety of the workplace. Cleaning must be done on a regularly scheduled time interval, with the responsibility for cleaning clearly assigned among the team members working in the workplace.
Standardize means: To make the improved workplace the standard practice by documenting the new layout, schedules, and responsibilities for sorting and cleaning. The entire team working in the workplace should be made aware that the documented process is the new standard.
Sustain means: To make sure that the sorted, cleaned, and organized workplace remains in the improved condition in the future. This can be accomplished by implementing rules, followed by everyone, for the cleaning schedule and for discarding unwanted material, so that the team's objective of a clean and organized workplace remains accomplished.
Leveling was described earlier as the way of obtaining an even schedule for the production system from day to day, both in product numbers and in product mix. Balancing is the process of distributing the work within a workplace evenly among the operations to be performed there, in such a way that there are no bottleneck operations where the job waits for a long time. The cycle time is the time needed to make one piece of a product in a workplace. If all the work needed to make a product is performed at one location, doing one operation after another, the cycle time will equal the sum of all the operation times. If the operations are distributed over several locations, the cycle time will equal the total operation time at the location where that total is largest. (Of course, when the operations are distributed over multiple locations, more workers will be needed.) The more evenly the operations are divided among the locations, the shorter the cycle time.
Figure 8.20 shows three configurations of a process to fabricate a welded part. The total time for welding the fabrication is 44 minutes, which, if the work is performed all at one place as in Configuration 1, results in a cycle time of 44 minutes. If the work is divided between two locations using two operators as shown in Configuration 2, the cycle time will be 34 minutes. If, however, the work is more evenly distributed between the two locations, as in Configuration 3 (tack and part of finish weld, 20 minutes; finish weld, 24 minutes), a finished fabrication will be made every 24 minutes, giving a cycle time of 24 minutes. This is the basic principle of assembly line balancing, covered in the industrial engineering literature.
In the real world, the problems won’t be this simple, and there will be many restric-
tions imposed by precedence requirements (which operation needs to be done first
and which next, etc.). There will also be opportunities for performing operations in
parallel. This problem of assembly line balancing has been studied in great detail, and
tools, algorithms, and computer programs have been developed to handle complex
situations. One has to refer to a book in production and operations management such
as Buffa (1983) or Evans (1993) for more detailed information on this topic.
So, by properly configuring the operations, by splitting or combining and distribut-
ing the total work needed to accomplish a job among work locations, the cycle time
to produce a product can be adjusted and optimized. The TPS uses a metric called takt time, which represents the maximum time within which a unit of product must be made at a workplace in order to meet a customer's quantity demand. It is obtained by dividing the total number of minutes available in a day by the number of units required to be made in the day:

Takt time = (production time available per day) / (number of units required per day)
Thus, the takt time is the upper bound for the cycle time. Therefore, the design of
the workplace should be such as to yield a cycle time which is less than or equal to the
takt time. These are the basic principles involved in balancing the assembly line and
meeting the demand from the customer.
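A small Python sketch of these calculations follows, using the welding example's operation times together with an assumed daily demand and available time (the demand of 18 units and the 480 available minutes are invented for illustration).

```python
# Operation times (minutes) for the welded part, as in Figure 8.20.
operations = {"tack + part of finish weld": 20, "finish weld": 24}   # total work = 44 min

# Cycle time for each configuration: the largest total time at any one location.
config_1 = [sum(operations.values())]          # all work at one location
config_3 = list(operations.values())           # work split over two locations
cycle_1 = max(config_1)                        # 44 min
cycle_3 = max(config_3)                        # 24 min

# Takt time = available production time per day / units required per day.
# The available time and demand below are assumed values for illustration.
available_minutes_per_day = 480
units_required_per_day = 18
takt = available_minutes_per_day / units_required_per_day   # about 26.7 min/unit

for name, cycle in [("Configuration 1", cycle_1), ("Configuration 3", cycle_3)]:
    ok = "meets" if cycle <= takt else "does not meet"
    print(f"{name}: cycle time = {cycle} min, takt = {takt:.1f} min -> {ok} demand")
```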
For its successful implementation, the Lean Production System depends heavily on a culture in which trained workers willingly participate in continuous improvement of every aspect of the system. Providing continuous training to team members in problem-solving approaches (e.g., the PDCA method) and in problem-solving tools (value stream mapping, making improved layouts, tools for quality control, planning and line-balancing tools), coupled with project management and presentation skills, improves the capability of team members for productive participation in improvement activities. Further, the team should be encouraged by recognizing their achievements through publicity in company magazines and on notice boards, and by presenting awards through competitions conducted within and between plants.
Only when the workforce participates in lean implementation with knowledge and passion will there be suggestions for improving cycle times, reducing waste, improving visual management, and strengthening the many other functions that contribute to the success of the lean system (Pascal 2002). The jidoka concept, where a team member will stop
a production process when a defective is seen being produced, very much depends on
the knowledge, involvement, and willingness of a team member to take the action
to ensure no defective is passed on to the next stage in production. Further, willing
participation by employees and, hence, success of the lean manufacturing system can
be realized only when the management practices openness in communication, trust,
fairness, and a sincere interest in the welfare of the workforce.
We have attempted to describe the Lean Manufacturing or the Toyota Production
System in simple language so as to communicate the basic principles involved. We
see from the above that the TPS is a collection of modules, which, though somewhat
interdependent, can be progressively implemented one at a time. Although a thorough
reengineering of a process starting from re-layout of the workplaces in cells that are
linked following the assembly sequence will yield the best results through the use of the "lean" concept, the individual modules of quality control, quantity control, and
cost control can be implemented in an existing facility in an incremental fashion.
Such incremental implementation will yield improved results in terms of better qual-
ity and cost reduction and would help the system evolve into a lean or near-lean system
progressively.
8.5 Exercise
8.5.1 Practice Problems
8.1 Prepare a Pareto analysis, first with paper and pencil and then using com-
puter software, for the data given below. The data relate to the frequency of
occurrence of short-picks in a book warehouse because of various reasons. A
short-pick occurs when not enough books are on the shelves to fill an order.
The data were collected over a period of three months to investigate the rea-
sons for short-picks and to reduce/eliminate the short-picks in the warehouse.
REASON FOR SHORT-PICK    FREQUENCY
No display                   7
Clean up                    87
In error                     9
No inventory               120
Key-in error                 4
Out of stock                23
Up-price                    42
Failed up-price box         41
Unexplained                  3
Source: Data from Kamienski, K., and A. Murphy, “Line Shortage Analysis of a Publisher’s Warehouse,” unpublished proj-
ect report, IME 522, IMET Department, Bradley University, Peoria, IL, 2000.
8.2 Draw a Pareto diagram for the data given below on the number of warranty
claims made in a month for a car model at a dealership.
8.3 The data in the table below come from a process that produces large iron cast-
ings for automobile engines. The castings showed occasional porosity in a
particular location, which is a defect according to customer’s specifications.
The table shows data on several process variables, along with the measure of porosity, for 22 castings. The porosity measure was assigned by an inspector on a scale of 0 to 10, where 0 means no porosity and 10 means the casting has to be scrapped. A porosity number of 1 to 3 requires no salvage work, and a number from 4 to 9 requires that the casting be salvaged by cleaning the holes and filling them with weld.
CASTING NO. DIP BAUME DIP VISCOSITY POUR TEMP TAP TEMP POUR TIME POROSITY
1 43.0 540 2417 2424 29 3
2 43.0 540 2444 2457 28 0
3 43.0 530 2441 2471 27 0
4 43.0 530 2447 2464 29 0
5 43.0 530 2449 2459 29 0
6 43.0 530 2450 2463 28 0
7 43.0 540 2449 2460 28 0
8 43.0 540 2430 2440 27 1
9 43.0 510 2449 2460 28 0
10 43.0 510 2422 2448 30 3
11 42.5 490 2449 2456 28 0
12 42.5 490 2428 2461 28 3
13 43.0 580 2440 2465 27 0
14 43.0 580 2435 2440 29 0
15 43.5 530 2434 2441 28 0
16 43.0 530 2447 2465 28 1
17 43.0 510 2449 2458 27 0
18 43.0 510 2447 2459 28 1
19 43.0 510 2428 2433 28 2
20 43.0 520 2428 2454 29 0
21 43.0 540 2426 2465 29 2
22 44.5 540 2433 2457 27 2
b. Using further regression analysis, including the use of multiple and cur-
vilinear models, find out if there is any relationship between the porosity
measure and the process variables of tap temperature, dip viscosity, and
pour time.
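(For part (b), the multiple regression can be fitted with any statistical package; curvilinear terms can be added by appending squared columns of the predictors. The sketch below, which uses only numpy's least-squares routine, is one illustrative way to obtain the coefficients and an R-squared value; it is not a prescribed solution, and the numbers are simply re-keyed from the casting table above.)

import numpy as np

# Columns: tap temperature, dip viscosity, pour time, porosity (from the casting table)
data = np.array([
    [2424, 540, 29, 3], [2457, 540, 28, 0], [2471, 530, 27, 0], [2464, 530, 29, 0],
    [2459, 530, 29, 0], [2463, 530, 28, 0], [2460, 540, 28, 0], [2440, 540, 27, 1],
    [2460, 510, 28, 0], [2448, 510, 30, 3], [2456, 490, 28, 0], [2461, 490, 28, 3],
    [2465, 580, 27, 0], [2440, 580, 29, 0], [2441, 530, 28, 0], [2465, 530, 28, 1],
    [2458, 510, 27, 0], [2459, 510, 28, 1], [2433, 510, 28, 2], [2454, 520, 29, 0],
    [2465, 540, 29, 2], [2457, 540, 27, 2],
], dtype=float)

X = np.column_stack([np.ones(len(data)), data[:, :3]])  # intercept + three predictors
y = data[:, 3]

coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, b(tap temp), b(viscosity), b(pour time):", np.round(coef, 4))

# R-squared as a rough indication of how much porosity variation the model explains
ss_res = np.sum((y - X @ coef) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R-squared:", round(1 - ss_res / ss_tot, 3))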
8.5 Calculate the correlation coefficient between pour temperature and tap tem-
perature in the data of Problem 8.3, and compare the results with those from
computer calculation.
8.6 Calculate the correlation coefficient between dip viscosity and dip baume in
the data of Problem 8.3, and compare the results with those from computer
calculation.
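(For Problems 8.5 and 8.6, the hand calculation can be checked in a few lines of code. The sketch below computes the correlation coefficient between pour temperature and tap temperature with numpy; the same call applied to the dip viscosity and dip baume columns answers Problem 8.6. It is offered only as an illustration of the computer calculation the problems ask for.)

import numpy as np

# Pour temperature and tap temperature for the 22 castings in Problem 8.3
pour_temp = np.array([2417, 2444, 2441, 2447, 2449, 2450, 2449, 2430, 2449, 2422, 2449,
                      2428, 2440, 2435, 2434, 2447, 2449, 2447, 2428, 2428, 2426, 2433])
tap_temp = np.array([2424, 2457, 2471, 2464, 2459, 2463, 2460, 2440, 2460, 2448, 2456,
                     2461, 2465, 2440, 2441, 2465, 2458, 2459, 2433, 2454, 2465, 2457])

r = np.corrcoef(pour_temp, tap_temp)[0, 1]   # off-diagonal entry of the correlation matrix
print("r(pour temp, tap temp) =", round(r, 3))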
8.5.2 Term Project
There are quality problems all around us—in the cafeterias, residence halls, registra-
tion system, student retention programs, maintenance department, the laundromat, or
any production facility with which you may be familiar. All these facilities produce a
product or a service to meet certain needs of their customers, and if customers are not
satisfied by what they pay for, there is a quality problem. Identify one such problem
situation, and make a project statement using quantified measures of the symptoms to indicate the existence of the problem. (In some situations, finding out how well the customers are satisfied could itself be a project.)
Collect whatever data are necessary, and use whatever tools are necessary to analyze the data. Find the root causes of the problem, and understand why these causes exist.
Identify solution alternatives to eliminate the root causes, and make recommendations
for improving the quality of the product or service and, thus, the customer’s satis-
faction. If the improvements can be implemented, provide data on the performance
measure to indicate the improvement has been accomplished. If changes cannot be
implemented for want of time, explain, using some projections, how your solution will
solve the problem.
A written report should be submitted, which should include the project statement,
data collected, analysis, results, solutions, and projected outcomes from implementa-
tion of the solutions. The report should have no more than 15 typewritten pages, not
including tables, graphs, diagrams, and photographs.
References
Barnes, R. M. 1980. Motion and Time Study: Design and Measurement of Work. 7th ed. New York:
John Wiley.
Black, J. T., and S. L. Hunter. 2003. Lean Manufacturing Systems and Cell Design. Dearborn,
MI: Society of Manufacturing Engineers.
Buffa, E. S. 1983. Modern Production/Operations Management. 7th ed. New York: John Wiley.
Camp, R. C., and I. J. DeToro. 1999. “Benchmarking,” Section 12. In Juran’s Quality Handbook.
5th ed. Edited by J. M. Juran and A. B. Godfrey. New York: McGraw-Hill.
Camp, R. C. 1989. Benchmarking: The Search for Industry Best Practices that Lead to Superior
Performance. Milwaukee, WI: ASQC Quality Press.
Chatfield, C. 1978. Statistics for Technology. New York: John Wiley & Sons.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: MIT Center for Advanced Engineering
Study.
Evans, J. R. 1993. Production/Operations Management. 5th ed. Minneapolis/St. Paul, MN:
West Publishing.
Freivalds, A., and B. Niebel. 2009. Niebel’s Methods, Standards, & Work Design. 12th ed.
New York: McGraw-Hill.
Hines, W. W., and D. C. Montgomery. 1990. Probability and Statistics in Engineering and
Management Science. 3rd ed. New York: John Wiley & Sons.
Hogg, R. V., and J. Ledolter. 1987. Engineering Statistics. New York: Macmillan.
Ishikawa, K. 1985. What Is Quality Control?—The Japanese Way. Translated by D. J. Lu. Englewood
Cliffs, NJ: Prentice Hall.
Juran, J. M., and F. M. Gryna. 1993. Quality Planning and Analysis. 3rd ed. New York:
McGraw-Hill.
Kamienski, K., and A. Murphy. 2000. “Line Shortage Analysis of a Publisher’s Warehouse.” Unpublished project report, IME 522, IMET Department, Bradley University, Peoria, IL.
MacInnes, R. 2002. The Lean Enterprise Memory Jogger. Salem, NH: Goal/QPC.
Mendenhall, W., and T. Sincich. 1988. Statistics for the Engineering and Computer Sciences.
San Francisco, CA: Dellen Publishing Co.
Minitab. 1995. Reference Manual, Release 10 Xtra. Minitab, Inc., www.minitab.com.
Pascal, D. 2002. Lean Production Simplified. New York, NY: Productivity Press.
Ramachandran, B., and P. Xiaolan. 1999. “A Quality Control Study of a Bond Curing Process.”
Unpublished project report, IME 522, IMET Department, Bradley University, Peoria,
IL.
Suzuki, T. 1992. New Directions for TPM. Translated by John Loftus. Cambridge, MA:
Productivity Press.
Wilson, L. 2010. How to Implement Lean Manufacturing. New York: McGraw-Hill.
9
A System for Quality
That a system is needed for designing, producing, and delivering products and services to satisfy customer needs has been well established since the 1950s, and several models for organizing such a system have been proposed. In this chapter, we will explore some of the state-of-the-art models, along with some of their predecessors that laid the foundation for systems thinking in the quality field.
The modern approach to producing quality products and services to satisfy customers’
needs calls for creating a quality system wherein the responsibilities for various aspects
of meeting customer needs are identified and assigned to the various agencies in the
system. The different agencies of the system then perform their functions in a coher-
ent manner, with a view to achieving the system’s common goal of meeting customer
needs while utilizing the system’s resources efficiently. If such a system is organized
and maintained well, that system will produce quality products while using resources
efficiently. That was the premise upon which leaders in the quality field proposed and
advanced the systems approach to quality.
This chapter explores some of the models proposed for creating a quality system.
Some that were proposed in the early stages of systems-thinking may not represent
a complete model for a quality system, but they were the forerunners of the modern
systems-thinking for quality. More recent models have taken the best of the features
of the earlier models and have been formulated as more complete models, and thus
represent the state-of-the-art templates for building quality management systems. We
will review here some of the earlier models as well as the newer ones. We review six
systems, of which the first three represent the former kind and the next three repre-
sent the latter. Specifically, we will review:
Dr. Deming’s system
Dr. Juran’s system
Dr. Feigenbaum’s system
The Malcolm Baldrige National Quality Award criteria
ISO 9000:2015 standards
The Six Sigma system
A few other models have also been proposed by other authors, of which those by
Philip B. Crosby, Dr. Kaoru Ishikawa, and Dr. Genichi Taguchi have won accep-
tance among many users. We will restrict our discussions, however, to the six listed
above; the reader is referred to books such as Evans and Lindsay (2005) for details of
the other models. These models differ from one another in the emphasis they place
on the different components of a quality management system. No single model will be the perfect fit for a given organization, and a review of several models will provide a perspective on the strengths of the different models and will help in tailoring a system that best suits the needs of the organization.
Dr. W. Edwards Deming was the guru who taught the Japanese how to organize
and manage a system for quality. Born in Sioux City, Iowa, on October 14, 1900,
Dr. Deming lived most of his early life in Wyoming, where his parents moved when he
was seven years old. He earned a BS in electrical engineering from the University of
Wyoming at Laramie in 1921, and later a master’s degree in mathematics and physics
from the University of Colorado at Boulder. He also earned a PhD in physics from
Yale in 1928. He was employed after graduate school by one of the laboratories of
the Department of Agriculture in Washington, D.C., and as part of his activities
there, he organized lectures in statistics at the Graduate School of the Department
of Agriculture. The graduates from these seminars included statisticians of the U.S.
Census Bureau, who used statistical sampling surveys for the first time in determining
the U.S. unemployment rate during the Great Depression. Dr. Deming also partici-
pated in using sampling techniques to evaluate and improve the accuracy of entering
and tallying data at the Census Bureau during the 1940 Census.
Dr. Deming came in contact with Dr. Shewhart while working in Washington and
became one of the admirers of the author of the control chart methods. He organized
seminars by Dr. Shewhart at the Graduate School of the Department of Agriculture,
which offered the latter opportunities to expound the control chart method to audi-
ences outside the AT&T telephone companies where he was employed. During World
War II, Dr. Deming was called on to assist the Statistical Research Group at Columbia
University in spreading statistical methods among the manufacturers of goods and
ammunition for the war effort. He wrote the curriculum and personally taught classes
in which thousands of engineers were trained in statistical process control techniques.
Dr. Deming also spent a year studying statistical theory in London with Sir Ronald
A. Fisher, the famous statistician who invented the methods of experimental design.
After the war, in 1947, Dr. Deming went to Japan. He was invited by General MacArthur's administration, the occupying forces, to help the Japanese in their census work to evaluate the extent of rehabilitation and reconstruction work needed there. He
went to Japan again in 1950, at the invitation of the Union of Japanese Scientists and
Engineers (JUSE), to assist the organization in spreading knowledge of statistical quality
control within Japanese industry. “With his simple explanations and adequate demon-
strations, Dr. Deming’s lectures were so effective and persuasive that they left an unfor-
gettable impression upon our minds,” wrote an official of the JUSE (Gabor 1990, 80). He
taught them how to implement statistical quality control methods following the plan-do-
check-act (PDCA) cycle. He also taught them his new management philosophy, which
was contained in his 14 points. The Japanese learned the methods, adapted them to their
culture, and institutionalized the continuous improvement process, which culminated in
the enormous success of Japanese goods in world markets. In recognition of his contribu-
tion to the growth of a quality culture in their land, the Japanese instituted a prize, called
the Deming Prize, to be awarded to corporations that achieve excellence in product qual-
ity, or to individuals who make an outstanding contribution to statistical theory or its
application. In 1960, Emperor Hirohito awarded Dr. Deming the Order of the Sacred
Treasure, Second Class—the highest honor bestowed by the emperor on a non-Japanese
person. During the 1980s, Dr. Deming brought home to the American industry the les-
sons of quality he had helped the Japanese to learn, and participated in the quality revolu-
tion that was to unfold within U.S. industry. He died in 1993 at the age of 93.
The system Dr. Deming recommended is contained in the 14 points he advocated to the Japanese, which form a cogent set of guidelines for creating a management system that will enable an organization to develop, design, and produce products that satisfy
customers. Dr. Deming (1986) later offered the same set of guidelines, with some
minor modifications, in his book Out of the Crisis as the recipe for American managers
to confront the enormous competition posed by foreign manufacturers in the 1980s.
His recommendations include a vision for long-term growth of quality, productiv-
ity, and business in general, as well as recommendations on how the vision can be
accomplished through process improvements to reduce variability with the willing
participation of a well-trained workforce. The 14 points recommended by Dr. Deming
are discussed in detail below. These “points” are reproduced verbatim from his book;
the explanation he provided under each point has been summarized. Subheadings
have been added to facilitate comparison with other models. [Note: All quotes in the
discussion of Deming’s system come from the book Out of the Crisis (Deming 1986),
a veritable source of wisdom for quality engineers.]
9.2.1 Long-Term Planning
Point 1: Create Constancy of Purpose for Improvement of Product and Service This point
relates to having a strategic, long-term vision for an organization regarding growth
in quality and productivity, to become competitive, to stay in business, and to provide
jobs. Organizations should have a long-term plan for creating new products, invest-
ing in research for new technology, and investing in people through education and
training, to be able to satisfy the needs of their customers. Toward this end, they
should first find out, through customer surveys, what the customers’ needs are, and
then develop products to meet those needs. When the products and services are in the
hands of the customer, organizations should ascertain, again through inquiry, if the
products and services meet the needs, as intended. If they do not meet the needs, the
products should be redesigned to satisfy the unmet needs. This should be an ongoing
activity for productive organizations.
Dr. Deming said, “The customer is the most important part of the production line,
and providing product and service through research and innovation to satisfy them
is the best way to stay in business and to provide jobs. This should be the vision and
it should be made clear to all in the organization and to those that are related to it.”
9.2.2 Cultural Change
Point 2: Adopt the New Philosophy According to Dr. Deming, the old management philosophy practiced by U.S. industry (during the 1960s and 1970s) placed workers on jobs that they did not know how to perform, employed supervisors who knew neither the jobs they were supervising nor the skills of supervision, and employed managers who had no loyalty to the organization and were job-hopping. Those manage-
ment practices also considered a certain level of mistakes (i.e., defects) to be acceptable.
Those practices, according to Dr. Deming, would not work in the new competitive
environment that was emerging in the global market, which had competitors such as
the Japanese, who had adopted a new management philosophy for increasing quality
and making continuous improvement. The old philosophy caused too much waste,
increased the cost of production, and resulted in a noncompetitive product.
For Dr. Deming, doing everything “right the first time” should be the new cul-
ture. Whether taking down a customer order, choosing specifications for a dimension,
writing work instructions, making the product, preparing invoices, or answering a
service call—everything should be done correctly the first time.
9.2.3 Prevention Orientation
Point 3: Cease Dependence on Mass Inspection Quality cannot be achieved through
inspection. It must be built into the product through the use of the right material and
the right processes by trained operators. Dr. Deming claimed that: “Quality comes not
from inspection, but from improvement of the production process.” Furthermore, rou-
tine inspection becomes unreliable through mistakes caused by boredom and fatigue.
Use of control charts, which need small samples taken at regular intervals, will help
in achieving and maintaining statistical control of processes. In turn, this will assure
the production of products with a minimum of variation and consistent quality.
9.2.4 Quality in Procurement
Point 4: End the Practice of Awarding Business on the Basis of Price Tag Alone Dr. Deming placed great importance on buying quality material in order to produce quality
products. Buying from the lowest bidder, with no regard for quality, is detrimental to
producing quality and satisfying customers. “He that has a rule to give his business to
the lowest bidder deserves to be rooked,” he said.
Purchasing managers must be educated to understand the need for quality material
through exposure to how that material is used in production processes. They should
learn how to specify quality in purchasing contracts. They should know that, some-
times, even if the material meets the specifications that are written down, it might not
meet the needs of the production process adequately because all the requirements of
a material cannot be fully written into specifications and contracts. Suppliers should
be made to understand where and how the materials are used so that they know the
requirements for the supplied materials.
Dr. Deming emphasized the need to have a single source for each material or part,
and to develop long-term relationships with such single suppliers. The reasons he gave
for having a single supplier for each of the supplies are elaborated in Chapter 7 under the
section dealing with supplier relationships. Briefly, the reasons include: A long-term rela-
tionship with a single supplier enables research and innovation; the variability in supplies
from a single supplier will be smaller compared to multiple suppliers; each supplier will
try to improve their performance through their motivation to become the single supplier;
accounting and administrative expenses will be smaller, as will the total inventory for the
supplies. The single supplier can also participate in the design activities of the customer.
9.2.5 Continuous Improvement
Point 5: Continuously Improve the System of Production and Service Quality starts at
the design stage, with a good “understanding of the customer’s needs and of the way
he uses and misuses a product.” This understanding must be continuously updated,
and any newly discovered needs should be met through changes to the product design.
There should be continuous improvement in every activity, such as procurement,
transportation, production methods, equipment maintenance, layout of the work area,
handling, worker training, supervisor training, sales, distribution, accounting, pay-
roll, and customer service. The processes must be improved to reduce variability in key
characteristics such that their distribution becomes “so narrow that specifications are
lost beyond the horizon.” Meaning: The spread in process variability must be reduced
to such a level that the spread in the specification will appear huge in comparison. In
terms of setting goals for process capability, this goes far beyond Motorola's 6-sigma capability. (We wonder if the Motorola engineers took their cue from this statement of Deming's when setting 6-sigma as the goal for process capability.)
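The sense in which the specification can appear "huge" relative to the process spread can be put in numbers with the familiar capability index Cp = (USL - LSL)/(6*sigma). The small sketch below uses hypothetical specification limits and sigma values, chosen only for illustration, to show how shrinking process variability drives the index well past the value of 2.0 that corresponds to a 6-sigma level of capability.

# Hypothetical specification limits; the sigma values are illustrative only.
usl, lsl = 10.6, 9.4

for sigma in (0.20, 0.10, 0.05, 0.02):
    cp = (usl - lsl) / (6 * sigma)           # Cp = (USL - LSL) / (6 * sigma)
    print(f"sigma = {sigma:.2f}  ->  Cp = {cp:.1f}")

# sigma = 0.10 gives Cp = 2.0 (the 6-sigma level of capability); smaller sigma goes
# beyond it, which is the sense in which the specification is "lost beyond the horizon."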
To understand process variation and make process improvements, continuous learn-
ing of new methods is necessary. “There is no substitute for knowledge.” People must
be given the opportunity to educate themselves and to learn new skills in experimen-
tation, process control, and improvement methods to maintain processes in statistical
control.
Point 7: Adopt and Institute Leadership Managers should become leaders or coaches
who facilitate and help their teams in performing their jobs. Management by objective
(MBO), which is based solely on results or outcomes, should be replaced with leader-
ship, with a focus on quality of product and service. Managers should be able to recog-
nize opportunities for improvement and have the know-how to devise and implement
the improvements. These leaders should remove the barriers that make it impossible
for workers to do their jobs with pride. They should focus on quality rather than quan-
tity of products produced. They must know the work they supervise; otherwise, they
cannot help or train their workers. They should understand that errors, or defectives,
when they occur, are produced by the system and not by the people. They should work
to improve the system to prevent errors rather than blame the workers for them. They
should also know how to recognize when a system is stable (i.e., in-control) and when
it is not, so that they can apply remedies to prevent defectives.
The new leaders should understand the natural variation in worker performance: performance varies from worker to worker and generally follows a normal distribution, with about half the people performing below the average and half above. There will be some
outliers, or performances outside the system—that is, outside of the 3σ-limits. The
outliers above the upper limits should be examples to be emulated by others, and those
below the lower limits should be helped to improve their performance. Such an under-
standing of the statistical behavior of populations will help to eliminate practices, such
as celebrating someone as an excellent performer based on a single day’s or month’s
performance, or denigrating another as a poor performer based on a single instance.
Point 8: Drive Out Fear Fear among employees prevents them from reporting prob-
lems in product design or problems arising from process deterioration that can cause
poor quality products, which can then get shipped out to customers. Fear of losing
one’s job prevents workers from suggesting innovations or improved methods. Fear
prevents workers from acting in the best interest of the company because some short-
term, narrowly specified goals will be violated.
Some managers cause fear in their workers because they believe in managing by fear. Fear is caused by production quotas used in evaluating performance, by annual merit ratings used for determining salary increases, and by decisions made by managers through erratic judgments based on feelings rather than on real data. Unless fear is removed, workers will not come forward with suggestions for
improvement, will not accept new knowledge and new methods for quality improve-
ment, and will not stop poorly made products from being shipped to customers.
Point 9: Break Down Barriers between Staff Areas The work involved in achieving product
quality is done at many places in an organization, and by many different people. They
all have information that needs to be shared. For example, marketing people have
information on consumer preferences, which the product design people need, and the
manufacturing people generate information on process capabilities, which the design-
ers need. Dr. Deming gives an example of a service person who kept fixing the same problem in each of the new machines built by a company but never shared that information with the designers. If he had informed the designers of the problem in the machines when he first noticed it, the problem could have been fixed at the source, and all the service calls and expensive repairs needed later at the customer locations could have been avoided.
Many of the points made by Dr. Deming can be identified as the basis for sev-
eral new methodologies later developed by others for quality improvement and waste
reduction and for several provisions in the modern models of quality systems such as
the ISO 9000. Point 9, perhaps, gave the impetus for growth of concurrent engineer-
ing (CE), which has been adopted by many design organizations to create products
using cross-functional teams. The whole concept of CE is based on breaking down
barriers among functional areas and exchanging information at the early stages of
product and process design, which is the thrust of Point 9.
Point 10: Eliminate Slogans, Exhortations, and Targets for the Workforce Posters and
slogans on walls, mainly addressed to workers, do not produce any positive results. In fact,
they have negative effects, such as creating mistrust, frustration, and demoralization
among workers. These posters create the perception that the defects and errors are
caused by workers, and suggest that if only workers would pay proper attention, the
errors could be avoided. The truth, however, is that the management has not addressed
the causes of the failures and defectives, such as poor material, bad machinery, and
inadequate training. “No amount of entreating or exhortation of the workers can solve
problems of the system, which only the management can fix.”
Slogans such as “be a quality worker,” “take pride in your work,” “do it right the
first time,” and so on, may have “some fleeting temporary effect on removing some of
obvious problems in processes, but will be eventually recognized as a hoax.” The man-
agement should start working on purchasing better-quality material from fewer sup-
pliers, maintaining machinery in better working conditions, providing better training,
using statistical tools for stabilizing processes, and publicizing these activities among
workers. These will create faith in the words of management among workers, improve
their morale, and encourage their further cooperation.
Point 11(a): Eliminate Numerical Quotas for the Workforce According to Dr. Deming,
a numerical quota or work standard—that is, requiring so many pieces to be produced
per day—“is a fortress against improvement of quality and productivity.” Standards
are often set for the average worker, which means that half the workers will exceed
them and the other half will not. The first half will not make any more than this
standard (they stop work before the end of the day). The second half are demoralized,
because they cannot do any better than what they are currently doing under the given
conditions. Once the standard is set, no one makes any effort to improve upon it,
which is counter to the concept of continuous improvement.
Work standards are used by many organizations to create production schedules and
prepare budgets. According to Dr. Deming, a better way to obtain information for
budget and schedules is to collect data on the production time, find out the distribu-
tion, find the special causes for the outliers, and eliminate the causes of these outliers
by avoiding the reasons that generate the special causes. Better training of workers
would also improve performance. Such actions will improve productivity and provide
data for budgets and schedules as well. In other words, setting standards based on a process that is not stable and has not been improved by removing assignable causes is a bad idea.
Point 11(b): Eliminate Numerical Goals for People in Management Setting numerical goals for managers, such as decreasing the cost of warranty by 50% next year, with no plan as to how to accomplish them, "is just a farce." Those goals will never be accomplished. A manager
should learn the job that he is expected to supervise, and seek methods to improve the
processes by identifying and eliminating wasteful steps. Mere goal setting without the
knowledge and ability to make improvements will not help.
Point 12: Remove Barriers that Rob People of Pride of Workmanship According to
Dr. Deming, every worker wants to do a good job and be proud of it. If defectives are
produced and waste is generated, it is because the management does not provide the
opportunity for workers to do their jobs well. Workers have many obstacles to doing
their jobs with pride. Being treated as a commodity—such as being hired one week
when needed and then fired the next week when not needed—is one such obstacle.
Lack of work instructions on how to do a given job, and lack of standards for what is
acceptable work and what is not are others. Poor quality of raw material (bought at
a cheap price), poorly maintained equipment, inadequate tools, out-of-order instru-
ments, and foremen pushing to meet the daily production quota are still more causes
that “rob the hourly worker of his birthright, the right to be proud of his work, the
right to do a good job.”
When workers are denied the chance to do a good job and be proud of it, they
are no longer eager to come to work, and widespread absenteeism results. “He that
feels important to do the job will make every effort to be on the job.” According to
Dr. Deming, “Barriers against realization of pride of workmanship may in fact be one
of the most important obstacles in reduction of cost and improvement of quality in
the United States.”
Point 14: Take Action to Accomplish the Transformation Dr. Deming laid down an
action plan for initiating and accomplishing quality in an organization:
1. Management must first understand and agree to the 13 points enunciated
above.
2. Management must be prepared to change its philosophy and explain to the
rest of the organization, through seminars and other means, the need for this
change.
3. When a critical mass of people—especially in middle management—
understand and agree with the changes needed, then these changes will happen.
4. Every job in an organization can be divided into stages, or operations, and can
be analyzed using a flow diagram. Each stage has a customer and a supplier,
with the last stage having the end user as the customer. Such analysis helps
in continuously improving the methods and procedures to better satisfy the
customer. The Deming cycle of plan-do-check-act explained in Chapter 8 will
help in the continuous improvement.
5. An organization should be created to guide and monitor the continuous
improvement in the process stages using qualified and trained statisticians,
who will help in conducting experiments and in completing process improve-
ment projects.
We can see in the above discussion that Dr. Deming intended a systemic change for
an organization to become a quality organization. His 14-point recipe indeed contains
a model that, when followed, will create a quality management system to enable the
production of quality products and services.
Dr. Joseph M. Juran was another guru who enormously influenced the direction of
the quality movement, both in the United States and around the world. He was born
in Romania on December 2, 1904, and he came to the United States in 1912 when
his family migrated to Minneapolis. He graduated from the University of Minnesota
with a degree in electrical engineering in 1924. After graduation, he joined the
Western Electric Co.—the manufacturing arm of the former Bell Telephone Co.
(now known as AT&T)—in the inspection department of their Hawthorne plant.
He was promoted to manager of the department and, when he was only 24 years old,
became chief of the inspection division. He earned a JD degree in 1936 from Loyola
University Law School in Chicago, and then moved to New York to become the cor-
porate industrial engineer at the AT&T headquarters. After a tour of duty with the
Lend-Lease Administration in procuring and leasing arms, equipment, and supplies
during World War II, he started teaching industrial engineering as a professor and
department chair at New York University. After working for a consulting firm as a
quality management consultant, he then started his own consulting practice in 1949.
This practice grew into the Juran Institute, which today provides education, training,
and consulting to a wide range of industries worldwide.
Dr. Juran’s main contribution to the quality field was in the management area.
He emphasized that quality professionals should become “bilingual”—meaning they
should learn to speak the language of finance, which the executives understand; and
also the technical language that the workers and engineers understand. His adoption of the Pareto principle to differentiate between the vital few and the trivial many causes when investigating quality losses is one of his major
contributions. The differentiation between chronic and sporadic problems to facilitate
the application of appropriate methods to resolve them is another. The systematic
“breakthrough” approach to solving chronic problems that Dr. Juran postulated is yet
another important contribution to the quality field.
He did not see the value of statistics in quality improvement work as much as, for
example, Dr. Deming did. He even claimed that the use of statistics was being over-
done. In his view, it was an important element for achieving quality but should not
be treated as the “be-all and end-all.” That was a contentious statement to make even
in the early 1950s (Donaldson 2004), and it may be even more contentious today in
view of the revival of interest in statistical methods among quality professionals, as
evidenced by the widespread use of the Six Sigma methodology, which is purported
to make extensive use of statistics.
Dr. Juran has written many books and many articles, of which the Quality Control
Handbook, now published as Juran’s Quality Handbook, in its seventh edition (Defeo
2017), stands out today as the standard reference on quality topics. In the early 1950s,
invited by the JUSE, Dr. Juran went to Japan and provided training in the manage-
ment aspects of quality. In 1981, he was, like Dr. Deming, awarded the Order of
the Sacred Treasure, Second Class, in recognition of his contribution to the Japanese
quality movement. Dr. Juran died in February 2008 at the age of 104.
The system recommended by Dr. Juran is contained in the “quality trilogy,” also
called the “Juran trilogy,” which he proposed for producing quality products and sat-
isfying customers. The three components of the trilogy (Juran and Defeo 2010) are:
1. Quality planning
2. Quality control
3. Quality improvement
9.3.1 Quality Planning
Quality planning consists of the activities carried out during the product development
and design stages, as well as during process engineering, before the product is put into
production. These activities include:
Determining who the customers are for the product
Determining what their needs are
Developing product features that respond to these needs
Developing processes that are able to produce those product features
Transferring the product and process plans to the operating or production
function
The road map for quality planning, according to Dr. Juran, is first to identify the
customer. The customer is someone who is impacted by the product. A customer can
be external or internal. The external customer is someone outside the organization
who buys and uses the product, and the internal customer is someone inside the orga-
nization who further processes the product to be delivered to the next internal or
external customer. There are multiple customers for many products. For example, with
a new medicine, in addition to the ultimate patient who receives the treatment, the
doctor, the pharmacist, the hospital, and the government regulatory agencies are all
customers too. When there are numerous customers, it is necessary to classify them
into the vital few and the useful many categories so that planning resources may be
directed to meeting the needs of the most important segment of customers.
The major approaches for discovering the customer’s needs are:
1. Being the customer (i.e., putting the planner in the shoes of the customer)
2. Simulating the customer’s needs in the laboratory
3. Communicating with the customer
Communicating with the customer (i.e., market survey) is the most widely used
method. Some communications are customer initiated, such as customer complaints
and warranty claims. Translation of the customer needs, which are in the language
of the customer, into the language of the planner is done through spreadsheets,
which are akin to the matrices of the quality function deployment procedure discussed
in Chapter 3.
Product development must be done in order to meet the needs of the customers
and the needs of the suppliers, and to optimize the costs to both. The final cost to the
end user must also be competitive. Product development should take into account the
vital few (rather than the useful many) needs of the customers and consider how, and
to what extent, competitors attempt to meet those needs. The opinions and percep-
tions of the customers that guide their buying habits, the value that the product will
provide for the price that customers pay, and the failure-free operation of the product
in customer hands should all influence the determination of product features.
A monopoly of product development vested in the product development or engineering function can cause difficulties for other functions, such as manufacturing and marketing. It can also result in products that elicit complaints from end users. To
avoid this, participation from those involved with other functions must be sought by
the product developers through design reviews.
The concept of dominance is a useful tool for process planners. Manufacturing
processes can be identified as set-up dominant, time dominant, component dominant,
worker dominant, or information dominant, based on what is important to the process
so that it can function error-free. Identification of the dominant variable in the process
helps the planner in establishing suitable control for the variable. For example, for set-
up-dominant processes, in which the quality of subsequent batches depends on how
well the initial set-up is done, “first-piece inspection” and approval would serve well.
Information on the capability of various production processes in meeting tolerances
is vital for process planners. Knowledge regarding the available capabilities of produc-
tion machinery helps the planner to choose the right process, or right machinery, for
producing a given product characteristic. They are also used to specify to the produc-
tion people the level of capability that must be maintained in the process machinery.
Process control is the means of maintaining a process in its planned state. This
involves selecting the critical operations of a process, based on the potential for seri-
ous danger to human lives or the environment, or for serious waste and monetary
loss at subsequent stages of processing. For these critical operations, process control
procedures must be specified, including the variable to be checked, by whom it is to
be checked, and the qualification of the people making the checks. How critical the
incidents of deviation must be before triggering an investigation and elimination of
root causes must also be stipulated.
The final step in process planning is process validation. This is done during a
pilot run when the process is run using production equipment and production
workers, at production capacity, and its capability is evaluated under normal oper-
ating conditions. The capability must be within acceptable limits before the pro-
cess is handed over to the operating function. The process planning function ends with a transfer of knowledge on how the product should be produced.
9.3.2 Quality Control
Quality control is defined by Dr. Juran as “the regulatory process through which we
measure actual quality performance, compare it with quality goals, and act on the dif-
ference” (Juran 1988, 6.31). To exercise this control, a limited number of centers—or
control stations—must be established. These control stations are chosen using the fol-
lowing guiding principles:
1. At changes of jurisdiction to protect the recipients (e.g., between major depart-
ments or between a supplier and a customer)
2. Before embarking on an irreversible path (e.g., set-up approval before
production)
3. After creation of a critical quality characteristic
4. At dominant process variables
5. At natural “windows” for economical control (e.g., chemistry of molten iron
provides a window on strength of iron when it solidifies)
The choice of control stations is made on process flow diagrams. For each control
station, the process variable or product characteristic to be measured, instruments to
be used, interval between taking measurements, tolerances to be allowed, and deci-
sions to be taken based on deviation from tolerance, should be specified. These are all
chosen by planners and are included in the process planning documents. The desig-
nated operating personnel should take those measurements and make the decisions as
laid out.
Tools exist for analyzing the measurements made at the control stations, which
would indicate any significant deviations in the measurements from the expected goal.
The control chart is one such tool for seeing if a statistically significant deviation
exists—that is, whether the deviation results from a chance cause or a real cause.
Significant deviations are acted on for eliminating the root cause. Dr. Juran identi-
fies a difference between statistically significant and economically significant devia-
tions. According to him, not every statistically significant deviation is economically significant enough to call for immediate corrective action. When numerous deviations are
found, priorities must be established based on economic significance, and when “eco-
nomic significance of some nonconformance is at a very low level, corrective action
may not be taken for a long time” (Juran 1988).
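As a small illustration of how a control chart separates chance causes from real causes, the sketch below computes trial 3-sigma limits for a chart of subgroup averages and flags any point falling outside them. The subgroup averages are made-up numbers, and the limits are computed directly from the spread of the trial averages rather than from the usual range-based factors, so the sketch should be read only as an illustration of the idea.

import numpy as np

# Hypothetical subgroup averages recorded at a control station (made-up numbers).
xbar = np.array([10.2, 10.1, 10.3, 9.9, 10.0, 10.2, 10.4, 10.1, 9.8, 10.0,
                 10.1, 10.3, 10.0, 10.2, 10.9, 10.1])

base = xbar[:12]                              # use the first 12 subgroups for trial limits
center = base.mean()
sigma_hat = base.std(ddof=1)                  # spread of the trial subgroup averages
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for i, x in enumerate(xbar, start=1):
    if x > ucl or x < lcl:
        status = "outside the limits: look for a real (assignable) cause"
    else:
        status = "within limits (chance causes only)"
    print(f"subgroup {i:2d}: {x:5.1f}  {status}")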
One of the very significant contributions made by Dr. Juran relative to process
management is in the differentiation he made between two possible sources of prob-
lems when measurements indicate deviations from expected goals. The two sources,
according to him, are sporadic sources of deviation and chronic sources of deviation.
[Figure 9.1 about here: a chart of quality performance over time showing a sporadic deviation spiking above the original zone of quality control, the level of chronic waste (an opportunity for improvement) present from the time operations begin, and the new, lower zone of quality control reached through quality improvement, with lessons learned carried forward.]
Figure 9.1 Sporadic vs. chronic deviations in a process. (From Juran, J. M., and F. M. Gryna, Quality Planning and Analysis, 3rd ed. New York: McGraw-Hill, 1993. With permission.)
Sporadic sources cause occasional deviations and can be detected by control mecha-
nisms, leading to corrective action. Chronic sources, on the other hand, cause devia-
tions in the process that are likely to be ignored by control mechanisms and thus
remain embedded in the system—even accepted as a fact of life. Their root causes are
usually difficult to detect and to eliminate. The chronic sources of variations should
be addressed through quality improvement projects using a certain sequence of steps,
which Dr. Juran called the “breakthrough” sequence. The difference between sporadic
sources and chronic sources of variation is illustrated in Figure 9.1.
9.3.3 Quality Improvement
complaints in the recent past, and the market share lost because of unsatisfied
customers—and then presented to the upper management, along with the
estimated potential savings from the proposed project. The estimated expen-
diture should be compared with the potential savings/benefits to show the
possible return, which is usually high. Dr. Juran and Godfrey (1999) give
examples in which an investment of $15,000 for quality improvement pro-
duced an average benefit of $100,000. In another example, Russ Westcott
(2005) reported a return on investment (total benefit/total cost) of $72,000
per dollar, within three months in a public utility company. In yet another
example, Lou Anne Crawley-Stout (2016) reports an average rate of return of
$8.56 for every $1 invested in 35 public health quality improvement projects,
over a three-year period. Use of such successful examples would help in selling
the project to the upper management and winning their approval.
Implementation of the solutions may encounter resistance from the manager who
owns the process, workers who are unwilling to accept new methods, or the union
that wishes to protect workers’ jobs. This is a human relations issue, and it should be
approached with respect and sensitivity. Some of the approaches to addressing the
resistance are communicating the reasons for the changes and the effects they will
have on the parties involved; involving those parties as participants in the problem-solving pro-
cess; making adjustments to the solutions when reasonable objections are raised; and
allowing enough time for people to accept the changes. Once the solution has been
implemented, controls must be established at key locations in the process to make sure
the new methods are followed and people do not go back to the old methods either by
design or by default.
When the expected goals are reached and there is a fair degree of assurance that the
solution will continue to perform as intended, then the project can be considered as
being completed. A final report must then be written that contains the record of the
journeys made, from the symptoms to the final implementation. The data collected,
methods used, analytical results, new configuration of the process (depicted in flow
diagrams), specifications of the final solution, and the final results on process perfor-
mance should all be included in the report.
Dr. Juran’s system is more pragmatic than Dr. Deming’s, and it can be implemented
within the existing framework of a business organization. Dr. Deming’s system is, of
course, more comprehensive, calling for a complete philosophical and cultural change
in the organization and revamping the entire system—from the way in which cus-
tomer needs are ascertained to the way in which those needs are met.
Dr. Armand V. Feigenbaum was born on April 6, 1920, in New York City. He received
a bachelor’s degree from Union College, Schenectady, New York, and his MS and
PhD from the Massachusetts Institute of Technology.
Dr. Feigenbaum was the one who first proposed a formal method for studying
the costs associated with producing quality products and those arising from not
producing quality products, which we have come to know as a “quality cost study.”
He was also the originator of the concept of total quality control—the approach to
quality and profitability that has profoundly influenced management strategy in
the competition for world markets. He postulated that contributions from many
parts of an organization are needed to make a quality product that will satisfy the
customer; and that these activities must be coordinated to optimize the output of
the entire organization. This concept was embraced by the Japanese, who used it as
the basis for their company-wide quality control philosophy. This was also the genesis
for the systems approach to quality that was later adopted by quality professionals
worldwide.
Dr. Feigenbaum served as the founding chairman of the board of the International
Academy for Quality, the worldwide quality society. He served two terms as presi-
dent of the American Society for Quality. He was awarded the Edwards Medal, the
Lancaster Award, and Honorary Membership by the American Society for Quality.
Union College awarded him the Founders Medal for his distinguished career in
management and engineering. In December 1988, Dr. Feigenbaum was awarded the
Medaille G. Borel by France—the first American to be so honored—in recognition of
his international leadership in quality as well as his contributions to France. In 1993, he
was named a Fellow of the World Academy of Productivity Science, and was awarded
the Distinguished Leadership Award by the Quality and Productivity Management
Association. In 1996, he was the first recipient of the Ishikawa/Harrington Medal
for outstanding leadership in management excellence for the Asia-Pacific region.
Dr. Feigenbaum was the president and CEO of General Systems Company, a leading international firm in the design and implementation of management operating systems for major manufacturing and service companies throughout the world. He passed away in November 2014.
As mentioned, Dr. Feigenbaum was one of the early thinkers who recognized the
need for a systems approach for achieving quality in products and satisfying custom-
ers. Although Dr. Deming had been propagating the value of a systems approach
to optimize the output of any organization since his early days in Japan (Deming
1993), the first clear enunciation of the need for a systems approach to produce qual-
ity products, and how the system should be organized and managed, came from
Dr. Feigenbaum. He identified four "jobs" (Feigenbaum 1983, 64) that were necessary for assuring quality in products: new-design control, incoming-material control, product control, and special process studies.
On the need for a quality system, Dr. Feigenbaum (1961, 109) said:
Since the work is most generally part of an overall “team” effort, it must be related to that of
the other members of the team. Hence, not only does the division of labor become a consid-
eration, but also the integration of labor becomes an equally important consideration. The
whole purpose of organization is to get division of labor but with integrated effort leading to
singleness of purpose. If the individuals are working at cross-purposes or are interfering with
each other’s efforts, either the people or the system are not working as they should be. The
quality system provides the network of procedures that the different positions in a company
must follow in working closely together to get the four jobs of total quality control done.
Dr. Feigenbaum showed in a matrix (Figure 9.2) how the responsibilities for various
quality activities should be shared by different agencies in a quality system. One can
see the genesis of the basic structure for a quality system in this matrix and observe
that many of his ideas for creating and managing a quality system, some proposed as
early as 1951 (Feigenbaum 1951), are reflected in later models for quality management systems, such as the ISO 9000 standards and the Baldrige Award criteria.
The Baldrige Award was established in 1987, as the Malcolm Baldrige National Quality Award (MBNQA), during the presidency of Ronald Reagan, to recognize U.S. corporations that achieve excellence in organizational performance using modern quality and productivity improvement methods. This was a recognition that U.S. corporations would have to adapt to the new global market in order to compete successfully with competitors who were excelling in quality and productivity. The award was
named after Mr. Malcolm Baldrige, President Reagan’s secretary of commerce who
died in an accident while in office. The award was administered by the Baldrige National
Quality Program (BNQP) of the National Institute of Standards and Technology
(NIST), assisted by the American Society for Quality (ASQ ). In 2010, the name of
the program was changed to the Baldrige Performance Excellence Program (BPEP), still within NIST in the U.S. Department of Commerce.
Justifying the name change, the Baldrige Program Director said (NIST 2010):
“… In the more than two decades since the inception of the Malcolm Baldrige National
Quality Award, the field of quality has evolved from a focus on product, service and
[Figure 9.2 about here: a relationship chart, applied to product quality, listing areas of responsibility against functions such as the general manager, marketing, engineering, manufacturing engineering, shop operations, quality control, materials, and finance, with entries coded C = must contribute, M = may contribute, and I = is informed.]
Figure 9.2 Interrelationships in a quality system. (From Feigenbaum, A. V., Total Quality Control. 3rd ed. New York, NY:
McGraw-Hill, 1983. With permission.)
These remarks by the director reflect how the strategic focus of organizations has evolved over the years from mere product or service quality to overall performance excellence, or organizational quality, and how the Baldrige criteria provide a model for a system for achieving that excellence.
The BPEP publishes three sets of criteria for achieving performance excellence, called the Baldrige Excellence Framework (NIST 2017a): one for businesses/nonprofits, one for educational institutions, and one for health care organizations. We discuss here the criteria for businesses/nonprofits.
The criteria are created and updated by a board of overseers, appointed by the U.S. secretary of commerce and consisting of distinguished leaders from all sectors of the U.S. economy. The award categories and criteria are revised and published
every two years. The following account is based on the 2017–2018 documents (NIST
2017a).
The NIST also publishes a shorter version of the Baldrige Excellence Framework,
called Baldrige Excellence Builder (NIST 2017b), which serves as an introduction to
the Framework and can be used by an organization for self-assessment to identify and
improve what is critical to the organization’s success.
The Framework, including the criteria, serves two main purposes. The criteria for performance excellence are organized under the following seven categories:
Leadership
Strategic planning
Customers
Measurement, analysis, and knowledge management
Workforce
Operations
Results
Each criterion is divided into a certain number of “items,” with the total num-
ber of items being 17. The award uses a point system, with a certain number of
points being assigned to each item, and the total for all items being equal to 1000
(see Figure 9.3). The points assigned to the items, and thus to the criteria, can be interpreted as the level of importance that the board of overseers attributes to the items, and to the criteria, for achieving performance excellence. Performance
excellence can be interpreted as the “aggregate” of excellence in product or service
quality, customer satisfaction, operational and financial results, relationship with
employees and business partners, and discharge of public responsibility as a cor-
porate citizen.
[Figure 9.3 about here: the list of criteria items and their point values, beginning with the Organizational Profile (P.1 Organizational Description; P.2 Organizational Situation).]
Figure 9.3 Malcolm Baldrige National Quality Award Criteria for Performance Excellence Items and Point Values. (From
Baldrige Excellence Framework, 2017–2018 www.nist.gov/baldrige.)
The interrelationship among the criteria, showing how together they form an inte-
grated process for managing an organization and contribute to excellence in perfor-
mance, is shown in Figure 9.4, which is reproduced from the NIST document (NIST
2017a).
The criteria, the items, and the requirements under them are explained through a
series of questions in the Baldrige Excellence Framework. These questions have been
recast below into positive statements under each criterion. These statements can serve
as guidelines for setting up a management system modeled after the Baldrige criteria.
One important point should be made here: the recommendations that follow are messages implied in the questions, as interpreted by the authors. The Baldrige Framework does not prescribe how businesses should organize themselves. Readers are referred to the original Excellence Framework for more details, especially if they are contemplating creating a system based on the Baldrige model. A particularly good reference on how to implement a quality management system in line with the Baldrige criteria is Brown (2014).
[Figure 9.4 about here: an overview diagram showing how the criteria categories, including Strategy, Customers, Workforce, and Operations, fit together within the Organizational Profile.]
Figure 9.4 Malcolm Baldrige National Award - Criteria for Performance Excellence Overview and Structure. (From Baldrige Excellence Framework, 2017–2018, www.nist.gov/baldrige.)
9.5.1 Criterion 1: Leadership
9.5.1.1 Senior Leadership
9.5.2.1 Strategy Development
9.5.2.2 Strategy Implementation
9.5.3 Criterion 3: Customers
a. Customer Listening
1. Current Customers: Organizations should have identified the processes
by which they will gather feedback from current customers on the prod-
ucts and services provided. The methods should be tailored according
to the type of customers, customer groups, or market segments. They
should consider using social media and web-based technologies to listen to the customer, as appropriate. They should have plans to receive immediate, actionable feedback on the quality of products, customer support, and transactions.
9.5.3.2 Customer Engagement
9.5.4 Criterion 4: Measurement, Analysis, and Knowledge Management
a. Performance Measurement
1. Organizations should institute processes for collecting data for tracking
daily operations and overall organizational performance, including prog-
ress relative to strategic objectives and action plans. They should have
defined their key organizational performance measures, as well as key
short- and long-term financial measures. They should have determined
the frequency for collecting data on these measures.
2. They should collect comparative data from time to time: among competitors, among customer segments, etc.
3. They should collect data from customers on their preferences and levels of satisfaction, and aggregate data on complaints and their resolutions, including data from social media, so as to build a customer-focused culture and to support fact-based decision making.
4. The performance measurement system should be kept current to meet the
current business needs of the organization, and should be agile, that is, sensi-
tive to rapid or unexpected changes in the organization or in its environment.
b. Performance Analysis and Review
Organizations should have well-defined procedures for review and analy-
sis of performance data, comparative data, and customer data to obtain valid
results that the organization and senior leaders can use. The results should
be used to assess the organizational success, competitive performance, finan-
cial health, and progress toward meeting strategic objectives. They should use
reviews to assess their ability to respond to changing organizational needs and
challenges in the environment.
c. Performance Improvement
1. Organizations should have procedures to project future performance based
on findings in the performance reviews and be able to reconcile differences
between projections and those used for making plans.
2. They should translate the performance reviews into action plans for continu-
ous improvements and innovations. The review information should be passed
on to workgroups and functional-level operations to support their decision
making. Where appropriate, the information from these reviews should be
shared with suppliers for their actions to support the organization’s goals.
9.5.5 Criterion 5: Workforce
9.5.5.1 Workforce Environment
b. Workforce Climate
1. Workplace Environment: Organizations should have procedures to ensure
and improve workforce health, safety, and security. They should have perfor-
mance measures to assess these factors in the work environment, and set goals
for these measures and strive to achieve them. If there is such a need, different
measures and goals must be created for different workplace environments.
2. Workforce Benefits and Policies: They should have policies, services, and
benefits to enhance the climate for the workforce. These may have to be
different for the different groups to suit their needs.
9.5.5.2 Workforce Engagement
9.5.6 Criterion 6: Operations
9.5.6.1 Work Processes
9.5.6.2 Operational Effectiveness
9.5.7 Criterion 7: Results
9.5.7.2 Customer Results
a. Customer Satisfaction
Organizations should be able to present the levels and trends in key mea-
sures or indicators of customer satisfaction and dissatisfaction, and present a
comparison with the customer satisfaction levels of their competitors or other
organizations offering similar products. They should also have differentiation
of these measures by product offerings, customer groups, and market seg-
ments, as appropriate.
b. Customer Engagement
Organizations should be able to present the levels and trends in key mea-
sures or indicators of customer engagement including those for building cus-
tomer relationships. The measures should be followed over the lifecycle of
customers. The measures should be compared with the customer satisfaction
levels of the competitors or other organizations offering similar products.
They should also have differentiation of these measures by product offerings,
customer groups, and market segments, as appropriate.
9.5.7.3 Workforce Results
a. Workforce-Focused Results
1. Workforce Capability and Capacity: Organizations should use key measures to evaluate the current levels and trends in workforce capability and capacity, including appropriate skills and staffing levels. These results should show how the measures differ by the diversity of the workforce and by workforce groups and segments, as appropriate.
2. Workforce Climate: Organizations should use key measures to evaluate the current levels and trends in workforce climate, including workforce health, security, accessibility, and services and benefits. These results should show how the measures differ by the diversity of the workforce and by workforce groups and segments, as appropriate.
3. Workforce Engagement: Organizations should use key measures to evaluate the current levels and trends in workforce satisfaction and engagement. These results should show how the measures differ by the diversity of the workforce and by workforce groups and segments, as appropriate.
4. Workforce Development: Organizations should use key measures to evaluate the current levels and trends in workforce and leader development. These results should show how the measures differ by the diversity of the workforce and by workforce groups and segments, as appropriate.
The ISO 9000 standards are a set of international standards containing the requirements for building a basic quality management system (QMS). The standards were initially issued in 1987
by the International Organization for Standardization (ISO), which is a worldwide
federation of national standards bodies. A minor revision of the standards was issued
in 1994, and the third edition, issued in 2000, incorporated many changes to the
original version to make it a modern template for building a QMS. The standards were
revised again as described below.
The original edition of the ISO 9000 standards consisted of five documents—ISO
9000, ISO 9001, ISO 9002, ISO 9003, and ISO 9004. Of these, ISO 9000 had the
preamble, definition of terms, and instructions on how to use the rest of the docu-
ments; and ISO 9004 contained guidelines on how to establish a QMS that is speci-
fied through the “requirements” in ISO 9001, ISO 9002, and ISO 9003. A QMS
could be certified by independent auditors, upon verification, that the system satisfies
the requirements. The three standards—ISO 9001, ISO 9002, and ISO 9003—differed
in the scope of the system to be certified. ISO 9001 was to be used with a system of
the largest scope, in which the organization’s responsibilities included finding the
needs of the market, designing the product to meet those needs, and making, deliver-
ing, and helping in the product’s installation and use. The ISO 9002 and ISO 9003
were to be used by organizations with more limited responsibilities, ISO 9002 being
applicable when products are made according to a customer’s design, and ISO 9003
being applicable when the function was only distributing products designed and made
by another organization. In a way, ISO 9002 and ISO 9003 were subsets of ISO 9001.
In the 2000 version of the standards, modifications made to ISO 9001 rendered
ISO 9002 and ISO 9003 redundant, and so these were eliminated, resulting in the
new set consisting only of ISO 9000, ISO 9001, and ISO 9004. The numbers 9002 and 9003 were retired while the numbers 9001 and 9004 were kept, presumably to maintain alignment between the older and newer versions of the standards. This explains why ISO 9000:2000 and later versions contain only ISO 9001 and ISO 9004 and are missing the numbers 9002 and 9003.
ISO 9000 was revised in 2005 and describes, as the earlier edition did, the funda-
mentals of quality management systems and defines the terminology relative to them.
ISO 9001 was revised in 2008 and again in 2015. The major objective of the 2015 revision was to achieve consistency and compatibility, in language and terminology, among all management system standards, such as ISO 45001 for Occupational Health and Safety Management and ISO 14001:2015 for Environmental Management, so as to reduce conflicts and duplication. The revision also makes it easier to integrate all the relevant management systems into one, thereby minimizing documentation needs.
ISO 9004 was last revised in 2009; it provides guidelines for organizing a QMS that progresses beyond the requirements of ISO 9001.
The discussion that follows is based on the 5th edition of the standard ISO
9001:2015. The standard specifies the requirements of a QMS, which, when adopted
by an organization, will help in providing products that will satisfy customer require-
ments and meet applicable statutory and regulatory requirements. As mentioned ear-
lier, an organization’s quality system can be certified against this standard when there
is a need to demonstrate such ability to meet customer and statutory and regulatory
requirements.
So, counting all the latest revisions, the current version of the standards includes:
ISO 9000:2015 Quality Management Systems—Fundamentals and Vocabulary
(ANSI/ISO/ASQ Q9000:2015)
ISO 9001:2015 Quality Management Systems—Requirement
(ANSI/ISO/ASQ Q9001:2015)
ISO 9004:2009 Managing for the Sustained Success of an Organization—A
Quality Management Approach (ANSI/ISO/ASQ Q9004:2009)
Although ISO 9000 is just one of the documents in the set of standards, the entire
set is often referred to as “ISO 9000 Quality Management Standards.”
The standards recognize seven quality management principles (QMPs), on which the quality management standards are based. These “principles” are basic beliefs, rules, or norms that guide how things are done. The principles are described below.
QMP 1 Customer Focus Organizations should know who their customers are and
understand the current and future needs of those customers and strive to meet and
exceed the customers’ needs and expectations. They should align the organization’s objectives with the needs of the customers, communicate customer needs throughout the organization, design, produce, and deliver products to meet customer needs, measure and monitor customer satisfaction, and manage the relationship with customers so as to sustain it.
QMP 2 Leadership Leaders should establish the mission and vision, and the strategy to achieve them, and communicate these throughout the organization. They should create
a set of guiding values of fairness, ethical behavior, trust, and integrity in the organi-
zation. They should commit themselves to quality goals and encourage such com-
mitment organization-wide. They should make sure they provide required resources
and training to the people, and allow them authority to act with accountability. They
should inspire, encourage, and recognize people’s contribution.
QMP 3 Engagement of People People at all levels should be involved in the pursuit
of the chosen quality goals so that all their abilities are fully utilized for the benefit
of the organization. This requires communicating to people the importance of their individual contributions, collaborative effort among people, open discussion and sharing of knowledge, and empowering people to take initiatives without fear. People’s contributions should be acknowledged and rewarded. Surveys should be conducted to assess people’s satisfaction, and the results should be shared with them and acted upon where improvements are necessary.
Figure 9.5 Model of a quality management system (the system is a process made up of several interconnected sub-processes).
QMP 4 Process Approach Organizations should view their entire system as made up of processes and assign authority, responsibility, and accountability for the individual processes. They should make sure that resources and capabilities are available to operate the processes, and manage the processes and their interrelationships so as to maximize the organization’s quality objectives. They should make sure the necessary information is available to operate and improve the processes. They should evaluate the individual processes and the overall system, and manage risks that can affect the output of the processes and the output of the overall system.
the expertise in these methods, and that the results of analysis are used in making
improvements—properly moderated by experience and intuition.
9.7 The Requirements of ISO 9001:2015 (A Student-Friendly Version)
9.7.1 Scope
This International Standard is used where there is a need to demonstrate the ability
of an organization to provide products and services that consistently meet customer requirements and applicable statutory and regulatory requirements.
9.7.2 Normative Reference
The terms used in this standard have the meanings defined in ISO 9000:2015.
9.7.4.1 Understanding the Organization and its Context This requirement says that
the organization should be aware of its mission and vision and should know its
strengths and weaknesses, internal and external, which influence its ability to achieve its vision (objectives). The external issues could be related to technology, com-
petition, market, social and economic environment, and legal restrictions. The
internal issues could be related to capacity and capabilities of the processes in the
organization.
9.7.4.2 Understanding the Needs and Expectation of Interested Parties This requirement
says the organization should understand who the “interested parties” are to the QMS.
The term “interested parties” includes the customers, suppliers, investors, employees,
the community where the organization is located, and the whole society where the
organization’s products are used. (The term “product” has been used in this sum-
mary as an abbreviation of the phrase “products and services” given in the original
standard.) Also note, the term “interested parties” is used in the standard to represent
those who are normally referred to as “stakeholders.”
The organization should learn what the requirements of the stakeholders are,
requirements that are expected to be delivered by the quality system.
9.7.4.3 Determining the Scope of the Quality Management System The organization should
determine the scope of the QMS and make it a part of the record defining the prod-
ucts covered and the boundaries and applicability of this International Standard. The
scope should be determined, taking into account the requirements of the interested
parties, the external and internal issues mentioned above relating to the organization,
and the products the organization is capable of delivering.
The organization shall apply all the requirements of the International Standard
if they are applicable within the scope envisaged. In case any of the requirements of
the standard is determined not applicable, such determination should be made part
of the record. Such exclusions are admissible only if they do not affect the organiza-
tion’s ability or responsibility to deliver to the customer products that conform to their
requirements and enhance their satisfaction.
9.7.4.4 Quality Management System and its Processes This is the requirement where the
standard stipulates that the organization shall establish, implement, maintain, and
continually improve a QMS in accordance with this International Standard. It goes
into details of how to establish the system, such as determining the processes needed
for the system, providing inputs, determining the sequence and interaction of the
processes, assigning responsibilities, ensuring their effectiveness by measuring their
performance using proper measurements, and monitoring and controlling them. They
should continuously monitor the performance of the processes and implement changes
where needed.
The organization should maintain recorded data to support operation of these pro-
cesses as well as to record that the processes performed as planned.
9.7.5 Leadership
9.7.5.2 Policy The top management shall establish, implement, and maintain the
quality policy in accordance with their needs. The quality policy shall be maintained
as a record and be communicated, understood, and applied. It should also be available
to relevant interested parties.
9.7.5.3 Organizational Roles, Responsibilities, and Authorities The top management shall
make sure that responsibilities and authorities for the relevant roles in the QMS are
assigned, communicated, and understood within the organization, for making sure that
the QMS conforms to the International Standard, for ensuring that the processes are
delivering intended results, for measuring and reporting performance of the QMS to
top management including opportunities for improvement, and for ensuring integrity of
the system during the times when changes to the system are planned and implemented.
9.7.6 Planning
9.7.6.1 Actions to Address Risks and Opportunities While planning a QMS taking into
account the needs of its interested parties and its own strengths and weaknesses, the
organization shall consider the risks and opportunities that need to be addressed to
assure that the QMS achieves its intended results, enhance desired effects, prevent or
minimize undesired effects, and achieve improvement. The organization shall plan
action to address these risks and opportunities.
They shall plan on how to integrate the action plans into a QMS, and how to evalu-
ate the effectiveness of these actions.
9.7.6.2 Quality Objectives and Planning to Achieve Them The organization shall establish
quality objectives at relevant functions, levels, and processes as needed for the QMS.
The objectives shall be measurable, relevant to meeting customer requirements, and aimed at enhancing customer satisfaction. These objectives shall be monitored, communicated,
and updated as appropriate.
The planning for quality objectives shall include information on what will be done,
using what resources, by whom, when and how the objectives will be evaluated.
9.7.6.3 Planning of Changes If there is need to make changes to the QMS, it shall be
done in a planned manner considering the purpose of the changes and their potential
consequences and availability of resources, so that the integrity of the QMS remains
intact and responsibilities and authorities are properly reallocated.
9.7.7 Support
9.7.7.1 Resources The organization shall determine and provide the resources needed
for establishing, implementing, maintaining, and continually improving the QMS. This includes:
The people (staffing) required for effective implementation of the QMS.
Infrastructure such as buildings and utilities, equipment including hardware
and software, transportation equipment, and information and communica-
tion technology.
Proper environment for operation of processes, which includes several human and
physical factors, such as social (e.g., non-discriminatory, non-confrontational),
psychological (e.g., non-stressful, non-burnout) and physical factors (e.g.,
temperature, humidity, airflow, noise).
Monitoring and measuring instruments suitable for the measurements to be taken, maintained so that they remain fit for use, with documentation retained as evidence of their fitness.
When traceability of instruments to national standards is a requirement, the instru-
ments should be calibrated at specified intervals against applicable national or inter-
national standards and the instruments should be identified regarding their status of
calibration. The instruments should be protected from damage or deterioration that will
invalidate the calibration. If a measuring instrument was found unfit for the intended
use, action should be taken to verify the validity of previous measurements made by
the unfit instrument, and appropriate action should be taken to protect the customer.
The organization shall determine the knowledge and expertise needed to perform
the operations of its processes, and this knowledge shall be made available to the extent
necessary and updated as needed. This knowledge could be acquired from internal sources, such as past failures, improvement projects, overall experience, or intellectual property. The knowledge could also come from external sources such
as from customers, suppliers, standards, conferences, or academia.
9.7.7.3 Awareness The organization shall make sure that the people working in the
organization are aware of the quality policy, relevant quality objectives, the individual’s
contribution to the effectiveness of the system, and the possible results arising out of
non-conformance to the system requirements.
The organization shall ensure that proper identification (title, date, author, or
reference number), format, and record of review and approval are included in the
documentation.
The documentation shall be controlled so that it is available for use when and
where needed and is adequately protected against loss of confidentiality, improper use, or loss of integrity.
The documentation shall also be controlled by deciding the distribution, access,
retrieval and use; storage and preservation for legibility; control of changes with
proper version identification; retention; and disposition.
Documentation of external origin needed in the system shall be identified and
controlled.
Documentation retained as evidence of conformity should be protected from pos-
sible alterations.
9.7.8 Operation
9.7.8.1 Operation Planning and Control The organization shall plan, implement, and
control the processes needed for providing the products. This includes determining
requirements of the processes as well as the criteria for the process performance and
product acceptance, determining the resources needed, implementing control of the
processes, and retaining records to show that the processes have been carried out as
planned, and demonstrating that the products conform to their requirements.
The organization shall also make sure that the outsourced processes are controlled.
9.7.8.2 Requirements for Products and Services The organization shall communicate with
the customer for providing/receiving information relating to products and services,
handling enquiries, concluding contracts, and orders. The organization shall deter-
mine requirements for the products such that they are clearly defined and applicable
statutory and regulatory requirements are included. The organization shall review
the requirements from the point of view of its ability to meet those requirements.
The review should take into account the requirements not stated by the customer,
yet needed for the intended use, when such needs are known. The organization shall
retain record of the results of the review and on any new changes to originally speci-
fied requirements when accepted and confirmed.
9.7.8.3 Design and Development of Products and Services The organization shall estab-
lish, implement, and maintain a design and development process that is appropriate
for the subsequent making and delivery of the products. The process will take into
account the nature and complexity of the design, design and development reviews
needed, design verification and validation needed, responsibilities and authorities
involved, internal and external resources needed, the need for control of the interface
among persons involved in the design, the need for involvement of customers and
users, the requirements of the production process, the level of involvement of the
customers in the design process, and the record that is needed to demonstrate that the
design and development requirements were met.
The organization shall determine the inputs needed for the design and development
process for any particular design. They include functional and performance require-
ment of the product, information derived from experience of previous models, statu-
tory and regulatory requirements, standards and codes the organization has adopted,
and the potential consequences of failure of the product. The organization shall retain
record on the inputs used in the design and development activities.
The organization shall apply control to the design and development process to
ensure the results expected are clearly defined; reviews are conducted to verify that
the design outputs meet the expected results; verification is done to ensure that the
design outputs meet the requirements; validation is done to ensure that the resulting
products (made out of the design) meet the intended use; necessary actions are taken
to resolve issues discovered during reviews, verification, and validation; and record is
retained on all these design and development activities.
The organization shall ensure that the design outputs meet input requirements for
the subsequent production process, include monitoring and measuring requirements,
and specify the essential product characteristics along with their acceptance criteria,
and safe provision. The organization shall retain record on all these activities.
The organization shall ensure that any changes to the design outputs are evaluated
for any adverse effect they may have on the original design, and record is retained on
the reviews made on the changes, authorization of the changes, and the action taken
to prevent adverse effect due to the changes.
9.7.8.5 Production and Service Provision The organization shall implement production
of products under controlled conditions. They shall make sure they have records avail-
able defining the characteristics of the products to be produced and the results to be
achieved, suitable monitoring and measuring devices are available, and monitoring
and measuring activities are implemented at appropriate stages to verify that control of
processes and acceptance criteria for products have been met. Suitable infrastructure
and environment for the processes should be made available along with competent
people with required qualifications. Periodic validation of the ability of the process
to produce desired results, where resulting output cannot be verified, shall be imple-
mented. Actions are needed to prevent human error and to implement release, deliv-
ery, and post-delivery activities.
The organization shall use suitable means to identify outputs when traceability is
needed, shall identify status of outputs with respect to measurement requirements
throughout the production process, shall control the unique identification of the out-
puts, and shall retain record to enable traceability.
The organization shall exercise care with property belonging to customers or external
providers, suitably identifying them, protecting them, and safeguarding them. If any
such property is lost or damaged, the organization shall report to the external provider
and keep record on such occurrences.
The organization shall preserve the products during production and subsequent
handling to the extent necessary to ensure conformity to requirements (when deliv-
ered to customer).
The organization shall meet the requirements of post-delivery activities associated
with the products including statutory and regulatory requirements, potential unde-
sired consequences associated with the product, and customer needs.
The organization shall review and control changes to the production process to the
extent necessary to conform to the requirements and shall retain record on such changes.
9.7.8.7 Control of Nonconforming Outputs The organization shall ensure that outputs
that do not conform to the requirements are identified and prevented from being
delivered or used unintentionally. Nonconforming products can be corrected; segregated and contained; or accepted under concession with the customer’s authorization.
The organization shall retain record that describes the nonconformity, actions
taken, concessions obtained if any, and identification of the authority deciding the
action in respect of the nonconformity.
9.7.9 Performance Evaluation
9.7.9.1 Monitoring, Measurement Analysis, and Evaluation The organization shall evalu-
ate the performance and effectiveness of the QMS and keep appropriate documenta-
tion of the outcomes. The organization shall determine what to monitor, using what
measurements, how often, and how the measurement data will be analyzed and per-
formance evaluated.
The organization shall monitor customers’ perception of how well their needs and
expectations have been fulfilled. The organization shall determine the methods and
measurements to be taken and how they will be analyzed.
The organization shall have measurements, analyses, and results to evaluate: conformity of products to customer needs; the degree of customer satisfaction; the effectiveness of the QMS and whether it is being continually improved; effective implementation of plans; effectiveness of handling risks and opportunities; and performance of external suppliers.
9.7.9.2 Internal Audit The organization shall conduct internal audits at scheduled
intervals to verify if the QMS conforms to this International Standard as well as con-
forms to the organization’s own requirements, and if the system is effectively imple-
mented and maintained.
The audits shall be planned and implemented at time intervals chosen based on
the importance of processes involved, changes in the organization, and the results of
previous audits. The scope and audit criteria shall be defined for each audit and audi-
tors shall be selected to ensure objectivity and impartiality. The results of the audits
shall be reported to relevant management personnel, and appropriate corrective action
shall be taken without undue delay. The results of the audits as well as actions taken in
response shall be retained as record.
9.7.10 Improvement
9.7.10.2 Nonconformity and Corrective Action When a nonconformity occurs, the orga-
nization shall take action to control and correct it and consider actions to eliminate the
causes of the nonconformity, so that the nonconformity does not recur or occur elsewhere. Any changes to the quality management system considered necessary shall be made.
The organization shall retain record on the nature of the nonconformities, actions
taken, and the results of the corrective action.
9.7.10.3 Continual Improvement The organization shall continually improve the QMS
to improve its suitability, adequacy, and effectiveness using the results of analysis and
evaluation from internal audits, process performance measures, customer satisfaction
measures, and other similar measures of performance.
9.8 The Six Sigma System
“The Six Sigma system is a comprehensive and flexible system for achieving, sustain-
ing and maximizing business success” (Pande, Neuman, and Cavanagh 2000). The
Six Sigma process strives to achieve this by careful understanding of customer needs,
use of facts through data collection and analysis, and improving, and re-engineering
processes to increase customer satisfaction and business excellence. Business excel-
lence here includes cost reduction, productivity improvement, growth of market share,
customer retention, cycle time reduction, defect reduction, change to a quality culture,
and new product/service development.
The Six Sigma methodology was born in 1987 in Motorola’s communication sector as
an approach to track and compare performance against customer requirements, and
to achieve an ambitious target of near-perfect, 6σ (or 6-sigma) quality, in the products
produced. The 6σ quality means that the defect rate in the production of each compo-
nent of an assembly (e.g., a cellular phone) will not be more than 3.4 parts per million
(ppm) opportunities. The Six Sigma process, or the systematic approach to process
improvement to attain 6σ quality, later spread throughout the company with strong
backing from the then-chairman Robert Galvin. In the 1980s, this process helped the
company to achieve enormous improvements in quality: 10-fold improvement every
two years, or 100-fold improvement in four years. Two years after they set out on the Six Sigma journey, they received the prestigious national quality award, the Baldrige Award. In the 10 years between 1987 and 1997, the company increased its sales fivefold, saved
$14 billion from Six Sigma projects, and saw its stock prices increase at an annual rate
of 21.3% (Pande et al., 2000). All this happened to Motorola when it was facing tough
competition from Japanese competitors.
Several other organizations followed the Six Sigma process model and reported enor-
mous success in reducing waste and winning customer satisfaction. Two important
examples were Allied Signal (Honeywell) and the General Electric Co.
[Note: the term “6σ” is used here to refer to the quality of a process, and the
term “Six Sigma” is used to refer to the systematic approach used to achieve the 6σ
quality.]
Theme 1: Focus on the Customer The Six Sigma process begins with the measure-
ment of customer satisfaction on a dynamic basis, and the Six Sigma improvements
are evaluated based on how they impact the customer. Customer requirements are
assessed first, and the performance of the organization’s product is then evaluated
against those requirements. Next, the unmet needs are addressed through product or
process change. Customers get the highest priority in a Six Sigma organization—an
organization that adopts the Six Sigma process to improve “all” its processes.
Theme 2: Data and Fact-Driven Management The Six Sigma philosophy emphasizes
the need for taking measurements on process performance, product performance, cus-
tomer satisfaction, and so on. The Six Sigma process does not allow decisions based on
opinions, assumptions, and gut feelings. Process managers and problem solvers should
decide what information is needed and arrange to gather it. The data should then be
analyzed using the appropriate tools, and the information generated must be used to
make decisions on process and product improvements. This “closed-loop” system, in
which measurements from output are used to take corrective action to improve a pro-
cess, is an important hallmark of the Six Sigma system.
Theme 6: Drive for Perfection (with Tolerance for Failure) The most important theme
of the Six Sigma process is driving toward near-perfection—that is, not more than
3.4 defects per million opportunities (DPMO) in every process. This is achieved
through diligent efforts in process analysis and process improvement, in repeated
iterations, on a never-ending basis. This is the way to satisfy and then delight the
customer, whose standards usually keep changing—and increasing. There may be
occasional failures when driving toward perfection, but a Six Sigma organization will
not be deterred by these. A Six Sigma organization learns from failures and makes
progress toward perfection (Pande et al., 2000).
The 6σ measure used to signify the capability of processes comes from the represen-
tation of a process variation using the normal distribution. Assuming that a process
follows a normal distribution, if the measure of variability of this process, the stan-
dard deviation σ, is equal to one-sixth of the distance of a specification limit from the
specification center (called the half-spec), then the process is said to have 6σ quality or
6σ capability and is called a “6σ process.” (A little bit of reflection would show that a
6σ process is the same as a process with Cp = 2.0.) If the variability is larger—that is,
if the value of σ is larger, such that it is only, say, one-fourth of the half-spec, then the
process is said to have 4σ quality, and so on. So, we can determine the capability of a
process in number of sigmas if we have an estimate for the process standard deviation
and values for the upper and lower spec limits.
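To make this calculation concrete, the short Python sketch below (our illustration, not part of the original discussion) computes the sigma level and the corresponding Cp from assumed specification limits and an assumed estimate of the process standard deviation; all the numbers are made up for illustration.

def process_sigma_level(usl, lsl, sigma):
    """Number of standard deviations that fit between the spec center and a spec limit."""
    half_spec = (usl - lsl) / 2.0   # distance from the specification center to either limit
    return half_spec / sigma

# Assumed values: specification 10.0 +/- 0.6 and an estimated process sigma of 0.1
level = process_sigma_level(usl=10.6, lsl=9.4, sigma=0.1)
print(round(level, 2))      # 6.0, i.e., a "6-sigma" process
print(round(level / 3, 2))  # 2.0, the equivalent Cp, consistent with the Cp = 2.0 noted above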
When a process has 6σ quality, there will be no more than 3.4 ppm outside speci-
fication limits, even if the process is off-center from the target by a 1.5σ distance.
The proportions outside specifications under other process quality levels (sigma con-
ditions), in defects per million (DPM), and the corresponding yield or acceptable
proportions produced by such processes are shown in Table 9.1. (See Chapter 5 for
an explanation of how the figures in Table 9.1 are computed. Table 9.1 is a repeat of
Table 5.10.)
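The figures in Table 9.1 can be reproduced from the normal distribution. The following sketch, added here only as an illustration, computes the defects per million outside the specification limits for a given sigma level, allowing the process center to be off target by 1.5σ, the worst-case condition assumed in such tables.

from statistics import NormalDist

def dpm(sigma_level, shift=1.5):
    # Defects per million outside the +/- sigma_level * sigma limits when the
    # process mean is shifted 'shift' standard deviations from the target.
    nd = NormalDist()
    p_outside = (1 - nd.cdf(sigma_level - shift)) + nd.cdf(-(sigma_level + shift))
    return 1e6 * p_outside

print(round(dpm(6), 1))   # about 3.4 DPM for a 6-sigma process
print(round(dpm(4)))      # about 6210 DPM for a 4-sigma process
print(round(dpm(3)))      # about 66811 DPM for a 3-sigma process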
The above explanation of a 6σ process is valid if the quality characteristic is a mea-
surement, such as height, weight, and so on, and the proportion falling in any region
can be calculated using the normal distribution. If the characteristic is a countable
characteristic, such as the number of pin holes per square foot of glass, the number of dirty bottles in a skid, or the number of mail pieces delivered to the wrong address, then these mea-
sures cannot be assumed to be normally distributed and so the process quality in
number of sigmas cannot be computed using the above approach; therefore, a different
approach is used.
A (“large”) sample is taken from the process, and the proportion of defectives in the sample is calculated. The sigma level of the process is equated to the sigma level of a normal process producing the same level of defectives as in the sample data. Table 9.1 facilitates reading the sigma level of a process given the proportion of defectives in the sample. For example, if the sample data for an attribute showed that there
were 6200 DPM (0.62%), then the quality level of the process is 4σ because a nor-
mal process with 4σ capability will produce 0.62% outside specification in the worst
condition of its center. The DPM can be read as the number of defective units out of
every million units produced, or as number of DPMO. The latter reading is appropri-
ate for evaluating service processes such as postal deliveries, data entry, or inventory
verifications.
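The reverse calculation, reading a sigma level from an observed defect rate, can be done the same way. The sketch below is one way of expressing the table-lookup idea just described; it assumes, as above, the conventional 1.5σ off-center allowance.

from statistics import NormalDist

def sigma_level_from_dpm(defects_per_million, shift=1.5):
    # Sigma level of a process producing the given defects per million,
    # equated to a normal process whose mean is off-center by 'shift' sigma.
    p = defects_per_million / 1e6
    return NormalDist().inv_cdf(1 - p) + shift

print(round(sigma_level_from_dpm(6200), 2))   # about 4.0, matching the example above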
The people at Motorola, who created the sigma measure to designate quality levels
of processes, were motivated by the fact that this provides a simple, uniform man-
ner of designating the quality levels of processes regardless of whether the quality is
evaluated by measuring a characteristic or by counting units with a certain attribute.
Because of the availability of tables such as Table 9.1, the people at Motorola claimed
that one does not have to understand any statistical principles to be able to compute
process quality levels.
Six Sigma advocates claim several advantages for the 6σ measure as a single, uniform way of expressing the quality of a process.
This 6σ terminology has become widely accepted among quality professionals for
evaluating process quality (or capability) levels, although some honest statisticians
contend that the 6σ terminology has been a cause of confusion among quality work-
ers. For example, we hear people say “the larger the value of σ for a process, the better
the quality,” which may be correct in the Six Sigma language but is incorrect in the
language of statistics. What the Six Sigma people really mean is that “the larger the
number of sigmas that can fit between the process center and a customer’s specification limit (the half-spec), the smaller the value of σ; hence, the better the process quality.”
The Six Sigma system recognizes three major strategies for enhancing process quality.
These strategies are used in three different situations:
1. Process improvement
2. Process design/redesign
3. Process management
These three strategies are not mutually exclusive. They can be used together in any
given situation as appropriate. The process management strategy will usually follow
after the improvement or design/redesign strategies have been implemented.
Process Improvement
Process Design/Redesign
While the process improvement strategy will produce incremental changes, it some-
times may be necessary to revamp an entire process because incremental improve-
ments will not produce enough of a desired result. Then, the process design/redesign
strategy must be used.
This involves a complete rethinking of the process, based on inputs from customers. It is similar to the concept of re-engineering, used by some to denote redoing an entire process when it has become ineffective due to changes in technology or customer expectations.
Process Management
This is the strategy equivalent to what Dr. Juran called “holding the gains.” The
difference here, however, is in maintaining an additional watch as to whether the
process is responding to changes in the needs of the customer. In this strategy, pro-
cesses are documented, and customer needs are defined and updated on a regular
basis. Meaningful measures of process performance are defined and compared with
the measures of customer needs in real time. Quick, responsive action is taken when
the process performance falls short of customer needs and expectations.
Figure 9.6 The five-step improvement models for processes. (From Pande, P. S., R. P. Neuman, and R. L. Cavanagh, The
Six Sigma Way, McGraw-Hill, New York, NY, 2000. With permission.)
A five-step Six Sigma road map is recommended for organizations that want to start
implementing a Six Sigma system throughout the organization and become a Six
Sigma organization. The road map, shown in Figure 9.7, is explained further in the
following steps.
Step 1: Identify Core Processes and Key Customers
The objective of this step is to get a perspective on the core processes in the
organization, their interactions with one another, and a clear understanding
of the products they produce and the customers they serve. When this step is
completed, process charts, or “maps” of the core processes would have been
generated. This is like taking stock of what is going on in the organization as
preparation for the next step.
Step 2: Define Customer Requirements
The objective of this step is to establish standards for the products produced and the services provided, based on input from the customer, so that process performance can be evaluated against those standards.
Figure 9.7 The Six Sigma road map for an organization. (From Pande, P. S., R. P. Neuman, and R. L. Cavanagh, The Six Sigma Way, McGraw-Hill, New York, NY, 2000, with permission of McGraw-Hill.)
The Six Sigma system calls for a special organizational structure as well as continuous
and rigorous cooperation among all employees in an organization. The key players in
a Six Sigma organization and their roles in the Six Sigma process are as follows:
Executive leaders: Show highly visible top-down commitment, assume owner-
ship of the Six Sigma process, create vision and goals, identify opportunities,
allocate resources, and provide inspired leadership.
Project sponsor (champion): A line manager or owner of a process who identifies
and prioritizes project opportunities. Selects projects, provides resources, par-
ticipates in project execution, and removes barriers.
Master black belt: Works full-time for Six Sigma implementation. Has respon-
sibility for planning and providing technical support for the entire organi-
zation. Trains black belts, acts as a coach and mentor, and provides overall
leadership in Six Sigma implementation.
Black belts: Experts on Six Sigma tools. Work full-time on Six Sigma projects,
train green belts, lead teams, and provide assistance with Six Sigma tools (e.g.,
improvement methods, diagnostic tools, and statistical methods).
Green belts: Work part-time on projects with black belts. Integrate Six Sigma
methodology in daily work. Can lead small projects.
Yellow belts: All employees. Trained in quality awareness and are part-time par-
ticipants in teams. Contribute with process expertise.
Financial rep: Independent of project team. Determines the project costs and
savings. Reports project benefits.
The idea that a quality system should be created and maintained to produce qual-
ity products and services to meet customer needs was recognized by people such as
Dr. Deming, Dr. Juran, and Dr. Feigenbaum in the early 1950s, and each proposed
a model to create such a system. Although those models were only initial attempts
to create a quality management system, they provided the basic building blocks,
which were later used by others to create more complete models, such as the ISO
9000 standards, the Baldrige Award criteria, and the Six Sigma system. These three
systems, which incorporate the best experiences with quality systems, are the best
models currently available for creating a quality management system.
In the case of the ISO 9000 standards, certification can be obtained from reg-
istrars accredited by a government or quasi-governmental agency. (The Registrar
Accreditation Board in the United States is a quasi-governmental agency.) The regis-
trars, through their certified auditors, review the documents of an organization that
is aspiring to become certified, make a site visit, and, if satisfied, certify that the
organization meets the requirements of the standard in the production of specified
products. Such certification by a third party (i.e., a party other than the producer and
the customer) provides an objective evaluation of an organization’s quality system and
its ability to supply products and services to meet customer needs and meet (govern-
ment) statutory and regulatory requirements.
In a similar manner, the Baldrige Performance Excellence Program (BPEP) orga-
nization evaluates businesses that wish to receive an award, through volunteer groups
of examiners and judges, and decides on the final recipients of the award. An applicant
business will receive feedback from the examiners and judges on the status of their
quality management system, irrespective of whether or not they receive the award.
When an organization receives finalist status and a site visit by the judges, it is already
recognition of the healthy status of their quality management system.
Many organizations have implemented two of these models, or all three models, and have achieved excellent business results. In those
cases, ISO 9000 is used first to build the foundation for the system, and then the
Baldrige model is used to strengthen and enhance the system.
Thus, an organization would do well to exploit the individual strengths of the mod-
els and implement a quality management system that best suits its own needs and
circumstances.
9.10 Exercise
9.10.1 Practice Problems
Deming System
9.1 What did Dr. Deming mean when he said, “Adopt a new philosophy”?
9.2 Why is 100% inspection undesirable? Why will 100% inspection not result
in 100% good products?
9.3 Why is a single source for supplies better than multiple sources according to
Dr. Deming?
9.4 According to Dr. Deming, what were the most important inhibitors to workers doing their work in American industry?
9.5 What is the role of a supervisor in Deming’s System?
9.6 According to Dr. Deming, why do slogans and numerical quotas not help in
achieving quality?
Juran System
9.7 What are the three components of the “Juran Trilogy”?
9.8 How does the concept of dominance help in process planning?
9.9 How does one select the locations where control must be exercised in a process?
9.10 What is the difference between sporadic and chronic deviations in processes?
9.11 Why is the first project chosen for quality improvement important?
9.12 What are the reasons for resistance to change while making improvements to
a process? How should this resistance be handled?
Baldrige System
9.13 What were the reasons for creating the Baldrige Award in the United States?
9.14 Why is the criterion “Measurement, analysis, and knowledge management,” fourth among the seven, central to the Baldrige Award criteria?
9.15 What are the business results on which the Baldrige Award focuses?
9.16 Why are business processes important to the excellence of a business?
9.17 The Baldrige Award focuses not only on product quality, but also on the busi-
ness results. Explain.
9.18 Figure 9.4 is commonly referred to as the “Baldrige burger.” Where is the
meat?
9.10.2 Mini-Projects
Mini-Project 9.1 The above set of 30 questions has been created to help students
understand the various systems in good detail. However, it is only one of several pos-
sible sets. Generate another set of 30 questions, six from each system, similar to but
different from the above set.
Mini-Project 9.2 Compare the three modern systems—Baldrige Award, ISO 9000,
and Six Sigma—and identify their differences.
References
ANSI/ISO/ASQ Q9000:2015. 2015. Milwaukee, WI: ASQ Quality Press.
ANSI/ISO/ASQ Q9001:2015. 2015. Milwaukee, WI: ASQ Quality Press.
Appendix 1
[Tables of the standard normal distribution Φ(z) and the percentiles tα,ν of the t-distribution appear here; their values are not reproduced.]
Percentiles of the χ² Distribution, χ²α,ν (column headings give the upper-tail area α; rows give the degrees of freedom ν)
ν 0.995 0.99 0.975 0.95 0.90 0.75 0.50 0.25 0.10 0.05 0.025 0.01 0.005 0.001
7 0.99 1.24 1.69 2.17 2.83 4.25 6.35 9.04 12.02 14.07 16.01 18.48 20.28 24.32
8 1.34 1.65 2.18 2.73 3.49 5.07 7.34 10.22 13.36 15.51 17.53 20.09 21.96 26.12
9 1.73 2.09 2.70 3.33 4.17 5.90 8.34 11.39 14.68 16.92 19.02 21.67 23.59 27.88
10 2.16 2.56 3.25 3.94 4.87 6.74 9.34 12.55 15.99 18.31 20.48 23.21 25.19 29.59
11 2.60 3.05 3.82 4.57 5.58 7.58 10.34 13.70 17.28 19.68 21.92 24.72 26.76 31.26
12 3.07 3.57 4.40 5.23 6.30 8.44 11.34 14.85 18.55 21.03 23.34 26.22 28.30 32.91
13 3.57 4.11 5.01 5.89 7.04 9.30 12.34 15.98 19.81 22.36 24.74 27.69 29.82 34.53
14 4.07 4.66 5.63 6.57 7.79 10.17 13.34 17.12 21.06 23.68 26.12 29.14 31.32 36.12
15 4.60 5.23 6.26 7.26 8.55 11.04 14.34 18.25 22.31 25.00 27.49 30.58 32.80 37.70
16 5.14 5.81 6.91 7.96 9.31 11.91 15.34 19.37 23.54 26.30 28.85 32.00 34.27 39.25
17 5.70 6.41 7.56 8.67 10.09 12.79 16.34 20.49 24.77 27.59 30.19 33.41 35.73 40.79
18 6.26 7.01 8.23 9.39 10.86 13.68 17.34 21.60 25.99 28.87 31.53 34.81 37.16 42.31
50 27.99 29.71 32.36 34.76 37.69 42.94 49.33 56.33 63.17 67.50 71.42 76.15 79.49 86.66
60 35.53 37.48 40.48 43.19 46.46 52.29 59.33 66.98 74.40 79.08 83.30 88.38 91.95 99.61
70 43.28 45.44 48.76 51.74 55.33 61.70 69.33 77.58 85.53 90.53 95.02 100.42 104.22 112.32
80 51.17 53.54 57.15 60.39 64.28 71.14 79.33 88.13 96.58 101.88 106.63 112.33 116.32 124.84
90 59.20 61.75 65.65 69.13 73.29 80.62 89.33 98.64 107.56 113.14 118.14 124.12 128.30 137.21
100 67.33 70.06 74.22 77.93 82.36 90.13 99.33 109.14 118.50 124.34 129.56 135.81 140.17 149.45
Table A.4 Factors for Calculating Limits for Variable Control Charts
n A A2 A3 B3 B4 c4 D1 D2 D3 D4 d2 d3
2 2.121 1.880 2.659 0 3.267 0.798 0 3.686 0 3.267 1.128 0.853
3 1.732 1.023 1.954 0 2.568 0.886 0 4.358 0 2.574 1.693 0.888
4 1.500 0.729 1.628 0 2.266 0.921 0 4.698 0 2.282 2.059 0.880
5 1.342 0.577 1.427 0 2.089 0.940 0 4.918 0 2.114 2.326 0.864
6 1.225 0.483 1.287 0.030 1.970 0.952 0 5.078 0 2.004 2.534 0.848
7 1.134 0.419 1.182 0.118 1.882 0.959 0.205 5.204 0.076 1.924 2.704 0.833
8 1.061 0.373 1.099 0.185 1.815 0.965 0.387 5.306 0.136 1.864 2.847 0.820
9 1.000 0.337 1.032 0.239 1.761 0.969 0.546 5.393 0.184 1.816 2.970 0.808
10 0.949 0.308 0.975 0.284 1.716 0.973 0.687 5.469 0.223 1.777 3.078 0.797
11 0.905 0.285 0.927 0.321 1.679 0.975 0.812 5.535 0.256 1.744 3.173 0.787
12 0.866 0.266 0.886 0.354 1.646 0.978 0.924 5.594 0.283 1.717 3.258 0.778
13 0.832 0.249 0.850 0.382 1.618 0.979 1.026 5.647 0.307 1.693 3.336 0.770
14 0.802 0.235 0.817 0.406 1.594 0.981 1.118 5.696 0.328 1.672 3.407 0.763
15 0.775 0.223 0.789 0.428 1.572 0.982 1.204 5.741 0.347 1.653 3.472 0.756
Source: (Abridged) from Table M of Duncan, A. J., Quality Control and Industrial Statistics, 4th ed, Homewood, IL: Richard
D. Irwin, 1974.
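As an illustration of how the factors in Table A.4 are commonly used (a sketch with made-up subgroup data, not an example taken from the book), the following Python code computes trial control limits for X̄ and R charts with subgroups of size five.

n = 5                            # subgroup size
A2, D3, D4 = 0.577, 0, 2.114     # factors for n = 5 from Table A.4

subgroups = [                    # made-up measurement data, three subgroups of five
    [24.6, 24.2, 25.0, 24.8, 24.5],
    [24.9, 24.4, 24.7, 25.1, 24.3],
    [24.1, 24.8, 24.6, 24.9, 24.4],
]

xbars = [sum(s) / n for s in subgroups]        # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges
xbarbar = sum(xbars) / len(xbars)              # grand average
rbar = sum(ranges) / len(ranges)               # average range

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                       # R chart limits
print(f"X-bar chart: UCL = {ucl_x:.3f}, CL = {xbarbar:.3f}, LCL = {lcl_x:.3f}")
print(f"R chart:     UCL = {ucl_r:.3f}, CL = {rbar:.3f}, LCL = {lcl_r:.3f}")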
Table A.6 Percentiles of the F-Distribution
95TH PERCENTILES OF THE F (ν1, ν2) = f0.05, ν1, ν2
ν2\ν1 1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54 2.51 2.47 2.43 2.38 2.34 2.30
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.29 2.25 2.20 2.16 2.11 2.07
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.08 2.04 1.99 1.95 1.90 1.84
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11 2.03 1.98 1.94 1.89 1.84 1.79 1.73
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.89 1.84 1.79 1.74 1.68 1.62
40 4.08 3.2 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84 1.79 1.74 1.69 1.64 1.58 1.51
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.70 1.65 1.59 1.53 1.47 1.39
120 3.92 3.07 2.68 2.45 2.29 2.17 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.61 1.55 1.50 1.43 1.35 1.25
∞ 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57 1.52 1.46 1.39 1.32 1.22 1.00
Table A.6 (Continued) Percentiles of the F-Distribution
99TH PERCENTILES OF THE F (ν1, ν2) = f0.01, ν1, ν2
ν2\ν1 1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
1 4052 5000 5403 5625 5764 5859 5928 5981 6022 6056 6106 6157 6209 6235 6261 6287 6313 6339 6366
2 98.5 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 99.4 99.42 99.42 99.45 99.46 99.47 99.47 99.48 99.49 99.5
3 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 27.23 27.05 26.87 26.69 26.60 26.50 26.41 26.32 26.22 26.12
4 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 14.55 14.37 14.20 14.02 13.93 13.84 13.75 13.65 13.56 13.46
5 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 10.05 9.89 9.72 9.55 9.47 9.38 9.29 9.20 9.11 9.02
6 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56 7.40 7.31 7.23 7.14 7.06 6.97 6.88
7 12.25 9.95 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31 6.16 6.07 5.99 5.91 5.82 5.74 5.65
8 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52 5.36 5.28 5.20 5.12 5.03 4.95 4.86
9 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96 4.81 4.73 4.65 4.57 4.48 4.40 4.31
10 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56 4.41 4.33 4.25 4.17 4.08 4.00 3.91
12 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01 3.86 3.78 3.70 3.62 3.54 3.45 3.36
15 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52 3.37 3.29 3.21 3.13 3.05 2.96 2.87
20 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09 2.94 2.86 2.78 2.69 2.61 2.52 2.42
24 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.21 3.07 2.93 2.78 2.70 2.62 2.54 2.45 2.35 2.26
30 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70 2.55 2.47 2.39 2.30 2.21 2.11 2.01
40 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.66 2.52 2.37 2.29 2.20 2.11 2.02 1.92 1.80
60 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35 2.20 2.12 2.03 1.94 1.84 1.73 1.60
120 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19 2.03 1.95 1.86 1.76 1.66 1.53 1.38
∞ 6.63 4.61 3.78 3.32 3.02 2.8 2.64 2.51 2.41 2.32 2.18 2.04 1.88 1.79 1.70 1.59 1.47 1.32 1.00
Appendix 2: Answers to Selected Exercises
CHAPTER 1
1.1 (d)
1.3 (a)
1.5 (b)
1.7 (b)
1.9 (d)
1.11 (b)
1.13 (b)
1.15 (e)
CHAPTER 2
2.1 The chance the game will end in 180 minutes in American League: 80%.
The chance the game will end in 180 minutes in National League: 85%.
The time before which 95% of the games will end in American League:
200 minutes.
The time before which 95% of the games will end in National League:
200 minutes.
2.3 X̄ = 1891; S = 4.06; Median = 1891; Mode = 1891; R = 25; IQR = 2.
2.5 a. S = {ME1, ME00, ME01, ME02, EE1, EE00, EE01, EE02, CE1,
CE00, CE01, CE02, IE1, IE00, IE01, IE02, MfE1, MfE00, MfE01,
MfE02}.
b. A = {ME01, EE01, CE01, MfE01}.
2.7 S is shown with cross hatch in the figure below. Event A: {X > 24, Y < 120}
is shown in double hatch.
2.8
a. S = {B1B1, B1B2, B1B3, B1B4, B1W1, B1W2, B1W3, B1W4,
B2B1, B2B2, …, B2W3, B2W4,
B3B1, B3B2, …, B3W3, B3W4,
B4B1, B4B2, …, B4W3, B4W4,
W1B1, W1B2, …, W1W3, W1W4,
W2B1, W2B2, …, W2W3, W2W4,
W3B1, W3B2, …, W3W3, W3W4,
W4B1, W4B2, …, W4W3, W4W4}
Number of elements in the sample space = 64
b. A (both black) = {B1B1, B1B2, B1B3, B1B4,
B2B1, B2B2, B2B3, B2B4,
B3B1, B3B2, B3B3, B3B4,
B4B1, B4B2, B4B3, B4B4}
2.29 pmf of X:
x 1 2
p(x) .5 .5
μX = 1.5
σX² = 0.25
2.31 a = 4
F(x) = x⁴, 0 ≤ x ≤ 1
F(1/2) = 1/16
F(3/4) = 81/256
P(1/2 ≤ X ≤ 3/4) = 65/256
μX = 4/5
σX² = 4/150
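These results for Exercise 2.31 can be checked numerically; a minimal sketch (the pdf f(x) = 4x³ on [0, 1] follows from a = 4 and F(x) = x⁴):

# Sketch: numerical check of the Exercise 2.31 answers for F(x) = x**4 on [0, 1].
from scipy import integrate

F = lambda x: x**4
print(F(0.5), F(0.75), F(0.75) - F(0.5))                           # 1/16, 81/256, 65/256
mean = integrate.quad(lambda x: x * 4 * x**3, 0, 1)[0]             # 4/5
var = integrate.quad(lambda x: (x - mean)**2 * 4 * x**3, 0, 1)[0]  # 4/150
print(round(mean, 4), round(var, 4))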
2.33 0.85
2.35 7
2.37 a. 0.0
b. 0.0228
c. 0.9772
2.39 LSL = 0.436
2.41 0.614
2.43 [168.85, 191.15]
2.45 [0.99, 1.31]
2.47 99% CI for μ: [6.215, 6.253]
99% CI for σ2: [0.000273, 0.0025]
99% CI for σ: [0.0165, 0.0464]
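Intervals such as those in Exercise 2.47 follow the standard t and chi-square formulas. A minimal sketch with placeholder summary statistics (the values of n, xbar, and s below are not the exercise data):

# Sketch: 99% confidence intervals for mu, sigma^2, and sigma from summary
# statistics. n, xbar, and s are placeholders, not the Exercise 2.47 data.
from math import sqrt
from scipy.stats import t, chi2

n, xbar, s = 13, 6.23, 0.025   # hypothetical sample size, mean, std. dev.
alpha = 0.01

half = t.ppf(1 - alpha / 2, n - 1) * s / sqrt(n)
ci_mu = (xbar - half, xbar + half)
ci_var = ((n - 1) * s**2 / chi2.ppf(1 - alpha / 2, n - 1),
          (n - 1) * s**2 / chi2.ppf(alpha / 2, n - 1))
ci_sd = (sqrt(ci_var[0]), sqrt(ci_var[1]))
print(ci_mu, ci_var, ci_sd)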
2.49 There is no reason to believe the yield is less than 90% at α = 0.05.
2.51 Both machines are filling equal volumes.
2.53 The data do not come from a normal population.
CHAPTER 3
3.1 Pearson correlation coefficient of the 1st-survey and 2nd-survey results = 0.668; P-value = 0.025. So, reject H0: Pearson coefficient = 0; the two sets of results are correlated, and the questionnaire is reliable.
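A test like the one in 3.1 can be reproduced with any package that reports the Pearson coefficient and its P-value. A minimal sketch (the arrays below are placeholders, not the survey data):

# Sketch: test H0: Pearson coefficient = 0 for two sets of survey scores.
# The arrays are placeholders, not the Exercise 3.1 data.
from scipy.stats import pearsonr

survey1 = [4.1, 3.8, 4.5, 3.2, 4.0, 3.9, 4.4, 3.6, 4.2, 3.7, 4.3]
survey2 = [4.0, 3.5, 4.6, 3.4, 3.8, 4.1, 4.5, 3.3, 4.4, 3.6, 4.1]
r, p = pearsonr(survey1, survey2)
print(round(r, 3), round(p, 3))   # reject H0 if p < 0.05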
3.3 n = 21
3.5 The exponential seems a good fit for the data. The MTTF = 76.6 months.
3.7 MTTF = 10.625 years.
3.9 22.1% of the washing machines will need service during warranty.
CHAPTER 4
4.1 UCL(X̄) = 25.19, CL(X̄) = 24.61, LCL(X̄) = 24.03.
UCL(R) = 1.80, CL(R) = 0.79, LCL(R) = 0.
The process is not-in-control.
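The 4.1 limits follow the usual X̄-R chart formulas. A minimal sketch, assuming subgroups of size n = 4 (so A2 = 0.729, D3 = 0, D4 = 2.282, which is consistent with the limits quoted above):

# Sketch: X-bar and R chart limits from the Exercise 4.1 center lines.
# Assumes subgroups of size n = 4 (A2 = 0.729, D3 = 0, D4 = 2.282).
xbarbar, rbar = 24.61, 0.79
A2, D3, D4 = 0.729, 0.0, 2.282

ucl_x = xbarbar + A2 * rbar        # ~25.19
lcl_x = xbarbar - A2 * rbar        # ~24.03
ucl_r = D4 * rbar                  # ~1.80
lcl_r = D3 * rbar                  # 0
print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_r, 2), lcl_r)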
4.3 UCL(X̄) = 37.19, CL(X̄) = 35.72, LCL(X̄) = 34.18.
UCL(R) = 4.58, CL(R) = 2.01, LCL(R) = 0.
The process is not-in-control. There is an upward drift in the process.
4.4 UCL(X̄) = 37.19, CL(X̄) = 35.72, LCL(X̄) = 34.25.
UCL(S) = 2.04, CL(S) = 0.9, LCL(S) = 0.
We see the same phenomenon we saw in the X̄- and R-charts for the same data. The average weight of the packages increases from the beginning to the end of the period.
4.5 UCL(P) = 0.147, CL(P) = 0.04, LCL(P) = 0.
The process is in-control, but there are on average 4% defective bottles.
Steps must be taken to reduce the average level of defectives.
4.7 We will use a C-chart.
UCL(C) = 13.3, CL(C) = 5.95, LCL(C) = 0.
The process is not-in-control. Some days the errors are too many. There
seems to be an opportunity for controlling the process at a consistent level.
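The 4.7 limits come directly from the Poisson-based C-chart formulas; a minimal sketch:

# Sketch: c-chart limits for Exercise 4.7 (errors per day), using
# c-bar +/- 3*sqrt(c-bar), with a negative lower limit set to zero.
import math

cbar = 5.95
ucl_c = cbar + 3 * math.sqrt(cbar)            # ~13.3
lcl_c = max(0.0, cbar - 3 * math.sqrt(cbar))  # 0
print(round(ucl_c, 1), lcl_c)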
4.9 A P-chart with varying sample size would be appropriate; p̄ = 0.033.
The process is not-in-control. There are hours when the defective rate is low and hours when it is high. An investigation into the reasons is called for.
4.10 We will use a U-chart since we are tracking the number of occurrences of
defects per house and the house size is changing. As an example, the limits
for the 23rd value of u are:
UCL(u23) = 0.393, CL(u23) = 0.108, LCL(u23) = 0.
There are two houses where the average number of defects per room was above the limits. Both houses were cleaned by Crew “C.”
On the capability of the process: The cleaning business does not have the
capability to deliver what they promised. They should either improve their
capability or revise their guarantee.
4.11 The trial control limits:
UCL(X̄) = 25.19, CL(X̄) = 24.61, LCL(X̄) = 24.03
UCL(R) = 1.80, CL(R) = 0.79, LCL(R) = 0
After going through a process of eliminating plots outside limits and
recalculating limits with remaining data, we get to a stable process with:
X̿ = 24.44, R̄ = 0.6769
Cp = 1.01, Cpk = 0.44
The Cpk is much smaller than the Cp, indicating that the process is quite off-center.
4.13 Repeatability error: σe = 0.0315. Reproducibility error: σ0 = 0.0492.
Gage error: σg = 0.0584. Overall standard deviation: σall = 0.1837.
The variability in the product: σp = 0.1742. σg/σp = 0.33. Not very good. This
ratio should be less than 10%. The precision to tolerance ratio (PT Ratio):
6σg/(USL − LSL) = 0.7.
This is also not good. The PT ratio should be less than 10%. The gage
has too much variability both from the instrument as well as from the
operators.
To check the resolution and variability of the instrument using control charts, we draw X̄- and R-charts for the data from Operator 1 and Operator 2.
The X̄-charts from both operators show that the instrument variability is smaller than the variability in the product, which is a different conclusion from the quantitative analysis performed above. The quantitative analysis is more dependable.
The R-charts from both operators show that the resolution of the instru-
ment is adequate.
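The gage figures in 4.13 combine as shown below; a minimal sketch in which the tolerance (USL − LSL = 0.5) is an assumed value chosen only to be consistent with the PT ratio quoted:

# Sketch: combining repeatability and reproducibility into the gage metrics
# of Exercise 4.13. The tolerance of 0.5 is an assumption, not a value taken
# from the exercise.
import math

sigma_e, sigma_o = 0.0315, 0.0492    # repeatability, reproducibility
sigma_all = 0.1837                   # overall std. dev. of the measurements
tolerance = 0.5                      # assumed USL - LSL

sigma_g = math.sqrt(sigma_e**2 + sigma_o**2)    # gage error, ~0.0584
sigma_p = math.sqrt(sigma_all**2 - sigma_g**2)  # product variability, ~0.1742
print(round(sigma_g / sigma_p, 3))              # ~0.335, the 0.33 quoted above
print(round(6 * sigma_g / tolerance, 2))        # PT ratio, ~0.70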
CHAPTER 5
5.1 a. P(X > 3.24) = 0.0082
b. The answer will be approximately the same because, according to the central limit theorem, X̄9 will have approximately the same normal distribution that was used to calculate the probability in Part (a).
5.3 k 0.5 1.0 1.5 2.0
β 0.9332 0.5 0.0668 0.00135
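The β values in 5.3 can be reproduced from the normal distribution. A minimal sketch, assuming an X̄ chart with 3-sigma limits and subgroups of n = 9 (an assumption consistent with the tabled values), where k is the shift in the process mean in units of σ:

# Sketch: probability (beta) of not detecting a mean shift of k*sigma on the
# first subgroup after the shift, for an X-bar chart with 3-sigma limits.
# Assumes subgroups of size n = 9.
from math import sqrt
from statistics import NormalDist

n, Z = 9, NormalDist()
for k in (0.5, 1.0, 1.5, 2.0):
    beta = Z.cdf(3 - k * sqrt(n)) - Z.cdf(-3 - k * sqrt(n))
    print(k, round(beta, 4))
# 0.5 -> 0.9332, 1.0 -> 0.5, 1.5 -> 0.0668, 2.0 -> 0.0013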
CHAPTER 7
7.3 p 0.01 0.02 0.05 0.08 0.10 0.15 0.20
Pa(p) 0.878 0.662 0.199 0.05 0.017 0.003 0.00
7.7 n = 45, c = 3
7.9 p 0.01 0.03 0.05 0.07 0.10 0.15
Pa(p) 0.998 0.956 0.840 0.728 0.428 0.157
7.11 p 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.12
ASN 23.6 26.4 28.6 30.0 31.0 31.6 31.7 31.6 31.3 30.8 29.6
7.17 p 0.01 0.02 0.04 0.05 0.06 0.08 0.10 0.12 0.14 0.16
AOQ 0.01 0.02 0.035 0.035 0.031 0.018 0.011 0.002 0.001 0.000
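Operating-characteristic and AOQ values such as those above can be computed from the binomial distribution. A minimal sketch using the n = 45, c = 3 plan from Exercise 7.7 (the plans behind the other tables may differ), with the common approximation AOQ ≈ p·Pa(p):

# Sketch: Pa(p) and approximate AOQ(p) for a single-sampling plan,
# illustrated with the n = 45, c = 3 plan of Exercise 7.7.
from scipy.stats import binom

n, c = 45, 3
for p in (0.01, 0.03, 0.05, 0.10, 0.15):
    pa = binom.cdf(c, n, p)   # probability of accepting the lot
    aoq = p * pa              # average outgoing quality (approximation)
    print(f"p={p:.2f}  Pa={pa:.3f}  AOQ={aoq:.4f}")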
CHAPTER 8
8.3 a. The results of regression show that the “pour temp” has a significant
impact on “porosity.”
b. Pour temp and dip viscosity have significant impact on porosity, but pour
time does not.
8.5 Pearson correlation coefficient of tap-temp and pour-temp = 0.616; P-value = 0.002.
There is a significant correlation between tap-temp and pour-temp.
Index
Operational effectiveness, 530
Operations, 529–530
Organizations, 376–379
  culture, 378–379
  defined, 376
  structure, 376–378
Organizing, principle of management, 398
Out of the Crisis, 16, 501
Output, attribute, process capability with, 276

P
Packaging standards, 206
Page, Harold, 4
Parameter design, 166–168
Pareto analysis, 463–464, 514
Pareto principle, 508, 514
Patterns in control charts, 238
P-charts
  limits for, 304–305
  for many characteristics, 261
  meaning of LCL on, 261
  OC curve of, 312, 313–314
  overview, 245–248
  SPC program, implementing, 267
  with varying sample sizes, 252–255
    examples, 253–255, 256f, 257t
    overview, 252–253
Penalty for not meeting schedules, cost, 20
Perceived quality, 9
Percent defective chart (100P-chart), 258
Perfectionism, 23
Performance, 9
  excellence, 518
Permutations, 79
  theorem on number of permutations, 79–80
Perseverance, 394
Personal behavior, 406
Peterson, Don, 379
PFC, see Process flow chart (PFC)
Pittsburgh Health System Veteran Affairs, 7
Pittsburgh Regional Health Initiative (PRHI), 7
Plan-do-check-act (PDCA) cycle, 392, 501
Planning
  principle of management, 398
  for quality, 18; see also Product planning
  product creation cycle, 141–142, 142f
    tools, 142
  strategic, for quality, 399–402
    deployment, 401–402
    history, 399
    making, 399–401
  for training, 386–388
Point estimate, 113
Poisson distribution, 97–98, 418, 445
  mean and variance of, 97–98
Poisson law, 267
Poisson probabilities, 427
Poke-yoke, 487
Polaroid, 4
Poor quality cost, 28; see also Failure cost
Populations
  defined, 47
  empirical methods for
    box-and-whisker (B&W) plot, 59–60, 60f
    exercise in, 62–63
    frequency distribution, 49–55
    graphical methods, 57–60, 61
    location measures, 60
    measures of dispersion, 61
    numerical measures, 60–61, 62
    numerical methods, 56–57, 56f
    stem-and-leaf (S&L) diagram, 57–59, 58f
  inference methods, 111–112
    confidence intervals, 113–118
    definitions, 112–113
    exercise in, 133–135
    hypothesis testing, 118–126
    mini-projects, 135–139
    P-value, 131–133, 132f
    tests for normality, 126–131
  mathematical models, 64
    probability, 64–85
    probability distributions, 85–111
  variability in, 45–46
Poster, eliminating, 505–506
Power, of test, 118
Precision
  instrument, MSA, 284–288
  of instrument, MAS, 277–278
  appraisal cost, 19
  external failure cost, 20
  internal failure cost, 19–20
  prevention cost, 18–19
  relationship among, 29–30
  “cost of quality” (COQ) program, 27–28, 27t
  equipment, 29
  indirect, 28
  intangible, 28–29
  liability costs, 29
  life-cycle, 29
  mini-projects, 40–43
  “preliminary study,” 31–37
  scoreboard, 26–28, 26f, 27t
  study of, 17
    approval from upper management, 20–21
    conducting, steps, 20–25
    data analysis, 22–25
    data collection, 21–22
    organize for, 21–22
    projects arising from, 25
  success stories, 37
  TQC and, 28–29
  vendor, 28
Quality council, 513
Quality culture, characteristics, 379
Quality engineering
  defined, 7–8
  note about, 7–8
Quality evaluation, 406
Quality function deployment (QFD), 141, 147, 148–152, 149f
  competitor, as benchmark, 152
  customer requirements and design features, 151–152
  design features, prioritizing, 152
Quality improvement, 15–16, 415
Quality information system, 18
Quality in procurement, 405–446
Quality management system (QMS), 533
Quality planning, 18
Quality-related activities, 11
Quality revolution, 4, 5–6
  major events, 6t
Quality system, 2
Quantity control, 488–489

R
Random experiments, 64–65
Randomization, 171
Random sample, 47
Random variables, 85–87
  continuous, 86
  discrete, 86
  identically distributed random variables (I.I.Ds.), 112
Range space, 86
Rational subgrouping, 232–233, 262–263
R-chart, 220–239
  as acceptance tool, 228
  case studies, 232–233
  control vs. capability, 238
  determining sample size, 230
  examples, 221, 222f, 225–226
  factors for calculating limits for variable, 220t
  false alarm, 229–230
  frequency of sampling, 231–232
  improving sensitivity, 234
  increasing sample size, 236
  limits for, 303–304
  maintain process at current level, 227
  Minitab software, 223, 226
  objective of using, 224
  OC curve of, 309–310
  patterns in, 238
  preparing check sheets, 229
  preparing instruments, 228–229
  process at given target/nominal value, 227
  rational subgrouping, 232–233
  resolution in instrument, 281–282
  runs, use, 237
  sample size changes for, 234, 235t, 236f
  selecting variable for charting, 228
  3-sigma rule, 230–231
  standards for µ and/or σ, 316–320
    µ and σ given, 317–320
    µ given, σ not given, 317
    overview, 316–317
  troubleshooting tool, 227–228
  uses of, 227–228
  warning limits, use, 236
Reagan, Ronald, 5