Solutions Manual
Introduction to Quality
Exercises
1.1. Why is it difficult to define quality?
Even the American Society for Quality describes “quality” as a subjective term for which each person or
sector has its own definition. Given a large set of customers considering purchasing the same product or
service, it is likely that each customer evaluates it in terms of a completely different set of desirable
characteristics. As a result, it is extremely difficult to come up with a single definition of quality that could
meet the expectations of all customers for all products or services. (For further details refer to page 2)
1.2. Briefly discuss the eight dimensions of quality. Does this improve our understanding of quality?
1.3. Select a specific product or service and discuss how the eight dimensions of quality impact its overall
acceptance by consumers.
1.4. Is there a difference between quality for a manufactured product and quality for a service? Give some
specific examples.
The service industry has some additional dimensions of quality that can be used to evaluate the quality of a
service. These dimensions include professionalism and attentiveness of the service providers. In terms of
the aesthetics, for a manufactured product, this would be related to the visual appeal of the product. In the
service sector, this is the physical appearance of the facility (For further details refer to page 4).
1.5. Can an understanding of the multidimensional nature of quality lead to improved product design or better
service?
Understanding the multidimensional nature of quality can help with improved product design. By focusing
on several (or all) dimensions of quality, improvements to a product’s design or service can be made. In
addition, recognizing that quality is inversely proportional to variability will also lead to improved product
designs. Reducing the variability of critical features identified from the dimensions of quality will lead to
improved product design or better service.
1.6. What are the internal customers of a business? Why are they important from a quality perspective?
The internal customers of a business are those within the company that require products or services from
other departments in the company. Internal customers could include managers or other employees.
1.7. What are the three primary technical tools used for quality control and improvement?
The three primary statistical tools are statistical process control, design of experiments, and
acceptance sampling.
1.8. What are quality costs?
Quality costs are the categories of costs that are associated with producing, identifying, avoiding, or
repairing products that do not meet requirements. These categories include prevention costs, appraisal
costs, internal failure costs, and external failure costs.
1.9. Are internal failure costs more or less important than external failure costs?
External costs can be expensive as a result of defective products being sent to customers. These costs
include the costs of returned products/materials, warranty charges, liability costs, and the indirect costs of
business reputation or loss of future business. External costs can be eliminated if all units of product
conform to requirements, i.e. if the defects are discovered prior to delivery of the product to the customer
(internal failure).
1.10. Discuss the statement “Quality is the responsibility of the quality department.”
The responsibility for quality spans the entire organization. Quality improvement must be a total,
company-wide activity in which every organizational unit actively participates. Because quality
improvement activities are so broad, successful efforts require, as an initial step, senior management
commitment. This commitment involves emphasis on the importance of quality, identification of the
respective quality responsibilities of the various organizational units, and explicit accountability for quality
improvement of all managers and employees in the company. Nevertheless, the quality department takes
care of the quality planning and analyses in order to make sure all quality efforts are being successfully
implemented throughout the organization. (For further details refer to pages 26-27)
1.11. Most of the quality management literature states that without top management leadership, quality
improvement will not occur. Do you agree or disagree with this statement? Discuss why.
Top management provides the strategic agenda and goals of quality improvement in a company.
Management can strategically define the quality improvement projects that will most benefit the company.
In addition, they can spread the knowledge and culture of quality throughout the company (For further
details refer to page 26-27).
1.12. Explain why it is necessary to consider variability around the mean or nominal dimension as a measure of
quality.
The primary objective of quality engineering efforts is the systematic reduction of variability in the key
quality characteristics of the product. The introduction of statistical process control will help to stabilize
processes and reduce their variability. However, it is not satisfactory just to meet requirements – further
reduction of variability around the mean or nominal dimension often also leads to better product
performance and enhanced competitive position. (For further details refer to page 17)
1.13. Suppose you had the opportunity to improve quality in a hospital. Which areas of the hospital would you
look to as opportunities for quality improvement? What metrics would you use as measures of quality?
Identifying areas for quality improvement (in any company) should be made with the company’s strategic
goals in mind. You want to make the improvements that will have the biggest impact on the company
(hospital). In a hospital setting, some potential areas of opportunities for quality improvement include clinic
wait times, lab processing times, length of stay in the hospital, surgical outcomes, medication delivery, etc.
1.14. Suppose you had to improve service quality in a bank credit card application and approval process. What
critical-to-quality characteristics would you identify? How could you go about improving this system?
Some possible CTQs could be the cycle time for the loan to be approved. You might start an improvement
to the system by identifying potential causes of long cycle times, determine which causes are the most
harmful to the system and then try to eliminate or reduce them.
a) Health-care facility
Some possible dimensions of quality that may be of interest in a health-care facility:
Performance: Is the appropriate treatment provided?
Reliability: Is the treatment effective?
Durability: Is the treatment done correctly the first time?
Aesthetics: Is the facility clean?
Features: Is the facility able to handle different types of health-related issues?
Perceived Quality: Does the facility have a good reputation?
Responsiveness: Does a patient have to wait a long time for treatment? Is the staff prompt?
Professionalism: Are the nurses/doctors/other staff knowledgeable?
Is the facility affordable?
b) Department store
Some possible dimensions of quality that may be of interest in a department store:
Performance: Does the store have what you need?
Reliability: Does the store consistently have what you need? Are the products in good condition?
Serviceability: Is it easy and/or possible to make returns or exchanges?
Aesthetics: Is the store clean? Does it have an easy to understand layout?
Features: Does the store have specialized products?
Perceived Quality: Does the store have a good reputation?
Responsiveness: Are the staff quick to help find an item? Is the staff helpful?
Professionalism: Do the staff know where items in the store are?
c) Grocery store
Some possible dimensions of quality that may be of interest in a grocery store:
Performance: Does the store have what you need?
Reliability: Does the store consistently have what you need? Are the products in good condition?
Durability: Are there items past the expiration date in the store?
Aesthetics: Is the store clean? Is the layout understandable/logical?
Features: Does the store have specialized products?
Perceived Quality: Does the store have a good reputation?
Responsiveness: Are the staff quick to help find an item? Is the staff helpful?
Professionalism: Do the staff know where items in the store are?
CHAPTER 2
Exercises
Deming’s philosophy is more focused on statistics than Juran’s. A significant component of his philosophy is
statistical techniques for reducing variability. Juran takes a more strategic approach and believed most
quality problems result from ineffective planning for quality. (For further details refer to pages 32-37)
2.2. What is the Juran Trilogy?
The Juran Trilogy is the three components of quality management philosophy: planning, control, and
improvement. The planning process involves identifying external customers, determining their needs,
designing products that meet these needs, and planning for quality improvement on a regular basis. Control
is used by the operating forces of the business to ensure that the product meets the customers’
requirements. Improvement aims to achieve performance and quality levels higher than the current levels.
2.3. What is the Malcolm Baldrige National Quality Award? Who is eligible for the award?
The award is an annual award administered by NIST to recognize U.S. organizations for performance
excellence. Awards are given to U.S. organizations in five categories: manufacturing, service, small
business, health care, and education.
Juran’s philosophy of quality focused on a strategic approach to quality management and improvement.
One of Deming’s key philosophies is to reduce variability. Both emphasize planning in quality
improvement: plan, control, and improve are both themes in Deming and Juran’s philosophies. (For further
details refer to pages 32-37)
The prestige of winning the award, as only three awards are given each year in each of the five categories,
is one motivating factor. In addition, improving the company’s performance leads to improved customer
satisfaction, improved product quality, higher productivity, etc. All of this leads to overall improved
business for the company.
2.7. Hundreds of companies have won the MBNQA. Collect information on two of the winners. What success
have they had since receiving the awards?
a. What results do you obtain if the probability of good quality on each meal component was 0.999?
P(single meal good) = (0.999)^10 = 0.990045
P(all meals good) = (0.990045)^4 = 0.96077
P(all visits during the year good) = (0.96077)^12 = 0.6186
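As a quick check, the chained probabilities can be computed directly. A minimal Python sketch, using the 10 meal components, 4 meals per visit, and 12 visits per year from the exercise setup:

```python
# Probability the whole year of restaurant visits is "good" when each
# meal component is good with probability 0.999.
p_component = 0.999
p_meal = p_component ** 10   # one meal: all 10 components good
p_visit = p_meal ** 4        # one visit: all 4 meals good
p_year = p_visit ** 12       # all 12 visits during the year good
print(p_meal, p_visit, p_year)  # 0.990045, 0.96077, 0.6186
```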
b. What level of quality on each meal component would be required to produce an annual level of quality that
was acceptable to you?
2.9. Discuss the similarities between the Shewhart cycle and DMAIC.
Both are cyclic plans/strategies for improvement of a process. There are phases dedicated to planning and
defining the current process. Both are team-oriented aimed at testing, implementing, and maintaining
improvements (For further details refer to pages 35, 55-65).
2.10. Describe a service system that you use. What are the CTQs that are important to you? How do you think
that DMAIC could be applied to this process?
Consider eating out at a restaurant. Potential CTQs are time until seated, time for waiter to come to the
table, time until food is served, the taste of the food, the temperature of the food, menu options, friendly
staff, etc. The Define phase would include project definition, goals, and scope of the project. Flow charts
and value stream maps will help define the current process. In the Measure step, data must be collected to
understand the current state. Food temperature or time until guests are served could be recorded, for
example. Guest surveys could help determine current customer opinions on menu options, etc. The Analyze
phase will aim to identify causes of long wait times or improper food temperature. This could be done with
cause-and-effect diagrams and hypothesis testing, for example. In the Improve step, the project team will
propose changes that can be made to improve long wait times. This could include a redesign of the tables
that improves workflow, reassigning waitstaff, introducing time trackers, etc. After a pilot study is done to
test if the improvements are successful, then the control step will consist of maintaining the improvements.
This may include training new waitstaff and monitoring times.
2.11. One of the objectives of the control plan in DMAIC is to “hold the gain.” What does this mean?
Once a project is completed, the improved process is handed off to the process owner. To hold the gain
means to ensure that the gains from the project stay in place and become a part of the daily routine. Once
an improvement has been achieved, you do not want to let it drop back to the level it was at previously.
2.12. Is there a point at which seeing further improvement in quality and productivity isn’t economically
advisable? Discuss your answer.
If the cost to improve the quality of the process/product outweighs the gains of the improvements and the
process is operating at an acceptable quality level, then it is probably not worth the expense to continue
making improvements.
2.13. Explain the importance of tollgates in the DMAIC process.
At a tollgate, a project team presents its work to managers and “owners” of the
process. In a six-sigma organization, the tollgate participants also would include the
project champion, master black belts, and other black belts not working directly on
the project. Tollgates are where the project is reviewed to ensure that it is on track
and they provide a continuing opportunity to evaluate whether the team can
successfully complete the project on schedule. Tollgates also present an opportunity
to provide guidance regarding the use of specific technical tools and other
information about the problem. Organizational problems and other barriers to success, and strategies for
dealing with them, are also often identified during tollgate reviews. Tollgates are critical to the overall
problem-solving process; it is important that these reviews be conducted very soon after the team
completes each step.
2.14. An important part of a project is to identify the key process input variables (KPIV) and key process output
variables (KPOV). Suppose that you are the owner/manager of a small business that provides mailboxes,
copy services, and mailing services. Discuss the KPIVs and KPOVs for this business. How do they relate to
possible CTQs?
Potential KPIVs include the number of current customers, types of customers, number of employees,
current services offered, and cost of services. Potential KPOVs include store revenue and customer wait
times. Customer CTQs may include wait times and the price of services. The number of customers, types
of customers, and services offered affect the price of services. The number of employees and the services
currently offered impact customer wait times.
2.15. An important part of a project is to identify the key process input variables (KPIV) and key process output
variables (KPOV). Suppose that you are in charge of a hospital emergency room. Discuss the KPIVs and
KPOVs for this business. How do they relate to possible customer CTQs?
Potential KPIVs include number of patients, types of patients, number of nurses on shift, number of doctors
on shift, day of the week, time of day, etc. Potential KPOVs include waiting time until treatment, time
to process laboratory results, time until admission, percentage of patients who leave without being seen,
percentage of patients who return for treatment, etc. Critical-to-quality characteristics may include wait
times and treatment received.
2.16. Why are designed experiments most useful in the improve step of DMAIC?
Designed experiments can be applied to an actual physical process or to a computer simulation model of
that process. They can be used to determine which factors influence the outcome of a process and to find
the optimal combination of factor settings. This is exactly what is needed in the Improve step, where the
team must identify, test, and optimize the changes that will improve the process.
2.17. Suppose that your business is operating at the three-sigma quality level. If projects have an average
improvement rate of 50% annually, how many years will it take to achieve six-sigma quality?
Assuming the 1.5σ shift in the mean that is customary with six-sigma applications, three-sigma quality
corresponds to 66,810 ppm defective, while six-sigma quality corresponds to 3.4 ppm. With a 50% annual
improvement rate, the number of years x required satisfies 66810(1 − 0.5)^x = 3.4, so
x = ln(3.4/66810)/ln(1 − 0.5) = 14.26
It will take the business about 14 years and 3 months to achieve 6 σ quality.
2.18. Suppose that your business is operating at the four-and-a-half-sigma quality level. If
projects have an average improvement rate of 50% annually, how many years will it
take to achieve six-sigma quality?
Assuming the 1.5σ shift in the mean that is customary with six-sigma applications, four-and-a-half-sigma
quality corresponds to 1,350 ppm defective. With a 50% annual improvement rate,
x = ln(3.4/1350)/ln(1 − 0.5) = 8.6332
It will take the business about 8 years and 8 months to achieve 6σ quality.
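Since Exercises 2.17 and 2.18 use the same calculation, it can be wrapped in a small function. A minimal Python sketch; the ppm figures (66,810 at 3σ, 1,350 at 4.5σ, and 3.4 at 6σ) are the customary 1.5σ-shifted values used above:

```python
import math

# Years needed to reach a target ppm level from a starting ppm level,
# given a constant annual improvement rate (default 50% per year).
def years_to_reach(ppm_start, ppm_target=3.4, rate=0.5):
    return math.log(ppm_target / ppm_start) / math.log(1 - rate)

print(years_to_reach(66810))  # ~14.26 years (from 3-sigma quality)
print(years_to_reach(1350))   # ~8.63 years (from 4.5-sigma quality)
```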
2.19. Explain why it is important to separate sources of variability into special or assignable causes and common
or chance causes.
Common or chance causes are due to the inherent variability in the system and cannot generally be
controlled. Special or assignable causes can be discovered and removed, thus reducing the variability in
the overall system. It is important to distinguish between the two types of variability, because the strategy
to reduce variability depends on the source. Chance cause variability can only be removed by changing the
system, while assignable cause variability can be addressed by finding and eliminating the assignable
causes.
2.20. Consider improving service quality in a restaurant. What are the KPIVs and KPOVs that you should
consider? How do these relate to likely customer CTQs?
Potential KPIVs include number of customers, number of servers on staff, number of cooks on staff,
available food, cost of food, etc. Potential KPOVs include temperature of the food, taste of the food, wait
time until greeted, wait time until food is served, restaurant revenue, etc.
2.21. Suppose that during the analyze phase an obvious solution is discovered. Should that solution be
immediately implemented and the remaining steps of DMAIC abandoned? Discuss your answer.
The answer is generally NO. The advantage of completing the rest of the DMAIC process is that the
solution will be documented, tested, and its applicability to other parts of the business will be evaluated.
An immediate implementation of an “obvious” solution may not lead to an appropriate control plan.
Completing the rest of the DMAIC process can also lead to further refinements and improvements to the
solution. Also the transition plan developed in the control phase includes a validation check several months
after project completion. If the DMAIC process is not completed, there is a danger of the original results
not being sustained.
2.22. What information would you have to collect in order to build a discrete-event simulation model of a retail
branch-banking operation? Discuss how this model could be used to determine appropriate staffing levels
for the bank.
You would need to know the distribution of customer arrivals to the bank. This may be dependent on time
of the day and/or day of the week. You would also need information on the time it takes for an employee to
serve a particular customer. You may need to know if there are different types of customers entering the
bank and if the time it takes to serve them differs. For comparison, you would want to know how the bank
currently operates: how long customers typically wait to be served and how many employees are on staff.
The model could then be used to evaluate staffing levels that potentially reduce the wait time of customers
while ensuring that employees do not have too much downtime.
2.23. Suppose that you manage an airline reservation system and want to improve service quality. What are the
important CTQs for this process? What are the KPIVs and KPOVs? How do these relate to the customer
CTQs that you have identified?
Potential critical to quality characteristics for an airline reservation system are ease of use of the system,
number of flight options, time to process a reservation, and flight costs. Potential KPIVs include the flight
information entered by the user. Potential KPOVs include the time to display all possible flight options and
time to complete the reservation.
2.24. It has been estimated that safe aircraft carrier landings operate at about the 5σ level. What level of ppm
defective does this imply?
Assuming the 1.5σ shift in the mean that is customary with six-sigma applications [X′ ~ N(μ′ = 1.5, σ = 1)],
the fraction outside the ±5σ limits is P(Z > 5 − 1.5) + P(Z < −5 − 1.5) ≈ P(Z > 3.5) = 0.000233, or about
233 ppm defective.
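The same figure can be computed directly from the normal distribution rather than from a table. A minimal Python sketch, assuming SciPy is available:

```python
from scipy import stats  # assumed available

# ppm defective at the k-sigma quality level with the customary
# 1.5-sigma shift of the mean: the fraction falling outside +/- k
# when the mean sits at 1.5 (in sigma units).
def ppm_defective(k, shift=1.5):
    frac = stats.norm.sf(k - shift) + stats.norm.cdf(-k - shift)
    return 1e6 * frac

print(ppm_defective(5))  # ~233 ppm
print(ppm_defective(6))  # ~3.4 ppm
```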
2.25. Discuss why, in general, determining what to measure and how to make measurements is more difficult in
service processes and transactional businesses than in manufacturing.
Measurement systems and data on system performance often already exist in manufacturing, and data
collection is often automated. In service and transactional businesses, the need for data may not be as obvious, so there
may not be historical data or measurement systems already in place.
2.26. Suppose that you want to improve the process of loading passengers onto an airplane. Would a discrete-
event simulation model of this process be useful? What data would have to be collected to build this
model?
Testing different plans for loading passengers onto an airplane would be difficult to implement in practice.
A discrete-event simulation would be useful for comparing different boarding processes before running a
pilot study on the process that yields the most improvement. Information on
the current process will be needed in order to compare any potential changes to the process. This would
include current boarding procedure and distribution of time for passengers to board. In the simulations for
the potential improvements to the boarding process, estimates on the improvements of boarding times and
other changes would also be necessary.
CHAPTER 3
Exercises
3.1. What are chance and assignable causes of variability? What part do they play in the operation and
interpretation of a Shewhart control chart?
Chance cause variability is the natural variability or “background noise” present in a system. This type of
variability is essentially unavoidable. An assignable cause of variability is an identifiable source of
variation in a process and is not a part of the normal variability of the process. For example, an assignable
cause could be an operator error or defective materials. A process that only has chance causes is said to be
in statistical control. Assignable causes will be flagged in a control chart so that the process can be
investigated.
3.2. Discuss the relationship between a control chart and statistical hypothesis testing.
A control chart is a test of the hypothesis that the process is in a state of statistical control. A point plotting
outside the control limits is equivalent to rejecting the hypothesis that the process is in a state of statistical
control. A point plotting within the control limits is equivalent to failing to reject the hypothesis that the
process is in a state of statistical control.
3.3. What is meant by the statement that a process is in a state of statistical control?
A process can have natural or common cause variation. This type of variation is usually unavoidable. A
system that only has common cause variability is considered to be in a state of statistical control. When
other types of variability outside of the natural variability of the system occur, the process is no longer in
statistical control.
3.4. If a process is in a state of statistical control, does it necessarily follow that all or nearly all of the units of
product produced will be within the specification limits?
No; a process can be in a state of statistical control yet still produce units outside the specification limits,
because the control limits reflect the natural variability of the process, not the specifications. In this case, a
redesign of or change to the process may be necessary in order for the process to operate within the
specification limits.
3.5. Discuss the logic underlying the use of three-sigma limits on Shewhart control charts. How will the chart
respond if narrower limits are chosen? How will it respond if wider limits are chosen?
The use of three-sigma limits is common because this range contains 99.7% of the values of a normally
distributed statistic. Narrower limits will detect out-of-control points more quickly but will increase the
Type I error, the probability of a false alarm. Wider limits will decrease the Type I error but will lead to
more false negatives, as out-of-control behavior may not be detected by the wider limits.
3.6. Consider the control chart shown here. Does the pattern appear random?
No; the last four samples are located at a distance greater than one sigma from the center line.
3.7. Consider the control charts shown here. Does the pattern appear random?
3.8. Consider the control charts shown here. Does the pattern appear random?
Yes, the pattern is random.
3.9. You consistently arrive at your office about one-half hour later than you would like. Develop a cause-and-
effect diagram that identifies and outlines the possible causes of this event.
(Cause-and-effect diagram with major branches such as Machines and Personnel.)
3.10. A car has gone out of control during a snowstorm and strikes a tree. Construct a cause-and-effect diagram
that identifies and outlines the possible causes of the accident.
(Cause-and-effect diagram for a car crash in a snowstorm, with causes such as user error, driving too fast,
another driver, being on a cell phone/distracted, a pedestrian/object in the road, and ice on the road,
organized under branches such as Personnel, Environment, and Methods.)
3.12. Construct a cause-and-effect diagram that identifies the possible causes of consistently bad coffee from a
large-capacity office coffee pot.
(Cause-and-effect diagram with major branches such as Measurements and Materials.)
3.13. Develop a flow chart for the process that you follow every morning from the time you awake until you
arrive at your workplace (or school). Identify the value-added and non-value-added activities.
3.14. Develop a flow chart for the pre-registration process at your university. Identify the value-added and non-
value-added activities.
3.15. Many process improvement tools can be used in our personal lives. Develop a check sheet to record
“defects” you have in your personal life (such as overeating, being rude, not meeting commitments, missing
classes, etc.). Use the check sheet to keep a record of these “defects” for one month. Use a Pareto chart to
analyze these data. What are the underlying causes of these “defects”?
CHAPTER 4
Exercises
4.1. The fill amount of liquid detergent bottles is being analyzed. Twelve bottles, randomly selected from the
process, are measured, and the results are as follows (in fluid ounces): 16.05, 16.03, 16.02, 16.04, 16.05,
16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07.
a. Calculate the sample average.
x̄ = ∑xᵢ/n = (16.05 + ⋯ + 16.07)/12 = 16.0292 fluid ounces
b. Calculate the sample standard deviation.
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((16.05 − 16.0292)² + ⋯ + (16.07 − 16.0292)²)/11) = 0.0202 fluid ounces
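These statistics are easy to verify numerically. A minimal Python sketch using NumPy (ddof=1 gives the n − 1 denominator):

```python
import numpy as np

# Fill volumes from Exercise 4.1 (fluid ounces).
x = np.array([16.05, 16.03, 16.02, 16.04, 16.05, 16.01,
              16.02, 16.02, 16.03, 16.01, 16.00, 16.07])
print(x.mean())       # 16.0292
print(x.std(ddof=1))  # 0.0202
```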
4.2. Monthly sales for tissues in the northwest region are (in thousands) 50.001, 50.002, 49.998, 50.006, 50.005,
49.996, 50.003, 50.004.
a. Calculate the sample average.
x̄ = ∑xᵢ/n = (50.001 + ⋯ + 50.004)/8 = 50.002
b. Calculate the sample standard deviation.
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((50.001 − 50.002)² + ⋯ + (50.004 − 50.002)²)/7) = 0.0034
4.3. Waiting times for customers in an airline reservation system are (in seconds) 953, 955, 948, 951, 957, 949,
954, 950, 959.
a. Calculate the sample average.
x̄ = ∑xᵢ/n = (953 + ⋯ + 959)/9 = 952.8889 seconds
b. Calculate the sample standard deviation.
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((953 − 952.8889)² + ⋯ + (959 − 952.8889)²)/8) = 3.7231 seconds
Data: 953, 955, 948, 951, 957, 949, 954, 950, 959
Ordered Data: 948, 949, 950, 951, 953, 954, 955, 957, 959
Since n = 9 is odd, the position of the median value is (n + 1)/2 = 5 in the ordered data set.
Median = Q2 = 953 seconds
b. How much could the largest time increase without changing the sample median?
Since the median is the middle value in the ordered data, the largest time could increase infinitely and the
sample median would not change.
4.5. The time to complete an order (in seconds) is as follows: 96, 102, 104, 108, 126, 128, 150, and 156.
a. Calculate the sample average.
x̄ = ∑xᵢ/n = (96 + ⋯ + 156)/8 = 121.25 seconds
b. Calculate the sample standard deviation.
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((96 − 121.25)² + ⋯ + (156 − 121.25)²)/7) = 22.6258 seconds
4.6. The time to failure in hours of an electronic component subjected to an accelerated life test is shown in
Table 4E.1. To accelerate the failure test, the units were tested at an elevated temperature (read down, then
across).
127 124 121 118
125 123 136 131
131 120 140 125
124 119 137 133
129 128 125 141
121 133 124 125
142 137 128 140
151 124 129 131
160 142 130 129
125 123 122 126
a. Calculate the sample average and standard deviation.
x̄ = ∑xᵢ/n = (127 + ⋯ + 126)/40 = 129.97 hours
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((127 − 129.97)² + ⋯ + (126 − 129.97)²)/39) = 8.91 hours
b. Construct a histogram.
(Histogram of failure time: x-axis, Failure Time from 120 to 160 hours; y-axis, Frequency.)
Stem-and-leaf of failureTime N = 40
Leaf Unit = 1.0
2 11 89
12 12 0112334444
(12) 12 555556788999
16 13 011133
10 13 677
7 14 00122
2 14
2 15 1
1 15
1 16 0
Since n = 40 is even, the median is the average of the (n/2)th and (n/2 + 1)th ranked observations,
i.e., the average of the 20th and 21st ranked observations.
Q2 = (128 + 128)/2 = 128 hours
The lower quartile is the observation with rank (0.25)(40) + 0.5 = 10.5.
Q1 = (124 + 124)/2 = 124 hours
The upper quartile is the observation with rank (0.75)(40) + 0.5 = 30.5.
Q3 = (133 + 136)/2 = 134.5 hours
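The rank convention used above (rank = p·n + 0.5, averaging the two neighboring order statistics when the rank is fractional) can be coded directly. A minimal Python sketch applied to the Table 4E.1 failure times:

```python
import numpy as np

# Quantile by the solution's rank convention: 1-based fractional rank
# p*n + 0.5, averaging the two bracketing order statistics.
def rank_quantile(sorted_x, p):
    rank = p * len(sorted_x) + 0.5
    lo, hi = int(rank) - 1, int(np.ceil(rank)) - 1  # 0-based indices
    return 0.5 * (sorted_x[lo] + sorted_x[hi])

x = np.sort([127, 124, 121, 118, 125, 123, 136, 131, 131, 120,
             140, 125, 124, 119, 137, 133, 129, 128, 125, 141,
             121, 133, 124, 125, 142, 137, 128, 140, 151, 124,
             129, 131, 160, 142, 130, 129, 125, 123, 122, 126])
print(rank_quantile(x, 0.25), rank_quantile(x, 0.50), rank_quantile(x, 0.75))
# 124.0 128.0 134.5
```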
4.7. An article in Quality Engineering (Vol.4, 1992, pp. 487-495) presents viscosity data from a batch chemical
process. A sample of these data is presented in Table 4E.2. (read down, then across).
12 68
13 3134
13 776978
14 3133101332423404
14 585669589889695
15 3324223422112232
15 568987666
16 140114
16 85996
17 0
Interval Frequency
[12.5,13.0) 2
[13.0,13.5) 4
[13.5,14.0) 6
[14.0,14.5) 16
[14.5,15.0) 15
[15.0,15.5) 16
[15.5,16.0) 9
[16.0,16.5) 6
[16.5,17.0) 5
[17.0,17.5) 1
(Histogram of viscosity: x-axis, viscosity from 13 to 17; y-axis, Frequency.)
c. Convert the stem-and-leaf plot in part (a) into an ordered stem-and-leaf plot. Use this graph to assist in
locating the median and the upper and lower quartiles of the viscosity data.
2 12 68
6 13 1334
12 13 677789
28 14 0011122333333444
(15) 14 555566688889999
37 15 1122222222333344
21 15 566667889
12 16 011144
6 16 56899
1 17 0
Since n = 80 is even, the median is the average of the (n/2)th and (n/2 + 1)th ranked observations,
i.e., the average of the 40th and 41st ranked observations.
Q2 = (14.9 + 14.9)/2 = 14.9 units
The lower quartile is the observation with rank (0.25)(80) + 0.5 = 20.5.
Q1 = (14.3 + 14.3)/2 = 14.3 units
The upper quartile is the observation with rank (0.75)(80) + 0.5 = 60.5.
Q3 = (15.5 + 15.6)/2 = 15.55 units
d. What are the ninetieth and tenth percentiles of viscosity?
The 90th percentile is the observation with rank (0.90)(80) + 0.5 = 72.5.
90th percentile = (16.1 + 16.4)/2 = 16.25 units
The 10th percentile is the observation with rank (0.10)(80) + 0.5 = 8.5.
10th percentile = (12.7 + 12.7)/2 = 12.7 units
4.8. Construct and interpret a normal probability plot of the volumes in Exercise 4.1.
Data: 16.05, 16.03, 16.02, 16.04, 16.05, 16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07
(Normal probability plot of volume: x-axis, volume from 15.950 to 16.100; y-axis, percent.)
4.9. Construct and interpret a normal probability plot of the waiting time measurements in Exercise 4.3.
Data: 953, 955, 948, 951, 957, 949, 954, 950, 959
(Normal probability plot of wait time: x-axis, Wait Time from 940 to 970 seconds; y-axis, Percent.)
4.10. Construct a normal probability plot of the failure time data in Exercise 4.6. Does the assumption that failure
time for this component is well modeled by a normal distribution seem reasonable?
(Normal probability plot of failure time: x-axis, FailureTime from 100 to 160 hours; y-axis, percent.)
There are outliers in both tails of the normal probability plot, and the points do not form a straight line.
This indicates that the data do not come from a normal distribution.
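A normal probability plot like the one described can also be generated programmatically. A minimal Python sketch, assuming SciPy and matplotlib are available:

```python
import matplotlib.pyplot as plt  # assumed available
from scipy import stats          # assumed available

# Normal probability plot of the Exercise 4.6 failure times.
x = [127, 124, 121, 118, 125, 123, 136, 131, 131, 120,
     140, 125, 124, 119, 137, 133, 129, 128, 125, 141,
     121, 133, 124, 125, 142, 137, 128, 140, 151, 124,
     129, 131, 160, 142, 130, 129, 125, 123, 122, 126]
stats.probplot(x, dist="norm", plot=plt)
plt.show()
```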
4.11. Construct a normal probability plot of the Viscosity data in Exercise 4.7. Does the assumption that process
yield is well modeled by a normal distribution seem reasonable?
13.3 14.3 14.9 15.2 15.8 14.2 16.0 14.0
14.5 16.1 13.7 15.2 13.7 16.9 14.9 14.4
15.3 13.1 15.2 15.9 15.1 14.9 13.6 13.7
15.3 15.5 14.5 16.5 13.4 15.2 15.3 13.8
14.3 12.6 15.3 14.8 14.1 14.4 14.3 15.6
14.8 14.6 15.6 15.1 14.8 15.2 15.6 14.5
15.2 14.3 15.8 17.0 14.3 14.6 16.1 12.8
14.5 15.4 13.3 14.9 14.3 16.4 13.9 16.1
14.6 15.2 14.1 14.8 16.4 14.2 15.2 16.6
14.1 16.8 15.4 14.0 16.9 15.7 14.4 15.6
(Normal probability plot of viscosity: x-axis, Viscosity from 11 to 19; y-axis, percent.)
4.12. Consider the yield data in Table 4E.3. Construct a time-series plot for these data. Interpret the plot.
94.1 87.3 94.1 92.4 84.6 85.4
93.2 84.1 92.1 90.6 83.6 86.6
90.6 90.1 96.4 89.1 85.4 91.7
91.4 95.2 88.2 88.8 89.7 87.5
88.2 86.1 86.4 86.4 87.6 84.2
86.1 94.3 85.0 85.1 85.1 85.1
95.1 93.2 84.9 84.0 89.6 90.5
90.0 86.7 87.3 93.7 90.0 95.6
92.4 83.0 89.6 87.7 90.1 88.3
87.3 95.3 90.3 90.6 94.3 84.1
86.6 94.1 93.1 89.4 97.3 83.7
91.2 97.8 94.6 88.6 96.8 82.9
86.1 93.1 96.3 84.1 94.4 87.3
90.4 86.4 94.7 82.6 96.1 86.4
89.1 87.6 91.1 83.1 98.0 84.5
(Time-series plot of Yield versus observation Index, 1 to 90; y-axis, Yield from 85 to 95.)
4.13. Consider the chemical process yield data in Exercise 4.12. Calculate the sample average and standard
deviation.
x̄ = ∑xᵢ/n = (94.1 + ⋯ + 84.5)/90 = 89.4756 units
s = √(∑(xᵢ − x̄)²/(n − 1)) = √(((94.1 − 89.4756)² + ⋯ + (84.5 − 89.4756)²)/89) = 4.1578 units
4.14. Consider the chemical process yield data in Exercise 4.12. Construct a stem-and-leaf plot and a histogram.
Which display provides more information about the process?
Stem-and-leaf of yield N = 90
Leaf Unit = 0.10
2 82 69
6 83 0167
14 84 01112569
20 85 011144
30 86 1114444667
38 87 33335667
43 88 22368
(6) 89 114667
41 90 0011345666
31 91 1247
27 92 144
24 93 11227
19 94 11133467
11 95 1236
7 96 1348
3 97 38
1 98 0
(Histogram of yield: x-axis, Yield from 84 to 98; y-axis, Frequency.)
The stem-and-leaf plot provides more information because it shows the individual data values. However,
the histogram provides a more concise visual summary of the data.
Data: 16.05, 16.03, 16.02, 16.04, 16.05, 16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07
4.15. Construct and interpret a box plot of the volumes in Exercise 4.1.
(Boxplot of Volume; values range from 16.00 to 16.07.)
4.16. Construct and interpret a box plot of the sales data in Exercise 4.2.
(Boxplot of Sales; values range from 49.9950 to 50.0075.)
4.17. Suppose that two fair dice are tossed and the sum of the dice is observed. Determine the probability
distribution of x , the sum of the dice.
Let x = d1 + d2 represent the sum of two dice. First, let us determine the possible values for x, and then
let us determine how likely it is to obtain each value. For example, x = 3 can occur when d1 = 1 and
d2 = 2 or when d1 = 2 and d2 = 1. Hence, there are two possible scenarios that lead to x = 3.
Scenario d1 d2 x   Scenario d1 d2 x
1 1 1 2 19 4 1 5
2 1 2 3 20 4 2 6
3 1 3 4 21 4 3 7
4 1 4 5 22 4 4 8
5 1 5 6 23 4 5 9
6 1 6 7 24 4 6 10
7 2 1 3 25 5 1 6
8 2 2 4 26 5 2 7
9 2 3 5 27 5 3 8
10 2 4 6 28 5 4 9
11 2 5 7 29 5 5 10
12 2 6 8 30 5 6 11
13 3 1 4 31 6 1 7
14 3 2 5 32 6 2 8
15 3 3 6 33 6 3 9
16 3 4 7 34 6 4 10
17 3 5 8 35 6 5 11
18 3 6 9 36 6 6 12
From the previous table, we can conclude x can assume integer values between 2 and 12. Now, let us
define the likelihood of each scenario by counting how many times each scenario occurs and dividing it by
the total number of possible scenarios (36).
x p(x )
2 1/36
3 2/36
4 3/36
5 4/36
6 5/36
7 6/36
8 5/36
9 4/36
10 3/36
11 2/36
12 1/36
Mean: E(x) = ∑ x·p(x) = 2(1/36) + ⋯ + 12(1/36) = 7
Variance: V(x) = ∑ x²·p(x) − [E(x)]² = (2²(1/36) + ⋯ + 12²(1/36)) − 7² = 5.8333
Standard deviation: √5.8333 = 2.4152
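The mean and variance can be checked by enumerating all 36 equally likely outcomes. A minimal Python sketch:

```python
from itertools import product

# Build the probability distribution of the sum of two fair dice by
# counting how often each sum occurs among the 36 outcomes.
counts = {}
for d1, d2 in product(range(1, 7), repeat=2):
    counts[d1 + d2] = counts.get(d1 + d2, 0) + 1

pmf = {s: c / 36 for s, c in sorted(counts.items())}
mean = sum(s * p for s, p in pmf.items())              # 7.0
var = sum(s**2 * p for s, p in pmf.items()) - mean**2  # 5.8333
print(pmf)
print(mean, var, var**0.5)
```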
Probability that a part fails to meet 35 lb specification (look up probability in a normal probability table):
4.20. The output voltage of a power supply is normally distributed with mean 5 V and standard deviation 0.02 V.
If the lower and upper specifications for voltage are 4.95 V and 5.05 V, respectively, what is the probability
that a power supply selected at random will conform to the specifications on voltage?
X ~ N(μ = 5, σ = 0.02)
Probability that a power supply will be within the specifications (look up probability in a normal
probability table):
P(LSL < X < USL) = P(4.95 < X < 5.05) = P(X < 5.05) − P(X < 4.95)
= P(Z < (5.05 − 5)/0.02) − P(Z < (4.95 − 5)/0.02)
= P(Z < 2.5) − P(Z < −2.5) = 0.99379 − 0.00621 = 0.98758
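The same conformance probability can be evaluated without a normal table. A minimal Python sketch, assuming SciPy is available:

```python
from scipy import stats  # assumed available

# P(4.95 < X < 5.05) for X ~ N(5, 0.02^2), as in Exercise 4.20.
mu, sigma = 5.0, 0.02
p_conform = stats.norm.cdf(5.05, mu, sigma) - stats.norm.cdf(4.95, mu, sigma)
print(p_conform)  # ~0.98758
```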
4.21. Continuation of Exercise 4.20. Reconsider the power supply manufacturing process in Exercise 4.20.
Suppose we wanted to improve the process. Can shifting the mean reduce the number of nonconforming
units produced? How much would the process variability need to be reduced in order to have all but one out
of 1,000 units conform to the specifications?
Shifting the mean will not reduce the number of nonconforming units since the mean is centered between
the specification limits.
P((X − μ)/σ < (5.05 − 5)/σ) − P((X − μ)/σ < (4.95 − 5)/σ) = 0.999
P(Z < 0.05/σ) − P(Z < −0.05/σ) = 0.999
Therefore, 3.29 = 0.05/σ ⇒ σ = 0.05/3.29 = 0.01519
So: P(Z < 0.05/0.01519) − P(Z < −0.05/0.01519) = P(Z < 3.29) − P(Z < −3.29) = 0.9995 − 0.0005 = 0.999
Reducing σ to 0.01519 is needed to have all but 1 out of 1,000 units conform to specifications.
4.22. The life of an automotive battery is normally distributed with mean 900 days and standard deviation 35
days. What fraction of these batteries would be expected to survive beyond 1,000 days?
X ~ N(μ = 900, σ = 35)
P(X > 1000) = P(Z > (1000 − 900)/35) = P(Z > 2.86) = 1 − 0.99788 = 0.00212
About 0.212% of the batteries are expected to survive beyond 1,000 days.
4.23. A light bulb has a normally distributed light output with mean 5,000 end foot-candles and standard
deviation of 50 end foot-candles. Find a lower specification limit such that only 0.5% of the bulbs will not
exceed this limit.
X ~ N(μ = 5000, σ = 50)
Look up the inverse value of z in a normal probability table: z = (LSL − 5000)/50 = −2.5758
LSL = 5000 + 50z = 5000 − 2.5758(50) = 4,871.21
4.24. The specifications on an electronic component in a target-acquisition system are that its life must be
between 5,000 and 10,000 h. The life is normally distributed with mean 7,500 h. The manufacturer realized
a price of $10 per unit produced; however, defective units must be replaced at a cost of $5 to the
manufacturer. Two different manufacturing processes can be used, both of which have the same mean life.
However, the standard deviation of life for process 1 is 1000h, whereas for process 2 it is only 500h.
Production costs for process 2 are twice those for process 1. What value of production costs will determine
the selection between processes 1 and 2?
Process 1:
P(5000 < X < 10000) = P(Z < (10000 − 7500)/1000) − P(Z < (5000 − 7500)/1000)
= P(Z < 2.5) − P(Z < −2.5) = 0.99379 − 0.00621 = 0.98758
Therefore, in process 1, 98.758% of units produced are within spec and 1.242% are outside the
specification limits, i.e., defective.
Profit from process 1 (per 100 units, with production cost c1 per unit):
98.758(10) − 1.242(5) − 100c1 = 981.37 − 100c1
Process 2:
P(5000 < X < 10000) = P(Z < (10000 − 7500)/500) − P(Z < (5000 − 7500)/500)
= P(Z < 5.0) − P(Z < −5.0) = 0.9999997129 − 0.0000002871 = 0.9999994
Therefore, in process 2, 99.99994% of units produced are within spec and 0.00006% are outside the
specification limits, i.e., defective.
Profit from process 2 (per 100 units, with production cost 2c1 per unit):
99.99994(10) − 0.00006(5) − 100(2c1) = 999.9991 − 200c1
We will use process 2 when the profit from process 2 exceeds the profit from process 1:
999.9991 − 200c1 > 981.37 − 100c1
18.6291 > 100c1
0.186291 > c1
Therefore, process 2 is preferred when the production cost per unit, c1, is less than $0.1863.
4.25. The tensile strength of fiber used in manufacturing cloth is of interest to the purchaser. Previous
experience indicates that the standard deviation of tensile strength is 2 psi. A random sample of eight fiber
specimens is selected, and the average tensile strength is found to be 127 psi.
x̄ = 127, σ = 2, n = 8
a. Build a 95% lower confidence interval on the mean tensile strength.
μ ≥ x̄ − z(0.05)·σ/√n = 127 − 1.645(2)/√8 = 125.8369
For 95% of the CIs constructed, the mean tensile strength is expected to be equal to or exceed 125.8369 psi.
4.26. Payment times of 100 randomly selected customers this month had an average of 35 days. The standard
deviation from this group was 2 days.
x̄ = 35, σ = 2, n = 100
a. Build a 90% two-sided confidence interval on the mean payment time.
x̄ − z(α/2)·σ/√n ≤ μ ≤ x̄ + z(α/2)·σ/√n
35 − 1.645(2)/√100 ≤ μ ≤ 35 + 1.645(2)/√100
34.671 ≤ μ ≤ 35.329
b. Build a 99% two-sided confidence interval on the mean payment time.
35 − 2.576(2)/√100 ≤ μ ≤ 35 + 2.576(2)/√100
34.4848 ≤ μ ≤ 35.5152
c. Is it possible that the average time is 30 days?
It is highly unlikely that the average time is 30 days. In 99% of confidence intervals constructed, the
average payment time will be between 34.5 and 35.5 days.
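Both intervals above use the known-σ z interval, which is easy to wrap in a helper. A minimal Python sketch, assuming SciPy is available:

```python
import math
from scipy import stats  # assumed available

# Two-sided z confidence interval for a mean with known sigma.
def z_interval(xbar, sigma, n, conf):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

print(z_interval(35, 2, 100, 0.90))  # (34.671, 35.329)
print(z_interval(35, 2, 100, 0.99))  # (34.485, 35.515)
```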
4.27. The service life of a battery used in a cardiac pacemaker is assumed to be normally distributed. A random
sample of 10 batteries is subjected to an accelerated life test by running them continuously at an elevated
temperature until failure, and the following lifetimes (in hours) are obtained: 25.5, 26.1, 26.8, 23.2, 24.2,
28.4, 25.0, 27.8, 27.3, and 25.7. The manufacturer wants to be certain that the mean battery life exceeds 25
hours in accelerated life testing.
x̄ = 26.0, s = 1.6248, n = 10, t(0.05, 9) = 1.833
26.0 − 1.833(1.6248)/√10 ≤ μ ≤ 26.0 + 1.833(1.6248)/√10
25.0581 ≤ μ ≤ 26.9419
b. Construct a normal probability plot of the battery life data. What conclusions can you draw?
(Normal probability plot of service life: x-axis, Service Life from 20 to 32 hours; y-axis, Percent.)
The normality assumption is reasonable. Also, the manufacturer can be confident that the mean battery life
exceeds 25 hours, since the constructed confidence interval lies entirely above 25 hours.
NOTE: Conclusions on the confidence interval are highly dependent on the number of significant digits used
throughout the calculations.
4.28. A local neighborhood has just installed speed bumps to slow traffic. Two weeks after the installation the
city recorded the following speeds 500 feet after the last speed bump: 29, 29, 31, 42, 30, 24, 30, 27, 33, 44,
32, 30, 24, 35, 34, 30, 23, 35, 27.
μ ≤ x̄ + t(α, n−1)·s/√n
μ ≤ 30.85 + 2.539(5.3830)/√20
μ ≤ 33.91
The hypothesized average speed of 25 is contained in the 99% CI. Therefore, there is no evidence that the
mean speed is less than 25 mph.
(Normal probability plot of speed: x-axis, Speed from 10 to 50 mph; y-axis, Percent.)
There is one value outside the 95% confidence bands of the plot; however, it is reasonable to assume the
data come from a normal population.
4.29. A company has just purchased a new billboard near the freeway. Sales for the past 10 days have been 483,
532, 444, 510, 467, 461, 450, 444, 540, and 499. Build a 95% two-sided confidence interval on sales. Is
there any evidence that the billboard has increased sales from its previous average of 475 per day?
x̄ = 483.0, s = 35.755, n = 10, t(0.025, 9) = 2.262
483.0 − 2.262(35.755)/√10 ≤ μ ≤ 483.0 + 2.262(35.755)/√10
457.4217 ≤ μ ≤ 508.5783
Since 475 is within the confidence interval, there is no evidence that the billboard has increased sales from
its previous average of 475 per day.
NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.
4.30. A machine is used to fill containers with a liquid product. Fill volume can be assumed to be normally
distributed. A random sample of 10 containers is selected, and the net contents (oz.) are as follows: 12.03,
12.01, 12.04, 12.02, 12.05, 11.98, 11.96, 12.02, 12.05, and 11.99. Suppose that the manufacturer wants to
be sure that the mean net contents exceed 12 oz. What conclusions can be drawn from the data using a 95%
two-sided confidence interval on the mean fill volume?
12.015 − 2.262(0.0303)/√10 ≤ μ ≤ 12.015 + 2.262(0.0303)/√10
11.9933 ≤ μ ≤ 12.0367
12 is contained in the 95% CI, so we do not have evidence at the 5% significance level that the mean fill
volume exceeds 12 oz.
4.31. A company is evaluating the quality of aluminum rods received in a recent shipment. Diameters of
aluminum alloy rods produced on an extrusion machine are known to have a standard deviation of 0.0001
in. A random sample of 25 rods has an average diameter of 0.5046 in. Test whether or not the mean rod
diameter is 0.5025 using a two-sided confidence interval.
0.5046 − 1.96(0.0001)/√25 ≤ μ ≤ 0.5046 + 1.96(0.0001)/√25
0.504561 ≤ μ ≤ 0.504639
Since 0.5025 is not within the confidence interval, we can conclude that the mean diameter is not 0.5025 at
the 5% significance level.
4.32. The output voltage of a power supply is assumed to be normally distributed. Sixteen observations taken at
random on voltage are as follows: 10.35, 9.30, 10.00, 9.96, 11.65, 12.00, 11.25, 9.58, 11.54, 9.95, 10.28,
8.37, 10.44, 9.25, 9.38, and 10.85.
a. Test the hypothesis that the mean voltage equals 12 V using a 95% two-sided confidence interval.
x̄ = 10.259, s = 0.999, n = 16, t(0.025, 15) = 2.131
10.259 − 2.131(0.999)/√16 ≤ μ ≤ 10.259 + 2.131(0.999)/√16
9.7268 ≤ μ ≤ 10.7912
Since 12 is not in the 95% CI, we conclude that the mean voltage is not equal to 12 V.
b. Test the hypothesis that the variance equals 1 V using a 95% two-sided confidence interval.
(n − 1)s²/χ²(α/2, n−1) ≤ σ² ≤ (n − 1)s²/χ²(1−α/2, n−1)
15(0.999)²/χ²(0.025, 15) ≤ σ² ≤ 15(0.999)²/χ²(0.975, 15)
15(0.999)²/27.49 ≤ σ² ≤ 15(0.999)²/6.27
0.5446 ≤ σ² ≤ 2.3876
Since 1 is contained in the CI, we cannot conclude that the variance is not equal to 1.
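The chi-square interval for the variance can be reproduced with exact percentiles instead of rounded table values. A minimal Python sketch, assuming SciPy is available:

```python
from scipy import stats  # assumed available

# Two-sided chi-square CI for sigma^2 (Exercise 4.32b: n = 16, s = 0.999).
n, s2, alpha = 16, 0.999**2, 0.05
lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
print(lower, upper)  # ~(0.545, 2.39), matching the table-based interval
```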
4.33. Last month, a large national bank’s average payment time was 33 days with a standard deviation of 4 days.
This month, the average payment time was 33.5 days with a standard deviation of 4 days. They had 1,000
customers both months.
a. Build a 95% confidence interval on the difference between this month’s and last month’s payment times.
Is there evidence that payment times are increasing?
Since the lower limit for the difference is positive, there is evidence to state that the payment times are
increasing.
x̄_Current − x̄_Previous − z(0.025)√(σ²_Current/n_Current + σ²_Previous/n_Previous) ≤ μ_Current − μ_Previous
≤ x̄_Current − x̄_Previous + z(0.025)√(σ²_Current/n_Current + σ²_Previous/n_Previous)
33.5 − 33 − 1.96√(4²/1000 + 4²/1000) ≤ μ_Current − μ_Previous ≤ 33.5 − 33 + 1.96√(4²/1000 + 4²/1000)
0.149 ≤ μ_Current − μ_Previous ≤ 0.851
We still arrive at the same conclusion that payment times have increased given the entire confidence
interval is greater than zero.
4.34. Two machines are used for filling glass bottles with a soft-drink beverage. The filling processes have
known standard deviations σ 1=0.010 liter and σ 2=0.015 liter, respectively. A random sample of
n1=25 bottles from machine 1 and n2 =20 bottles from machine 2 results in average net contents of 2.04
liters and 2.07 liters from machine 1 and 2, respectively. Test the hypothesis that both machines fill to the
same net contents using a 95% two-sided confidence interval on the difference of fill volume. What are
your conclusions?
x̄1 − x̄2 − z(α/2)√(σ1²/n1 + σ2²/n2) ≤ μ1 − μ2 ≤ x̄1 − x̄2 + z(α/2)√(σ1²/n1 + σ2²/n2)
2.04 − 2.07 − 1.96√((0.010)²/25 + (0.015)²/20) ≤ μ1 − μ2 ≤ 2.04 − 2.07 + 1.96√((0.010)²/25 + (0.015)²/20)
−0.0377 ≤ μ1 − μ2 ≤ −0.0223
Since 0 is not included in the 95% confidence interval, we can conclude that there is a difference of fill
volume between the two filling processes.
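The two-sample known-σ interval generalizes directly. A minimal Python sketch for Exercise 4.34, assuming SciPy is available:

```python
import math
from scipy import stats  # assumed available

# Two-sided CI on mu1 - mu2 when the process standard deviations are known.
def two_sample_z_ci(x1, s1, n1, x2, s2, n2, conf=0.95):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    half = z * math.sqrt(s1**2 / n1 + s2**2 / n2)
    return x1 - x2 - half, x1 - x2 + half

print(two_sample_z_ci(2.04, 0.010, 25, 2.07, 0.015, 20))
# (-0.0377, -0.0223): zero is excluded, so the fill volumes differ
```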
4.35. A supplier received results on the hardness of metal from two different hardening processes: (1) salt-water
quenching and (2) oil quenching. The results are shown in Table 4E.4.
Salt Quench   Oil Quench
145 152
150 150
153 147
148 155
141 140
152 146
146 158
154 152
139 151
148 143
x̄_SQ = 147.6, s²_SQ = 24.7111, n_SQ = 10; x̄_OQ = 149.4, s²_OQ = 29.8222, n_OQ = 10
Degrees of freedom (Welch approximation):
ν = (s²_SQ/n_SQ + s²_OQ/n_OQ)² / [ (s²_SQ/n_SQ)²/(n_SQ − 1) + (s²_OQ/n_OQ)²/(n_OQ − 1) ]
= (24.7111/10 + 29.8222/10)² / [ (24.7111/10)²/9 + (29.8222/10)²/9 ] = 17.3992 ⇒ 17
x̄_SQ − x̄_OQ − t(0.025, 17)√(s²_SQ/n_SQ + s²_OQ/n_OQ) ≤ μ_SQ − μ_OQ
≤ x̄_SQ − x̄_OQ + t(0.025, 17)√(s²_SQ/n_SQ + s²_OQ/n_OQ)
147.6 − 149.4 − 2.1098√(24.7111/10 + 29.8222/10) ≤ μ_SQ − μ_OQ
≤ 147.6 − 149.4 + 2.1098√(24.7111/10 + 29.8222/10)
−6.7269 ≤ μ_SQ − μ_OQ ≤ 3.1269
(s²_SQ/s²_OQ)·F(1−0.05/2, 9, 9) ≤ σ²_SQ/σ²_OQ ≤ (s²_SQ/s²_OQ)·F(0.05/2, 9, 9)
(24.7111/29.8222)(0.2484) ≤ σ²_SQ/σ²_OQ ≤ (24.7111/29.8222)(4.0260)
0.2058 ≤ σ²_SQ/σ²_OQ ≤ 3.3360
(Normal probability plot of hardness: StDev = 5.461, N = 10, AD = 0.169, p-value = 0.906; x-axis from
130 to 170; y-axis, percent.)
For the confidence interval based on the mean difference, zero is included within the interval. For the
confidence interval based on the ratio of the variances, one is included within the interval. Hence, there is
no significant difference in the population means or variances.
4.36. A random sample of 200 printed circuit boards contains 18 defective or nonconforming units. Estimate the
process fraction nonconforming. Using a 90% two-sided confidence interval, evaluate whether or not it is
possible that the true fraction nonconforming in this process is 10%.
Process fraction nonconforming estimate: p̂ = 18/200 = 0.09
0.09 − 1.645√(0.09(0.91)/200) ≤ p ≤ 0.09 + 1.645√(0.09(0.91)/200)
0.0567 ≤ p ≤ 0.1233
Since 0.10 falls inside the 90% CI, it is possible that the true fraction nonconforming is 10%.
4.37. A random sample of 500 connecting rod pins contains 65 nonconforming units. Estimate the process
fraction nonconforming. Construct a 90% upper confidence interval on the true process fraction
nonconforming. Is it possible that the true fraction nonconforming defective is 10%?
Process fraction nonconforming estimate: p̂ = 65/500 = 0.13
p ≤ 0.13 + 1.2816√(0.13(1 − 0.13)/500)
p ≤ 0.1493
For 90% of the CIs constructed, the true defective fraction is less than 14.93% (evidence suggests p might
be larger than 0.10)
4.38. During shipment testing, product was flown from Indianapolis to Seattle and back again to simulate 4
takeoffs and landings, which can cause cans to open due to pressure changes. One hundred prototype units
were shipped and 15 opened. Using a 90% two-sided confidence interval, determine if it is possible that the
average failure rate is 11%.
p̂ = 15/100 = 0.15, α = 0.10, z(0.05) = 1.645
p̂ − z(α/2)√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + z(α/2)√(p̂(1 − p̂)/n)
0.15 − 1.645√(0.15(0.85)/100) ≤ p ≤ 0.15 + 1.645√(0.15(0.85)/100)
0.0913 ≤ p ≤ 0.2087
Since 0.11 (11%) is within the 90% CI, it is possible that the average failure rate is 11%. We cannot
conclude that the average failure rate is not 11%.
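The Wald (normal-approximation) proportion interval used here is straightforward to code. A minimal Python sketch for the 15-of-100 result, assuming SciPy is available:

```python
import math
from scipy import stats  # assumed available

# Two-sided normal-approximation CI for a binomial proportion.
def wald_ci(x, n, conf=0.90):
    p = x / n
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

print(wald_ci(15, 100))  # ~(0.0913, 0.2087); 0.11 lies inside
```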
4.39. Continuation of Exercise 4.38. The company has made improvements and has repeated the experiment. In
this iteration, 12 opened. Using a 95% two-sided confidence interval on the difference in proportions, is it
possible to cite improvement?
p̂1 = 0.15, n1 = 100, p̂2 = 12/100 = 0.12, n2 = 100, α = 0.05, z(0.025) = 1.96
p̂1 − p̂2 − z(α/2)√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2) ≤ p1 − p2
≤ p̂1 − p̂2 + z(α/2)√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)
0.15 − 0.12 − 1.96√(0.15(0.85)/100 + 0.12(0.88)/100) ≤ p1 − p2
≤ 0.15 − 0.12 + 1.96√(0.15(0.85)/100 + 0.12(0.88)/100)
−0.0646 ≤ p1 − p2 ≤ 0.1246
Since 0 is contained in the confidence interval, we cannot conclude that there was an improvement in the
failure rate.
4.40. Of 1,000 customers, 200 had payments greater than 30 days last month. This month, there are 1,100
customers, of which 230 had payments greater than 30 days.
a. Estimate the fraction late for last month and this month.
Let p1 represent the fraction late last month and p2 represent the fraction late this month.
p̂1 = 200/1000 = 0.20, p̂2 = 230/1100 = 0.2091
b. Construct a 90% confidence interval on the difference in the percentage of late payments.
p̂1 − p̂2 − z(α/2)√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2) ≤ p1 − p2
≤ p̂1 − p̂2 + z(α/2)√(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)
0.20 − 0.2091 − 1.645√(0.20(0.80)/1000 + 0.2091(0.7909)/1100) ≤ p1 − p2
≤ 0.20 − 0.2091 + 1.645√(0.20(0.80)/1000 + 0.2091(0.7909)/1100)
−0.0381 ≤ p1 − p2 ≤ 0.0199
Since zero is within the 90% CI, there is no statistical evidence that there is a difference in the percentage
of late payments between the two months.
4.41. A new purification unit is installed in a chemical process. Before and after installation data was collected
regarding the percentage of impurity:
Before (1): Sample mean = 9.85, Sample variance = 6.79, Number of samples = 10
After (2): Sample mean = 8.08, Sample variance = 6.18, Number of samples = 8
a. Can you conclude that the two variances are equal using a two-sided 95% confidence interval?
(s1²/s2²)·F(1−α/2, n2−1, n1−1) ≤ σ1²/σ2² ≤ (s1²/s2²)·F(α/2, n2−1, n1−1)
(6.79/6.18)·F(0.975, 7, 9) ≤ σ1²/σ2² ≤ (6.79/6.18)·F(0.025, 7, 9)
(6.79/6.18)(0.2073) ≤ σ1²/σ2² ≤ (6.79/6.18)(4.1970)
0.2278 ≤ σ1²/σ2² ≤ 4.6113
Since one is included in the interval, we cannot conclude that the two variances are different.
b. Can you conclude that the new purification device has reduced the mean percentage of impurity using a
two-sided 95% confidence interval?
Degrees of freedom (Welch approximation):
ν = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]
= (6.79/10 + 6.18/8)² / [ (6.79/10)²/9 + (6.18/8)²/7 ] = 15.4373 ⇒ 15
9.85 − 8.08 − 2.131√(6.79/10 + 6.18/8) ≤ μ1 − μ2 ≤ 9.85 − 8.08 + 2.131√(6.79/10 + 6.18/8)
−0.7974 ≤ μ1 − μ2 ≤ 4.3374
Since zero is within the interval, we cannot conclude that the new device has reduced the mean percentage
of impurity.
4.42. Two different types of glass bottles are suitable for use by a soft-drink beverage bottler. The internal
pressure strength (psi) of the bottle is an important quality characteristic. It is known that σ1 = σ2 = 3.0
psi. From a random sample of n1 = n2 = 16 bottles, the mean pressure strengths are observed to be
x̄1 = 175.8 psi and x̄2 = 181.3 psi. The company will not use bottle design 2 unless its pressure strength
exceeds that of bottle design 1 by at least 5 psi. Based on the sample data, should they use bottle design 2 if
they want no larger than 5% chance of excluding bottle 2 if it meets this target?
μ2 − μ1 ≥ x̄2 − x̄1 − z(α)√(σ1²/n1 + σ2²/n2)
μ2 − μ1 ≥ 181.3 − 175.8 − 1.645√(9/16 + 9/16)
μ2 − μ1 ≥ 3.7553
Since the 95% CI contains 5, there is no evidence that the pressure strength of bottle design 2 exceeds
design 1 by 5 psi.
4.43. The diameter of a metal rod is measured by 12 inspectors, each using both a micrometer caliper and a
vernier caliper. The results are shown in Table 4E.5. Is there a difference between the mean measurements
produced by the two types of caliper? Use alpha = 0.01.
Since the same inspector measures the rod with two calipers, a confidence interval for paired data should be
used.
d̄ = (1/n)∑dᵢ = (−0.001 + ⋯ + (−0.001))/12 = −0.005/12 = −0.000416667
s_d = √( (1/(n − 1))∑(dᵢ − d̄)² )
= √( (1/11)((−0.001 − (−0.000416667))² + ⋯ + (−0.001 − (−0.000416667))²) ) = 0.00131
d̄ − t(α/2, n−1)·s_d/√n ≤ μ_d ≤ d̄ + t(α/2, n−1)·s_d/√n
−0.000416667 − 3.106(0.00131)/√12 ≤ μ_d ≤ −0.000416667 + 3.106(0.00131)/√12
−0.00159 ≤ μ_d ≤ 0.000758
Since zero is within the confidence interval, there is no evidence to suggest the mean measurements
produced by the two types of calipers are different.
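The paired-t interval can be reproduced from the summary statistics above. A minimal Python sketch, assuming SciPy is available:

```python
import math
from scipy import stats  # assumed available

# Paired-difference t interval (Exercise 4.43): d-bar = -0.000416667,
# s_d = 0.00131, n = 12, alpha = 0.01.
dbar, sd, n, alpha = -0.000416667, 0.00131, 12, 0.01
t = stats.t.ppf(1 - alpha / 2, n - 1)   # t(0.005, 11) = 3.106
half = t * sd / math.sqrt(n)
print(dbar - half, dbar + half)         # ~(-0.00159, 0.00076)
```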
4.44. An experiment was conducted to investigate the filling capability of packaging equipment at a winery in
Newberg, Oregon. Twenty bottles of Pinot Gris were randomly selected and the fill volume (in mL)
measured. Assume that fill volume has a normal distribution. The data are as follows: 753, 751, 752, 753,
753, 753, 752, 753, 754, 754, 752, 751, 752, 750, 753, 755, 753, 756, 751, and 750.
(n − 1)s²/χ²(α/2, n−1) ≤ σ² ≤ (n − 1)s²/χ²(1−α/2, n−1)
19(1.5381)²/χ²(0.025, 19) ≤ σ² ≤ 19(1.5381)²/χ²(0.975, 19)
19(1.5381)²/32.85 ≤ σ² ≤ 19(1.5381)²/8.91
1.3683 ≤ σ² ≤ 5.0448
1.1697 ≤ σ ≤ 2.2461
Since one is not contained in the 95% CI, there is evidence that the standard deviation of fill volume is not
equal to one.
b. Does it seem reasonable to assume that fill volume has a normal distribution?
(Normal probability plot of volume: x-axis, Volume from 748 to 758 mL; y-axis, percent.)
x̄1 = 180.43, s1² = 353.0155, n1 = 10; x̄2 = 154.649, s2² = 258.3751, n2 = 10
a. Build a 95% two-sided confidence interval for the differences of the average of the recovery times. Is there
evidence to support that the facilities are different?
Degrees of freedom (Welch approximation):
ν = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]
= (353.0122/10 + 258.3751/10)² / [ (353.0122/10)²/9 + (258.3751/10)²/9 ] = 17.5788 ⇒ 17
x̄1 − x̄2 − t(0.025, 17)√(s1²/n1 + s2²/n2) ≤ μ1 − μ2 ≤ x̄1 − x̄2 + t(0.025, 17)√(s1²/n1 + s2²/n2)
180.43 − 154.649 − 2.1098√(353.0122/10 + 258.3751/10) ≤ μ1 − μ2
≤ 180.43 − 154.649 + 2.1098√(353.0122/10 + 258.3751/10)
9.2841 ≤ μ1 − μ2 ≤ 42.2779
The confidence interval is positive; thus, there is evidence that recovery times are larger at Northbrook.
b. Build a 95% two-sided confidence interval for the ratio of the variance of recovery times. Is there evidence
to support that the facilities are different?
(s1²/s2²)·F(1−α/2, n2−1, n1−1) ≤ σ1²/σ2² ≤ (s1²/s2²)·F(α/2, n2−1, n1−1)
(353.0122/258.3751)·F(0.975, 9, 9) ≤ σ1²/σ2² ≤ (353.0122/258.3751)·F(0.025, 9, 9)
(353.0122/258.3751)(0.2484) ≤ σ1²/σ2² ≤ (353.0122/258.3751)(4.0260)
0.3394 ≤ σ1²/σ2² ≤ 5.5006
Since one is included in the interval, we cannot conclude that variances are different.
c. Build a 95% two-sided confidence interval for the differences on the percentage of motion restored. Is
there evidence to support that the facilities are different?
0.7490 − 0.674 − 1.96√(0.7490(1 − 0.7490)/10 + 0.674(1 − 0.674)/10) ≤ p1 − p2
≤ 0.7490 − 0.674 + 1.96√(0.7490(1 − 0.7490)/10 + 0.674(1 − 0.674)/10)
−0.3208 ≤ p1 − p2 ≤ 0.4708
Since zero is within the interval, there is not enough evidence to conclude the facilities are different in
terms of the percentage of motion restored.
4.46. An article in Solid State Technology (May 1987) describes an experiment to determine the effect of flow
rate on etch uniformity on a silicon wafer used in integrated-circuit manufacturing. Three flow rates are
tested, and the resulting uniformity (in percent) is observed for six test units at each flow rate. The data are
shown in Table 4E.7.
C2F6 Flow (SCCM)    1    2    3    4    5    6
125 2.7 2.6 4.6 3.2 3.0 3.8
160 4.6 4.9 5.0 4.2 3.6 4.2
200 4.6 2.9 3.4 3.5 4.1 5.1
a. Does flow rate affect etch uniformity? Answer this question by using an analysis of variance.
Source DF SS MS F P
Flow Rate 2 3.648 1.824 3.59 0.053
Error 15 7.630 0.509
Total 17 11.278
Since the p-value = 0.053 > α = 0.05, we cannot conclude that the flow rate affects the etch uniformity.
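The same F statistic can be verified directly from the Table 4E.7 data; a minimal sketch using scipy:

```python
from scipy import stats

flow_125 = [2.7, 2.6, 4.6, 3.2, 3.0, 3.8]
flow_160 = [4.6, 4.9, 5.0, 4.2, 3.6, 4.2]
flow_200 = [4.6, 2.9, 3.4, 3.5, 4.1, 5.1]

F, p = stats.f_oneway(flow_125, flow_160, flow_200)
print(F, p)   # F ~ 3.59, p ~ 0.053, matching the ANOVA table above
```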
b. Construct a boxplot of the etch uniformity data. Use this plot, together with the analysis of variance results, to determine which gas flow rate would be best in terms of etch uniformity (a small percentage is best).
[Figure: boxplots of etch uniformity (%) by C2F6 flow rate (125, 160, 200 SCCM).]
The boxplots overlap, which supports the conclusion from the ANOVA output: there does not seem to be much difference in etch uniformity among the three flow rates. However, the etch uniformity at flow rate 160 has a smaller variance than at the other two flow rates.
[Figure: residuals versus fitted values for the etch-uniformity ANOVA.]
The residuals for fitted value of 4.4 have a smaller variance compared to the residuals at the other fitted
values; this supports the fact that the variance of etch uniformity at flow rate 160 was smaller than at the
other flow rates. Otherwise, the residuals appear to be random.
[Figure: normal probability plot of the ANOVA residuals.]
4.47. An article in the ACI Materials Journal (Vol. 84, 1987, pp. 213-216) describes several experiments
investigating the rodding of concrete to remove entrapped air. A 3-in diameter cylinder was used, and the
number of times this rod was used is the design variable. The resulting compressive strength of the
concrete specimen is the response. The data are shown in Table 4E.8.
a. Is there any difference in compressive strength due to rodding level? Answer the question by using
analysis of variance with alpha = 0.05.
Source DF SS MS F P
Rodding level 3 28633 9544 1.87 0.214
Error 8 40933 5117
Total 11 69567
Since p-value = 0.214 > 0.05, there is no evidence that suggests rodding level influences compressive
strength.
NOTE: Let us now assume only data on levels 10 and 15 is available, and that we will use a 95% two-sided
confidence interval to evaluate whether there are any differences in terms of compressive strength between
the two rodding levels.
$\nu=\frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{\left(s_1^2/n_1\right)^2}{n_1-1}+\frac{\left(s_2^2/n_2\right)^2}{n_2-1}}=\frac{\left(\frac{2700}{3}+\frac{6033.3333}{3}\right)^2}{\frac{\left(2700/3\right)^2}{3-1}+\frac{\left(6033.3333/3\right)^2}{3-1}}=3.4914\Rightarrow\nu=3$
Since zero is within the confidence interval, we cannot conclude there is a difference in the compressive
strength associated with the two different rodding levels.
b. Construct box plots of compressive strength by rodding level. Provide a practical interpretation of the
plots.
MTB > Graph > Boxplot > One Y > With Groups
[Figure: boxplot of compressive strength by rodding level (10, 15, 20, 25).]
There does not seem to be a significant difference in mean compressive strength between rodding levels 15 and 20, or between levels 10 and 25. Rodding level 25 shows remarkably smaller variability than the other levels.
c. Construct a normal probability plot of the residuals from this experiment. Does the assumption of a
normal distribution for compressive strength seem reasonable?
[Figure: normal probability plot of the residuals (−200 to 200 psi).]
4.48. An article in Environment International (Vol. 18, No. 4, 1992) describes an experiment in which the amount
of radon released in showers was investigated. Radon-enriched water was used in the experiment, and six
different orifice diameters were tested in shower heads. The data from the experiment are shown in Table
4E.9.
a. Does the size of orifice affect the mean percentage of radon released? Use the analysis of variance.
MTB > Stat > ANOVA > One-way
Source DF SS MS F P
Diameter 5 1133.38 226.68 30.85 0.000
Error 18 132.25 7.35
Total 23 1265.63
Since p-value < 0.05, there is evidence that suggests orifice diameter influences the percentage radon
released.
[Figure: normal probability plot of the ANOVA residuals (−5.0 to 5.0).]
A plot of the residuals versus the fitted values appears random; the assumption of constant variance seems
reasonable.
[Figure: residuals versus fitted values (response is Radon Released), fitted values 60–85.]
Orifice Diameters of 0.37 and 0.51 had similar effects on percent radon released; orifice diameters 1.40 and
1.99 had the smallest percent radon released.
CHAPTER 5
5.1. What are chance and assignable causes of variability? What part do they play in the operation and
interpretation of a Shewhart control chart?
Chance cause variability is the natural variability or “background noise” present in a system. This type of
variability is essentially unavoidable. An assignable cause of variability is an identifiable source of
variation in a process and is not a part of the normal variability of the process. For example, an assignable
cause could be an operator error or defective materials. A process that only has chance causes is said to be
in statistical control. Assignable causes will be flagged in a control chart so that the process can be
investigated.
5.2. Discuss the relationship between a control chart and statistical hypothesis testing.
A control chart is a test of the hypothesis that the process is in a state of statistical control. A point plotting
outside the control limits is equivalent to rejecting the hypothesis that the process is in a state of statistical
control. A point plotting within the control limits is equivalent to failing to reject the hypothesis that the
process is in a state of statistical control.
5.3. Discuss type I and type II errors relative to the control chart. What practical implication in terms of process
operation do these two types of errors have?
A type I error occurs when you reject the null hypothesis when in fact the null hypothesis is true. A type I
error in control charts is a false-positive signal, a point that is outside the control limits that is actually in
control. When we use $z_{\alpha/2}=3$ to construct the control limits, the probability of a type I error is 0.0027
(assuming a normal distribution). A type II error is when you fail to reject the null hypothesis when in fact
the null hypothesis is false. A type II error in a control chart is a false-negative, a point that was out of
control, but was not detected by the control chart.
5.4. What is meant by the statement that a process is in a state of statistical control?
A process can have natural or common cause variation. This type of variation is usually unavoidable. A
system that only has common cause variability is considered to be in a state of statistical control. When
other types of variability outside of the natural variability of the system occur, the process is no longer in
statistical control.
5.5. If a process is in a state of statistical control, does it necessarily follow that all or nearly all of the units of
product produced will be within the specification limits?
No; your process can be in a state of statistical control, but the process is not operating at the desired
specification limits. In this case, a redesign or change to the process may be necessary in order for the
process to operate within the specification limits.
5.6. Discuss the logic underlying the use of three-sigma limits on Shewhart control charts. How will the chart
respond if narrower limits are chosen? How will it respond if wider limits are chosen?
The use of three-sigma limits is common because this range contains 99.7% of points within a normal
distribution. Narrower limits will detect out-of-control points more quickly, but will increase the type I error rate, the probability of false positives. Wider limits will decrease the type I error rate, but will lead to more false negatives, since out-of-control behavior that stays inside the wider limits goes undetected.
5.7. Discuss the rational subgroup concept. What part does it play in control chart analysis?
A rational subgroup is a sample selected so that if assignable causes are present, the chance for differences
between subgroups is maximized, while the chance for differences due to the assignable cause within a
subgroup will be minimized. Within subgroup variability determines the control limits. If the subgroups are
poorly chosen so that assignable causes occur within the subgroup, the control limits will be wider than
necessary. This will reduce the control chart’s ability to detect shifts. Careful consideration must be used
when selecting rational subgroups to ensure the control charts are effective in detecting shifts. Other
relevant points: the size of the sample and the frequency of sampling (For further details refer to pages
168-169).
5.8. When taking samples of subgroups from a process, do you want assignable causes occurring within the
subgroups or between them? Fully explain your answer.
We want assignable causes occurring between subgroups. Within subgroup variability determines the
control limits, so if there are assignable causes within the subgroups, the control limits will be wider than
necessary. This would decrease the chart’s ability to detect shifts.
5.9. A molding process uses a five-cavity mold for a part used in an automotive assembly. The wall thickness of
the part is the critical quality characteristic. It has been suggested to use X and R charts to monitor this
process, and to use as the subgroup or sample all five parts that result from a single “shot” of the machine.
What do you think of this sampling strategy? What impact does it have on the ability of the charts to detect
assignable causes?
This sampling approach is appropriate when the purpose is to detect process shifts. It will minimize the
chance of variability due to assignable causes within the sample and maximize chance of variability
between samples because the samples consist of units that were produced at the same time. It will provide a
good estimate of the standard deviation of the process as well.
5.10. A manufacturing process produces 500 parts per hour. A sample part is selected about every half-hour, and
after five parts are obtained, the average of these five measurements is plotted on a control chart. (a) Is this
an appropriate sampling scheme if the assignable cause in the process results in an instantaneous upward
shift in the mean that is of very short duration? (b) If your answer is no, propose an alternative procedure.
a. No, this sampling scheme would not be appropriate for this situation. Because a part is selected only
once every half hour, the sample will likely miss the short shift in the mean. The shift would go
undetected.
b. Because there could be quick shifts in the mean, small, more frequent samples would be more
appropriate.
5.11. Consider the sampling scheme proposed in Exercise 5.10. Is this scheme appropriate if the assignable cause
in the process results in a slow, prolonged upward shift? If your answer is no, propose an alternative
procedure.
Yes, this sampling scheme is more appropriate if the assignable cause results in a slow upward shift.
5.12. If the time order of production has not been recorded in a set of data from a process, is it possible to detect
the presence of assignable causes?
Yes, provided that the data are uncorrelated and stationary. However, knowing the time order may help
identify the cause of out of control behavior. If the control chart is based on subgroups, you would also
need to know which observations belong in which subgroups in order to distinguish within subgroup
variability and between-subgroup variability. Without the time order you also cannot detect unusual runs or trends in the process, which could indicate that the mean is slowly rising or falling.
5.13. How do the costs of sampling, the costs of producing an excessive number of defective units, and the costs
of searching for assignable causes impact the choice of parameters of a control chart?
The costs of sampling will impact the sampling scheme for constructing the control charts. If sampling is
expensive, then smaller samples will be required which impacts the value of n . If the cost of producing an
excessive number of defective units is high, then we would want to be able to detect an assignable cause as
soon as possible, and therefore may decrease the width of the control limits. Narrower limits will produce more false positives, however. Therefore, if the cost of searching for assignable causes is high, we would not want to make the control limits too narrow.
5.14. Consider the control chart shown here. Does the pattern appear random?
No. The last four points plot at a distance of one sigma or beyond from the center line.
5.15. Consider the control chart shown here. Does the pattern appear random?
5.16. Consider the control chart shown here. Does the pattern appear random?
Yes, the pattern appears random.
5.17. Apply the Western Electric rules to the control chart in Exercise 5.14. Are any of the criteria for declaring
the process out of control satisfied?
The process can be declared out of control because the last four points plot at a distance of one sigma or beyond from the center line, which violates the Western Electric rules.
5.18. Apply the Western Electric rules to the control chart presented in Exercise 5.16. Would these rules result in
any out-of-control signals?
Control Chart   Behavior
(a)             (2)
(b)             (4)
(c)             (5)
(d)             (1)
(e)             (3)
5.20. The thickness of a printed circuit board is an important quality parameter. Data on board thickness (in
inches) are given in Table 5E.1 for 25 samples of three boards each.
Sample Number x1 x2 x3
1 0.0629 0.0636 0.0640
2 0.0630 0.0631 0.0622
3 0.0628 0.0631 0.0633
4 0.0634 0.0630 0.0631
5 0.0619 0.0628 0.0630
6 0.0613 0.0629 0.0634
7 0.0630 0.0639 0.0625
8 0.0628 0.0627 0.0622
9 0.0623 0.0626 0.0633
10 0.0631 0.0631 0.0633
11 0.0635 0.0630 0.0638
12 0.0623 0.0630 0.0630
13 0.0635 0.0631 0.0630
14 0.0645 0.0640 0.0631
15 0.0619 0.0644 0.0632
16 0.0631 0.0627 0.0630
17 0.0616 0.0623 0.0631
18 0.0630 0.0630 0.0626
19 0.0636 0.0631 0.0629
20 0.0640 0.0635 0.0629
21 0.0628 0.0625 0.0616
22 0.0615 0.0625 0.0619
23 0.0630 0.0632 0.0630
24 0.0635 0.0629 0.0635
25 0.0623 0.0629 0.0630
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of board thickness. Xbar chart: center line X̄ = 0.062952, LCL = 0.062011, with one point flagged out of control; R chart: UCL = 0.002368, R̄ = 0.00092, LCL = 0, with one point flagged out of control.]
$\hat{\sigma}=\bar{R}/d_2=0.00092/1.693=0.000543$
c. What are the limits that you would expect to contain nearly all the process measurements?
We expect 99.7% of points within a normal distribution to fall within ± 3 standard deviations of the mean.
Therefore, we expect 99.7% of process measurements to fall within $\bar{\bar{x}}\pm 3\hat{\sigma}$.
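These limits follow directly from the tabled control-chart constants; a minimal sketch for subgroups of size 3 (A2 = 1.023, D3 = 0, D4 = 2.574, and d2 = 1.693 are the standard values for n = 3):

```python
A2, D3, D4, d2 = 1.023, 0.0, 2.574, 1.693    # standard constants for n = 3

xbarbar, rbar = 0.062952, 0.00092            # grand mean and average range above

ucl_x = xbarbar + A2 * rbar                  # ~0.063893
lcl_x = xbarbar - A2 * rbar                  # ~0.062011, matching the chart
ucl_r, lcl_r = D4 * rbar, D3 * rbar          # ~0.002368 and 0
sigma_hat = rbar / d2                        # ~0.000543

print(ucl_x, lcl_x, ucl_r, lcl_r, sigma_hat)
print(xbarbar - 3 * sigma_hat, xbarbar + 3 * sigma_hat)  # natural tolerance limits
```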
5.21. The net weight of a soft drink is to be monitored by X and R control charts using a sample size of n=5.
Data for 20 preliminary samples are shown in Table 5E.2.
Sample Number   x1     x2     x3     x4     x5
1               15.8   16.3   16.2   16.1   16.6
2               16.3   15.9   15.9   16.2   16.4
3               16.1   16.2   16.5   16.4   16.3
4               16.3   16.2   15.9   16.4   16.2
5               16.1   16.1   16.4   16.5   16.0
6               16.1   15.8   16.7   16.6   16.4
7               16.1   16.3   16.5   16.1   16.5
8               16.2   16.1   16.2   16.1   16.3
9               16.3   16.2   16.4   16.3   16.5
10              16.6   16.3   16.4   16.1   16.5
11              16.2   16.4   15.9   16.3   16.4
12              15.9   16.6   16.7   16.2   16.5
13              16.4   16.1   16.6   16.4   16.1
14              16.5   16.3   16.2   16.3   16.4
15              16.4   16.1   16.3   16.2   16.2
16              16.0   16.2   16.3   16.3   16.2
17              16.4   16.2   16.4   16.3   16.2
18              16.0   16.2   16.4   16.5   16.1
19              16.4   16.0   16.3   16.4   16.4
20              16.4   16.4   16.5   16.0   15.8
a. Set up X and R control charts using these data. Does the process exhibit statistical control?
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of net weight. Xbar chart: X̄ = 16.268, LCL = 15.9940 (UCL = 16.268 + 0.577 × 0.475 ≈ 16.542); R chart: UCL = 1.004, R̄ = 0.475, LCL = 0.] No points plot beyond the control limits, so the process exhibits statistical control.
[Figure: normal probability plot of net weight (15.50–17.00 oz).]
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S
[Xbar-S chart of board thickness. Xbar chart: X̄ = 0.062952, LCL = 0.062017, with one point flagged out of control; S chart: UCL = 0.001228, S̄ = 0.000478, LCL = 0, with one point flagged out of control.]
$\hat{\sigma}=\bar{s}/c_4=0.000478/0.8862=0.000539$
c. What are the limits that you would expect to contain nearly all the process measurements?
We expect 99.7% of points within a normal distribution to fall within ± 3 standard deviations of the mean.
Therefore, we expect 99.7% of process measurements to fall within $\bar{\bar{x}}\pm 3\hat{\sigma}=(0.0614,\ 0.0646)$.
a. Set up X −S control charts using these data. Does the process exhibit statistical control?
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S
[Xbar-S chart of net weight. Xbar chart: X̄ = 16.268, LCL = 15.9876 (UCL symmetric at ≈ 16.548); S chart: UCL = 0.4104, S̄ = 0.1965, LCL = 0.] No points plot outside the limits.
[Figure: normal probability plot of net weight (15.50–17.00 oz).]
5.24. Samples of six items are taken from a service process at regular intervals. A quality characteristic is
measured and x and R values are calculated from each sample. After 50 groups of size 6 have been taken
we have x=40 and R=4 . The data is normally distributed.
a. Compute control limits for the X and R control charts. Do all points fall within the control limits?
$\bar{x}$ chart: $UCL=\bar{\bar{x}}+A_2\bar{R}=40+0.483(4)=41.93$, $CL=\bar{\bar{x}}=40$, $LCL=\bar{\bar{x}}-A_2\bar{R}=40-0.483(4)=38.07$
$R$ chart: $UCL=D_4\bar{R}=2.004(4)=8.02$, $CL=\bar{R}=4$, $LCL=D_3\bar{R}=0$ (using $D_3=0$ and $D_4=2.004$ for $n=6$)
b. Estimate the mean and standard deviation of the process. What are the ± 2 standard deviation limits for the
individual data?
$\hat{\mu}=\bar{\bar{x}}=40$
$\hat{\sigma}=\bar{R}/d_2=4/2.534=1.579$ (using $d_2=2.534$ for $n=6$)
The ±2 standard deviation limits for the individual data are $40\pm 2(1.579)=(36.84,\ 43.16)$.
c. If the specification limits are 41 ± 5, do you think the process is capable of producing within these
specifications?
With specifications at $41\pm 5$, i.e. (36, 46): $\hat{C}_p=10/(6\times 1.579)=1.06$, but the process mean (40) is not centered at the nominal value 41, so $\hat{C}_{pk}=(40-36)/(3\times 1.579)=0.84<1$. Some of the process measurements will therefore fall outside the specification limits.
5.25. Table 5E.3 presents 20 subgroups of five measurements on the time it takes to
service a customer.
a. Set up X and R control charts for this process and verify that it is in statistical
control.
Sample Number   x1      x2      x3      x4      x5      x̄      R
1               138.1   110.8   138.7   137.4   125.4   130.1   27.9
2               149.3   142.1   105.0   134.0   92.3    124.5   57.0
3               115.9   135.6   124.2   155.0   117.4   129.6   39.1
4               118.5   116.5   130.2   122.6   100.2   117.6   30.0
5               108.2   123.8   117.1   142.4   150.9   128.5   42.7
6               102.8   112.0   135.0   135.0   145.8   126.1   43.0
7               120.4   84.3    112.8   118.5   119.3   111.0   36.1
8               132.7   151.1   124.0   123.9   105.1   127.4   46.0
9               136.4   126.2   154.7   127.1   173.2   143.5   46.9
10              135.0   115.4   149.1   138.3   130.4   133.6   33.7
11              139.6   127.9   151.1   143.7   110.5   134.6   40.6
12              125.3   160.2   130.4   152.4   165.1   146.7   39.8
13              145.7   101.8   149.5   113.3   151.8   132.4   50.0
14              138.6   139.0   131.9   140.2   141.1   138.1   9.2
15              110.1   114.6   165.1   113.8   139.6   128.7   54.8
16              145.2   101.0   154.6   120.2   117.3   127.6   53.3
17              125.9   135.3   121.5   147.9   105.0   127.1   42.9
18              129.7   97.3    130.5   109.0   150.5   123.4   53.2
19              123.4   150.0   161.6   148.4   154.2   147.5   38.3
20              144.8   138.3   119.6   151.8   142.7   139.4   32.2
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of service time. Xbar chart: UCL = 154.45, X̄ = 130.88, LCL = 107.31; R chart: UCL = 86.40, R̄ = 40.86, LCL = 0.] No points plot outside the control limits, so the process is in statistical control.
b. After establishing the control charts in part (a), 10 new samples were provided in Table 5E.4. Plot the new X̄ and R values using the control chart limits you established in part (a) and draw conclusions.
Sample Number   x1      x2      x3      x4      x5      x̄      R
1               131.0   184.8   182.2   143.3   212.8   170.8   81.8
2               181.3   193.2   180.7   169.1   174.3   179.7   24.0
3               154.8   170.2   168.4   202.7   174.4   174.1   48.0
4               157.5   154.2   169.1   142.2   161.9   157.0   26.9
5               216.3   174.3   166.2   155.5   184.3   179.3   60.8
6               186.9   180.2   149.2   175.2   185.0   175.3   37.8
7               167.8   143.9   157.5   171.8   194.9   167.2   51.0
8               178.2   186.7   142.4   159.4   167.6   166.9   44.2
9               162.6   143.6   132.8   168.9   177.2   157.0   44.5
10              172.1   191.7   203.4   150.4   196.3   182.8   53.0
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of service time with the new samples plotted against the part (a) limits. Xbar chart: UCL = 154.45, X̄ = 130.88, LCL = 107.31, with all ten new sample means flagged above the UCL; R chart: UCL = 86.40, R̄ = 40.86, LCL = 0.]
All new samples exceed the UCL indicating a significant increase in the time to service a customer.
c. Suppose that the assignable cause responsible for the action signals generated in part (b) has been identified and adjustments made to the process to correct its performance. Plot the X̄ and R values from the new subgroups shown in Table 5E.5, which were taken following the adjustment, against the control chart limits established in part (a). What are your conclusions?
Sample Number   x1      x2      x3      x4      x5      x̄      R
1               131.5   143.1   118.5   103.2   121.6   123.6   39.8
2               111.0   127.3   110.4   91.0    143.9   116.7   52.8
3               129.8   98.3    134.0   105.1   133.1   120.1   35.7
4               145.2   132.8   106.1   131.0   99.2    122.8   46.0
5               114.6   111.0   108.8   177.5   121.6   126.7   68.7
6               125.2   86.4    64.4    137.1   117.5   106.1   72.6
7               145.9   109.5   84.9    129.8   110.6   116.1   61.0
8               114.0   135.4   83.2    123.6   107.6   112.8   52.2
9               85.8    156.3   119.7   96.2    153.0   122.2   70.6
10              107.4   148.7   127.4   125.0   127.5   127.2   41.3
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of service time after the adjustment, with limits recalculated from the new data. Xbar chart: UCL = 153.18, X̄ = 127.07, LCL = 100.95; R chart: UCL = 95.7, R̄ = 45.3, LCL = 0. No points are flagged out of control.] The mean service time has returned to roughly its earlier level, so the adjustment appears to have corrected the process.
5.26. Parts manufactured by an injection molding process are subjected to a compressive strength test. Twenty
samples of five parts each are collected, and the compressive strengths (in psi) are shown in Table 5E.6.
Sample
x1 x2 x3 x4 x5 x R
Number
1 83.0 81.2 78.7 75.7 77.0 79.1 7.3
2 88.6 78.3 78.8 71.0 84.2 80.2 17.6
3 85.7 75.8 84.3 75.2 81.0 80.4 10.4
4 80.8 74.4 82.5 74.1 75.7 77.5 8.4
5 83.4 78.4 82.6 78.2 78.9 80.3 5.2
6 75.3 79.9 87.3 89.7 81.8 82.8 14.5
7 74.5 78.0 80.8 73.4 79.7 77.3 7.4
8 79.2 84.4 81.5 86.0 74.5 81.1 11.4
9 80.5 86.2 76.2 83.9 80.2 81.4 9.9
10 75.7 75.2 71.1 82.1 74.3 75.7 10.9
11 80.0 81.5 78.4 73.8 78.1 78.4 7.7
12 80.6 81.8 79.3 73.8 81.7 79.4 8.0
13 82.7 81.3 79.1 82.0 79.5 80.9 3.6
14 79.2 74.9 78.6 77.7 75.3 77.1 4.3
15 85.5 82.1 82.8 73.4 71.7 79.1 13.8
16 78.8 79.6 80.2 79.1 80.8 79.7 2.0
17 82.1 78.2 75.5 78.2 82.1 79.2 6.6
18 84.5 76.9 83.5 81.2 79.2 81.1 7.6
19 79.0 77.8 81.2 84.4 81.6 80.8 6.6
20 84.5 73.1 78.6 78.7 80.6 79.1 11.4
NOTE: There is a typo in sample 9, value $x_4$. The value in the book, 64.1, does not match the sample mean and sample range for that sample; $x_4=83.9$ is used in this solution.
a. Establish X and R control charts for compressive strength using these data. Is the process in statistical
control?
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Xbar-R chart of compressive strength. Xbar chart: UCL = 84.58, X̄ = 79.53, LCL = 74.49; R chart: UCL = 18.49, R̄ = 8.75, LCL = 0.] No points plot outside the limits; the process is in statistical control.
b. After establishing the control charts in part (a), 15 new subgroups were collected; the compressive
strengths are shown in Table 5E.7. Plot the X and R values against the control limits from part (a) and draw
conclusions.
Sample
x1 x2 x3 x4 x5 x R
Number
1 68.9 81.5 78.2 80.8 81.5 78.2 12.6
2 69.8 68.6 80.4 84.3 83.9 77.4 15.7
3 78.5 85.2 78.4 80.3 81.7 80.8 6.8
4 76.9 86.1 86.9 94.4 83.9 85.6 17.5
5 93.6 81.6 87.8 79.6 71.0 82.7 22.5
6 65.5 86.8 72.4 82.6 71.4 75.9 21.3
7 78.1 65.7 83.7 93.7 93.4 82.9 27.9
8 74.9 72.6 81.6 87.2 72.7 77.8 14.6
9 78.1 77.1 67.0 75.7 76.8 74.9 11.0
10 78.7 85.4 77.7 90.7 76.7 81.9 14.0
11 85.0 60.2 68.5 71.1 82.4 73.4 24.9
12 86.4 79.2 79.8 86.0 75.4 81.3 10.9
13 78.5 99.0 78.3 71.4 81.8 81.7 27.6
14 68.8 62.0 82.0 77.5 76.1 73.3 19.9
15 83.0 83.7 73.1 82.2 95.3 83.5 22.2
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
In Xbar-R Options Estimate Tab, use the “omit following subgroups when estimating parameters” to plot
the X and R values against the control limits from part (a).
[Xbar-R chart of compressive strength with the 15 new subgroups plotted against the part (a) limits: several sample means and many sample ranges are flagged beyond the limits (Xbar: UCL = 84.58, X̄ = 79.53, LCL = 74.49; R: UCL = 18.49, R̄ = 8.75, LCL = 0).]
The control chart indicates that the process is in statistical control until the 25th sample, when the process variability goes out of control. While the 32nd sample is within the control limits on the R chart, it fails Western Electric rule 4: it is the 9th consecutive sample above the center line. This indicates the process variability has become unstable. There are also points where the sample mean is out of control.
a. Establish X and S control charts for compressive strength using these data. Is the process in statistical
control? After establishing the control charts, 15 new subgroups were collected; the compressive strengths
are shown in Table 5E.7. Plot the X and S values against the previous control limits and draw conclusions.
MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S
[Xbar-S chart of compressive strength. Xbar chart: X̄ = 79.53, LCL = 74.43; S chart: UCL = 7.464, S̄ = 3.573, LCL = 0. No points plot outside the limits for the first 20 subgroups.]
[Xbar-S chart including the 15 new subgroups plotted against the original limits. Xbar chart: X̄ = 79.53, LCL = 74.43, with out-of-control points flagged; S chart: UCL = 7.46, S̄ = 3.57, LCL = 0, with several points flagged above the UCL.]
The new subgroups are not in control. The S chart first detects an out-of-control point at the 22nd sample.
b. Does the S chart detect the shift in process variability more quickly than the R chart did originally in part
(b) of Exercise 5.26?
The S chart does detect the out-of-control behavior faster than the R chart: the R chart detects the shift beginning at sample 25, while the S chart detects it beginning at sample 22.
5.28. One-pound coffee cans are filled by a machine, sealed, and then weighed by a local coffee store. After
adjusting for the weight of the can, any package that weighs less than 16 oz is cut out of the conveyor. The
weights of 25 successive cans are shown in Table 5E.8. Set up a moving range control chart and a control
chart for individuals. Estimate the mean and standard deviation of the amount of coffee packed in each can.
Is it reasonable to assume that weight is normally distributed? If the process remains in statistical control at
this level, what percentage of cans will be underfilled?
Can Number   Weight    Can Number   Weight
1            16.11     14           16.12
2            16.08     15           16.10
3            16.12     16           16.08
4            16.10     17           16.13
5            16.10     18           16.15
6            16.11     19           16.12
7            16.12     20           16.10
8            16.09     21           16.08
9            16.12     22           16.07
10           16.10     23           16.11
11           16.09     24           16.13
12           16.07     25           16.10
13           16.13
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[I-MR chart of can weight. Individuals chart: UCL = 16.1684, X̄ = 16.1052, LCL = 16.0420; moving range chart: UCL = 0.07760, M̄R = 0.02375, LCL = 0. No points plot outside the limits.]
[Figure: normal probability plot of can weight (16.050–16.175 oz).]
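The estimates the exercise asks for follow from the chart statistics; the following is a sketch using the 25 can weights (d2 = 1.128 for moving ranges of span 2). The underfill percentage is not stated in the manual, so the final line is a completion under the normality assumption:

```python
import numpy as np
from scipy import stats

weights = np.array([16.11, 16.08, 16.12, 16.10, 16.10, 16.11, 16.12, 16.09,
                    16.12, 16.10, 16.09, 16.07, 16.13, 16.12, 16.10, 16.08,
                    16.13, 16.15, 16.12, 16.10, 16.08, 16.07, 16.11, 16.13, 16.10])
mr = np.abs(np.diff(weights))          # moving ranges of span 2
mu_hat = weights.mean()                # ~16.1052, the I-chart center line
sigma_hat = mr.mean() / 1.128          # MRbar / d2 ~ 0.0237 / 1.128 ~ 0.021

p_under = stats.norm.cdf((16.0 - mu_hat) / sigma_hat)
print(mu_hat, sigma_hat, p_under)      # underfill probability ~3e-7, essentially zero
```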
5.29. Fifteen successive heats of a steel alloy are tested for hardness. The resulting data
are shown in Table 5E.9. Set up a control chart for the moving range and a control
chart for individual hardness measurements. Is it reasonable to assume that
hardness is normally distributed?
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[I-MR chart of hardness. Individuals chart: X̄ = 53.27, LCL = 44.72; moving range chart: UCL = 10.50, M̄R = 3.21, LCL = 0.]
[Figure: normal probability plot of hardness (45.0–62.5).]
5.30. The viscosity of a polymer is measured hourly. Measurements for the last 20 hours are shown as follows:
[Figure: normal probability plot of viscosity (2500–3400).]
b. Set up a control chart on viscosity and a moving range chart. Does the process exhibit statistical control?
[I-MR chart of viscosity. Individuals chart: X̄ = 2928.9, LCL = 2534.9; moving range chart: UCL = 484.1, M̄R = 148.2, LCL = 0.] No points plot outside the limits, so the process exhibits statistical control.
5.31. Continuation of Exercise 5.30. The next five measurements on viscosity are 3163, 3199, 3054, 3147, and 3156. Do these measurements indicate that the process is in statistical control?
[I-MR chart of viscosity with the five new measurements appended, plotted against the Exercise 5.30 limits (individuals: X̄ = 2928.9, LCL = 2534.9; moving range: UCL = 484.1, M̄R = 148.2).] No new points are flagged out of control, so the process remains in statistical control.
5.32. A machine is used to fill cans with an energy drink. A single sample can is selected every hour and the
weight of the can is obtained. Since the filling process is automated, it has very stable variability, and long-
term experience indicates that σ = 0.05 oz. The individual observations for 24 hours of operation are shown
in Table 5E.11.
a. Assuming that the process target is 8.02 oz, create a CUSUM chart for this process. Design the chart using
the standardized values h=4.77 and k =0.5 .
Sample Number   x       Sample Number   x
1               8.00    13              8.05
2               8.01    14              8.04
3               8.02    15              8.03
4               8.01    16              8.05
5               8.00    17              8.06
6               8.01    18              8.04
7               8.06    19              8.05
8               8.07    20              8.06
9               8.01    21              8.04
10              8.04    22              8.02
11              8.02    23              8.03
12              8.01    24              8.05
MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM
[CUSUM chart of can weight with h = 4.77 and k = 0.5: UCL = H = 0.2385, LCL = −0.2385.] The cumulative sums stay within the decision interval, so no shift in the mean is signaled.
$\overline{MR}=0.0170,\qquad \hat{\sigma}=\overline{MR}/d_2=0.0170/1.128=0.0151$
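A minimal tabular-CUSUM sketch for this exercise (K = kσ and H = hσ; with these data neither one-sided sum crosses H = 0.2385, consistent with the chart above):

```python
mu0, sigma, k, h = 8.02, 0.05, 0.5, 4.77
K, H = k * sigma, h * sigma                      # K = 0.025, H = 0.2385

x = [8.00, 8.01, 8.02, 8.01, 8.00, 8.01, 8.06, 8.07, 8.01, 8.04, 8.02, 8.01,
     8.05, 8.04, 8.03, 8.05, 8.06, 8.04, 8.05, 8.06, 8.04, 8.02, 8.03, 8.05]

cplus = cminus = 0.0
first_signal = None
for i, xi in enumerate(x, start=1):
    cplus = max(0.0, xi - (mu0 + K) + cplus)     # upper one-sided CUSUM
    cminus = max(0.0, (mu0 - K) - xi + cminus)   # lower one-sided CUSUM
    if first_signal is None and (cplus > H or cminus > H):
        first_signal = i
print(first_signal)   # None: the CUSUM never crosses the decision interval
```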
5.33. Rework Exercise 5.32 using the standardized CUSUM parameters of h = 8.01 and k = 0.25. Compare the results with those obtained previously in Exercise 5.32. What can you say about the theoretical performance of those two CUSUM schemes?
MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM
[CUSUM chart of can weight with h = 8.01 and k = 0.25: UCL = 0.4005, LCL = −0.4005. Again no shift is signaled.]
For the scheme in Exercise 5.32 (h = 4.77, k = 0.5), ARL0 = 370.84; the pair h = 8.01, k = 0.25 is chosen to give the same in-control ARL, so the theoretical in-control performance of the two CUSUM schemes is the same.
5.34. The data in Table 5E.12 are the times it takes for a local payroll company to process checks (in minutes).
The target value for the turnaround times is μ0=950 minutes (two working days).
953   985   949   937   959   948   958   952
945   973   941   946   939   937   955   931
972   955   966   954   948   955   947   928
945   950   966   935   958   927   941   937
975   948   934   941   963   940   938   950
970   957   937   933   973   962   945   970
959   940   946   960   949   963   963   933
MTB > Stat > Control Charts > Variables Charts for Individuals > Moving Range
[Moving range chart of turnaround time: UCL = 44.83, M̄R = 13.72, LCL = 0.]
$\hat{\sigma}=\overline{MR}/d_2=13.72/1.128=12.16$
b. Create a CUSUM chart for this process, using standardized values h = 5 and k = 1/2. Interpret this chart.
$\mu_0=950,\ \hat{\sigma}=12.16,\ k=1/2,\ h=5$
MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM
[CUSUM chart of turnaround time: UCL = H = 60.8, LCL = −60.8.]
The process signals out of control at sample 12. Since the CUSUM counter shows the run began 10 samples earlier, the assignable cause likely occurred around sample 12 − 10 = 2.
5.35. Calcium hardness is measured hourly for a public swimming pool. Data (in ppm) for the last 32 hours are
shown in Table 5E.13 (read down from left). The process target is μ0 = 175 ppm.
MTB > Stat > Control Charts > Variables Charts for Individuals > Moving Range
[Moving range chart of hardness: UCL = 20.76, M̄R = 6.35, LCL = 0, with one point flagged above the UCL.]
$\hat{\sigma}=\overline{MR}/d_2=6.35/1.128=5.63$
b. Construct a CUSUM chart for this process using standardized values of h = 5 and k = 1/2.
[CUSUM chart of hardness: UCL = 28.2, LCL = −28.2; the cumulative sums climb to roughly 300, far above the UCL.] The process is out of control, with calcium hardness drifting well above the target of 175 ppm.
5.36. Reconsider the data in Exercise 5.32. Set up an EWMA control chart with λ=0.2 and L=3 for this
process. Interpret the results.
$UCL=\mu_0+L\sigma\sqrt{\frac{\lambda}{2-\lambda}}=8.02+3(0.05)\sqrt{\frac{0.2}{2-0.2}}=8.07$
$CL=\mu_0=8.02$
$LCL=\mu_0-L\sigma\sqrt{\frac{\lambda}{2-\lambda}}=8.02-3(0.05)\sqrt{\frac{0.2}{2-0.2}}=7.97$
MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of can weight: UCL = 8.0700, CL = 8.02, LCL = 7.9700.] The EWMA statistic stays within the limits, so the process is in statistical control.
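The EWMA recursion and its exact (time-varying) limits can be checked with a few lines; a sketch for λ = 0.2 and L = 3 (the exact limits widen toward the asymptotic values 7.97 and 8.07 used above):

```python
import math

mu0, sigma, lam, L = 8.02, 0.05, 0.2, 3.0
x = [8.00, 8.01, 8.02, 8.01, 8.00, 8.01, 8.06, 8.07, 8.01, 8.04, 8.02, 8.01,
     8.05, 8.04, 8.03, 8.05, 8.06, 8.04, 8.05, 8.06, 8.04, 8.02, 8.03, 8.05]

z = mu0                                      # the EWMA starts at the target
for i, xi in enumerate(x, start=1):
    z = lam * xi + (1 - lam) * z             # z_i = lambda*x_i + (1-lambda)*z_{i-1}
    w = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    flag = "OUT" if abs(z - mu0) > w else ""
    print(f"{i:2d}  z = {z:.4f}  limits = ({mu0 - w:.4f}, {mu0 + w:.4f})  {flag}")
```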
5.37. Reconstruct the control chart in Exercise 5.36 using λ=0.1 and L=3 for this process. Interpret the
results.
$\sigma=0.05,\ \mu_0=8.02,\ \lambda=0.1,\ L=3$
$UCL=\mu_0+L\sigma\sqrt{\frac{\lambda}{2-\lambda}}=8.02+3(0.05)\sqrt{\frac{0.1}{2-0.1}}=8.05$
$CL=\mu_0=8.02$
$LCL=\mu_0-L\sigma\sqrt{\frac{\lambda}{2-\lambda}}=8.02-3(0.05)\sqrt{\frac{0.1}{2-0.1}}=7.99$
MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of can weight with λ = 0.1: UCL = 8.05430, CL = 8.02, LCL = 7.98570.] The EWMA statistic stays within these narrower limits, so the process remains in statistical control; the tighter limits reflect the smaller λ.
5.38. Reconsider the data in Exercise 5.34. Apply an EWMA control chart to these data using λ=0.1 and
L=2.7 .
$\lambda=0.1,\ L=2.7,\ \hat{\sigma}=12.16,\ CL=\mu_0=950$
$UCL=\mu_0+L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=950+2.7(12.16)\sqrt{\frac{0.1}{2-0.1}}=957.53$
$LCL=\mu_0-L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=950-2.7(12.16)\sqrt{\frac{0.1}{2-0.1}}=942.47$
MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of turnaround time: UCL = 957.53, CL = 950, LCL = 942.47.]
TEST 1. One point more than 2.70 standard deviations from center line.
Test Failed at points: 8, 12, 13
The EWMA chart signals out of control beginning at observation 8.
5.39. Reconstruct the control chart in Exercise 5.34 using λ=0.4 and L=3 . Compare this chart to the one
constructed in Exercise 5.38.
$UCL=\mu_0+L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=950+3(12.16)\sqrt{\frac{0.4}{2-0.4}}=968.25$
$LCL=\mu_0-L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=950-3(12.16)\sqrt{\frac{0.4}{2-0.4}}=931.75$
MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of turnaround time with λ = 0.4: UCL = 968.25, CL = 950, LCL = 931.75.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 70
The λ = 0.4 EWMA chart does not detect an out-of-control point until observation 70, compared to observation 8 for the λ = 0.1 chart in Exercise 5.38; the larger λ and wider limits make this chart less sensitive to a small sustained shift.
5.40. Reconsider the data in Exercise 5.35. Set up and apply an EWMA control chart to these data using
λ=0.05 and L=2.6 .
$UCL=\mu_0+L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=175+2.6(5.634)\sqrt{\frac{0.05}{2-0.05}}=177.30$
$LCL=\mu_0-L\hat{\sigma}\sqrt{\frac{\lambda}{2-\lambda}}=175-2.6(5.634)\sqrt{\frac{0.05}{2-0.05}}=172.70$
MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of hardness: UCL = 177.30, CL = 175, LCL = 172.70; the EWMA climbs well above the UCL, to roughly 185.]
The process is out of control: the estimated process mean ($\hat{\mu}=183.594$) is remarkably larger than the process target of $\mu_0=175$.
5.41. A process is in control with $\bar{\bar{x}}=100$, $\bar{s}=1.05$, and $n=5$. The process specifications are at 95 ± 10. The quality characteristic has a normal distribution.
$\hat{\sigma}=\bar{s}/c_4=1.05/0.9400=1.117$
$\hat{C}_{pu}=\frac{USL-\hat{\mu}}{3\hat{\sigma}}=\frac{105-100}{3(1.117)}=1.49,\qquad \hat{C}_{pl}=\frac{\hat{\mu}-LSL}{3\hat{\sigma}}=\frac{100-85}{3(1.117)}=4.48$
$\hat{C}_{pk}=\min(\hat{C}_{pl},\hat{C}_{pu})=1.49$
c. How much could the fallout in the process be reduced if the process were corrected to operate at the
nominal specification?
$\hat{p}=P\!\left(Z<\frac{LSL-\hat{\mu}}{\hat{\sigma}}\right)+\left[1-P\!\left(Z<\frac{USL-\hat{\mu}}{\hat{\sigma}}\right)\right]=P\!\left(Z<\frac{85-100}{1.117}\right)+\left[1-P\!\left(Z<\frac{105-100}{1.117}\right)\right]$
$=P(Z<-13.429)+\left[1-P(Z<4.476)\right]=0.0000+\left[1-0.999996\right]=0.000004$
$\hat{p}_{Potential}=P\!\left(Z<\frac{85-95}{1.117}\right)+\left[1-P\!\left(Z<\frac{105-95}{1.117}\right)\right]=P(Z<-8.953)+\left[1-P(Z<8.953)\right]\approx 0$
Centering the process at the nominal value would therefore reduce the fallout from about 4 ppm to essentially zero.
5.42. A process is in statistical control with x́=199 and R=3.5. The control chart uses a sample size of n=4 .
Specifications are at 200 ± 8. The quality characteristic is normally distributed.
USL=200+8=208 , LSL=200−8=192
a. Estimate the potential capability of the process.
$\hat{\sigma}=\bar{R}/d_2=3.5/2.059=1.6998$ (using $d_2=2.059$ for $n=4$)
$\hat{C}_p=\frac{USL-LSL}{6\hat{\sigma}}=\frac{208-192}{6(1.6998)}=1.57$
b. Estimate the actual capability of the process.
$\hat{C}_{pl}=\frac{\hat{\mu}-LSL}{3\hat{\sigma}}=\frac{199-192}{3(1.6998)}=1.373,\qquad \hat{C}_{pu}=\frac{USL-\hat{\mu}}{3\hat{\sigma}}=\frac{208-199}{3(1.6998)}=1.765$
$\hat{C}_{pk}=\min(\hat{C}_{pu},\hat{C}_{pl})=\min(1.765,\ 1.373)=1.373$
c. How much improvement could be made in process performance if the mean could be centered at the
nominal value?
$\hat{p}=P\!\left(Z<\frac{LSL-\hat{\mu}}{\hat{\sigma}}\right)+\left[1-P\!\left(Z<\frac{USL-\hat{\mu}}{\hat{\sigma}}\right)\right]=P\!\left(Z<\frac{192-199}{1.6998}\right)+\left[1-P\!\left(Z<\frac{208-199}{1.6998}\right)\right]$
$=P(Z<-4.118)+\left[1-P(Z<5.295)\right]=0.000019+\left[1-1\right]=0.000019$
If the process mean could be centered at the specification target, the fraction nonconforming would be:
$\hat{p}_{Potential}=P\!\left(Z<\frac{192-200}{1.6998}\right)+\left[1-P\!\left(Z<\frac{208-200}{1.6998}\right)\right]=P(Z<-4.706)+\left[1-P(Z<4.706)\right]\approx 0.0000025$
5.43. A process is in statistical control with $\bar{\bar{x}}=39.7$ and $\bar{R}=2.5$. The control chart uses a sample size of n = 2. Specifications are at 40 ± 5. The quality characteristic is normally distributed.
$\hat{\sigma}=\bar{R}/d_2=2.5/1.128=2.216$ (using $d_2=1.128$ for $n=2$)
$\hat{C}_{pl}=\frac{\hat{\mu}-LSL}{3\hat{\sigma}}=\frac{39.7-35}{3(2.216)}=0.71,\qquad \hat{C}_{pu}=\frac{USL-\hat{\mu}}{3\hat{\sigma}}=\frac{45-39.7}{3(2.216)}=0.80$
$\hat{C}_{pk}=\min(\hat{C}_{pl},\hat{C}_{pu})=0.71$
c. How much improvement could be made in process performance if the mean could be centered at the
nominal value?
$\hat{p}=P\!\left(Z<\frac{LSL-\hat{\mu}}{\hat{\sigma}}\right)+\left[1-P\!\left(Z<\frac{USL-\hat{\mu}}{\hat{\sigma}}\right)\right]=P\!\left(Z<\frac{35-39.7}{2.216}\right)+\left[1-P\!\left(Z<\frac{45-39.7}{2.216}\right)\right]$
$=P(Z<-2.12094)+\left[1-P(Z<2.39170)\right]=0.0169634+\left[1-0.991615\right]=0.0253$
If the process mean could be centered at the specification target, the fraction nonconforming would be:
$\hat{p}_{Potential}=2P\!\left(Z<\frac{35-40}{2.216}\right)=2P(Z<-2.25632)=2(0.0120253)=0.0241$
Centering the mean would reduce the fallout only slightly, from 0.0253 to 0.0241; the larger problem is the process variability itself.
5.44. A process is in control with x́=75 and s=2. The process specifications are at 80 ± 8. The sample size is
n=5.
$n=5,\ \hat{\mu}=\bar{\bar{x}}=75,\ \bar{s}=2,\ \hat{\sigma}_x=\bar{s}/c_4=2/0.9400=2.128$
USL=80+8=88 , LSL=80−8=72
a. Estimate the potential capability.
$\hat{C}_p=\frac{USL-LSL}{6\hat{\sigma}}=\frac{88-72}{6(2.128)}=1.25$
b. Estimate the actual capability.
$\hat{C}_{pl}=\frac{\hat{\mu}-LSL}{3\hat{\sigma}}=\frac{75-72}{3(2.128)}=0.47,\qquad \hat{C}_{pu}=\frac{USL-\hat{\mu}}{3\hat{\sigma}}=\frac{88-75}{3(2.128)}=2.04$
$\hat{C}_{pk}=\min(\hat{C}_{pl},\hat{C}_{pu})=0.47$
c. How much could process fallout be reduced by shifting the mean to the nominal dimension? Assume that
the quality characteristic is normally distributed.
The current fallout, essentially all below the LSL, is $P\!\left(Z<\frac{72-75}{2.128}\right)=P(Z<-1.41)=0.0793$. If the process mean could be centered at the specification target, the fraction nonconforming would be:
$\hat{p}_{Potential}=2P\!\left(Z<\frac{72-80}{2.128}\right)=2P(Z<-3.75940)=2(0.0000852)=0.0001704$
Centering the mean would therefore reduce the fallout from about 7.9% to about 0.02%.
5.45. The weights of nominal 1-kg containers of a concentrated chemical ingredient are shown in Table 5E.14.
Prepare a normal probability plot of the data and estimate process capability.
[Figure: normal probability plot of container weight (0.92–1.08 kg), with the 50th and 84th percentiles marked.]
A normal probability plot of the 1-kg container weights shows that the normality assumption for the data is
reasonable.
$\bar{x}\approx p_{50}=0.9975,\qquad p_{84}=1.0200$
$\hat{\sigma}=p_{84}-p_{50}=1.0200-0.9975=0.0225$
$6\hat{\sigma}=6(0.0225)=0.1350$
5.46. Consider the package weight data in Exercise 5.45. Suppose there is a lower specification at 0.985 kg.
Calculate an appropriate process capability ratio for this material. What percentage of the packages
produced by this process is estimated to be below the specification limit?
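The manual does not show a worked solution for this exercise. A sketch, assuming the Exercise 5.45 estimates x̄ ≈ 0.9975 kg and σ̂ ≈ 0.0225 kg carry over: with only a lower specification, the one-sided ratio Ĉpl is the appropriate capability measure.

```python
from scipy import stats

xbar, sigma_hat, lsl = 0.9975, 0.0225, 0.985

c_pl = (xbar - lsl) / (3 * sigma_hat)               # one-sided capability ratio
p_below = stats.norm.cdf((lsl - xbar) / sigma_hat)  # fraction below the LSL
print(c_pl, p_below)                                # ~0.185 and ~0.289
```

Under these assumptions the process is badly incapable with respect to the lower specification, with roughly 29% of packages estimated to fall below 0.985 kg.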
5.47. The height of the disk used in a computer disk drive assembly is a critical quality
characteristic. Table 5E.15 gives the heights (in mm) of 25 disks randomly selected
from the manufacturing process. Prepare a normal probability plot of the disk height
data and estimate process capability.
[Figure: normal probability plot of disk height (19.97–20.03 mm), with the 50th and 84th percentiles marked at 19.99986 and 20.00905.]
A normal probability plot of computer disk heights shows the distribution is close to normal.
$\bar{x}\approx p_{50}=19.99986,\qquad p_{84}=20.00905$
$\hat{\sigma}=p_{84}-p_{50}=20.00905-19.99986=0.00919$
$6\hat{\sigma}=6(0.00919)=0.0551$
5.48. The length of time required to reimburse employee expense claims is a characteristic that can be used to
describe the performance of the process. Table 5E.16 gives the cycle times (in days) of 30 randomly
selected employee expense claims. Estimate the capability of this process.
5 5 16 17 14 12
8 13 6 12 11 10
18 18 13 12 19 14
17 16 11 22 13 16
10 18 12 12 12 14
[Figure: normal probability plot of cycle time (0–25 days), with $p_{50}=13.2$ and $p_{84}=17.27$ marked.]
A normal probability plot of reimbursement time shows the distribution is approximately normal.
$\bar{x}\approx p_{50}=13.2,\qquad p_{84}=17.27$
$\hat{\sigma}=p_{84}-p_{50}=17.27-13.2=4.07$
$6\hat{\sigma}=6(4.07)=24.42$
5.49. An electric utility tracks the response time to customer reported outages. The data in
Table 5E.17 are a random sample of 40 of the response times (in minutes) for one
operating division of this utility during a single month.
105   80    102   86    94    86    106   110   127   97
110   104   97    128   98    84    97    87    99    94
105   104   84    77    125   85    80    104   103   109
106   115   89    100   96    96    87    100   102   93
[Figure: normal probability plot of response time (60–140 minutes), with $p_{50}=98.78$ and $p_{84}=110.98$ marked.]
Ignoring the three outliers in upper right portion of the graph, the normality assumption is reasonable.
a. Estimate the capability of the utility’s process for responding to customer-reported outages.
$\bar{x}\approx p_{50}=98.78,\qquad p_{84}=110.98$
$\hat{\sigma}=p_{84}-p_{50}=110.98-98.78=12.2$
$6\hat{\sigma}=6(12.2)=73.2$
b. The utility wants to achieve a 90% response rate in under two hours, as response to emergency outages is
an important measure of customer satisfaction. What is the capability of the process with respect to this
objective?
USL = 2 hrs = 120 min
$\hat{p}=P\!\left(Z>\frac{USL-\hat{\mu}}{\hat{\sigma}}\right)=1-P\!\left(Z<\frac{120-98.78}{12.2}\right)=1-P(Z<1.739)=1-0.958983=0.0410$
About 4.1% of responses exceed two hours, so roughly 95.9% are completed within the two-hour objective, which exceeds the 90% goal.
5.50. The failure time in hours of 10 memory devices follows: 1210, 1275, 1400, 1695, 1900, 2105, 2230, 2250,
2500, and 2625. Plot the data on normal probability paper and, if appropriate, estimate the process
capability. Is it safe to estimate the proportion of circuits that fail below 1200 h?
[Figure: normal probability plot of failure time (0–4000 h), with $p_{50}=1919$ and $p_{84}=2423$ marked.]
A normal probability plot of failure time shows the distribution is approximately normal.
$\hat{\sigma}=p_{84}-p_{50}=2423-1919=504$
$6\hat{\sigma}=6(504)=3024$
To use the capability metric to estimate the proportion of circuits that fail below 1200 h, the process must
be in statistical control. As we can see in the following I-MR control charts, the process is not in statistical
control.
[I-MR chart of failure time. Individuals chart: X̄ = 1919, LCL = 1501, with several points flagged out of control; moving range chart: UCL = 513.7, M̄R = 157.2, LCL = 0.]
CHAPTER 6
Exercises
6.1. The data in Table 6E.1 give the number of nonconforming bearing and seal assemblies in samples of size
100. Construct a fraction nonconforming control chart for these data. If any points plot out of control,
assume that assignable causes can be found and determine the revised control limits.
Sample Number   Nonconforming Assemblies   Sample Number   Nonconforming Assemblies
1               7                          11              6
2               4                          12              15
3               1                          13              0
4               3                          14              9
5               6                          15              5
6               8                          16              1
7               10                         17              4
8               5                          18              5
9               2                          19              7
10              7                          20              12
$n=100,\ m=20,\ \sum_{i=1}^{m}D_i=117,\qquad \bar{p}=\frac{\sum D_i}{mn}=\frac{117}{20(100)}=0.0585$
$UCL_p=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.0585+3\sqrt{\frac{0.0585(1-0.0585)}{100}}=0.1289$
$LCL_p=\bar{p}-3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.0585-0.0704\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming assemblies: UCL = 0.1289, P̄ = 0.0585, LCL = 0.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 12
Sample 12 exceeds the UCL. Assuming an assignable cause can be found, remove it and revise the limits:
$n=100,\ m=19,\ \sum D_i=102,\qquad \bar{p}=\frac{102}{19(100)}=0.0537$
$UCL_p=0.0537+3\sqrt{\frac{0.0537(1-0.0537)}{100}}=0.1213$
$LCL_p=0.0537-0.0676\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming assemblies with revised limits: UCL = 0.1213, P̄ = 0.0537, LCL = 0; sample 12 remains flagged but is excluded from the calculations.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 12
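The limit calculation and the revision step for this chart are easy to script; a minimal sketch using the Table 6E.1 counts:

```python
import numpy as np

d = np.array([7, 4, 1, 3, 6, 8, 10, 5, 2, 7, 6, 15, 0, 9, 5, 1, 4, 5, 7, 12])
n = 100

def p_limits(counts, n):
    pbar = counts.sum() / (counts.size * n)
    half = 3 * np.sqrt(pbar * (1 - pbar) / n)
    return pbar, pbar + half, max(0.0, pbar - half)

pbar, ucl, lcl = p_limits(d, n)
print(pbar, ucl, lcl)                    # 0.0585, 0.1289, 0

out = np.where(d / n > ucl)[0]           # sample 12 (index 11) exceeds the UCL
print(p_limits(np.delete(d, out), n))    # revised: 0.0537, 0.1213, 0
```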
$n=150,\ m=20,\ \sum D_i=69,\qquad \bar{p}=\frac{69}{20(150)}=0.023$
$UCL=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.023+3\sqrt{\frac{0.023(1-0.023)}{150}}=0.0597$
$LCL=0.023-0.0367\Rightarrow 0$
[P chart of nonconforming switches: UCL = 0.0597, P̄ = 0.023, LCL = 0.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 9, 17
Samples 9 and 17 exceed the UCL. Assuming assignable causes can be found, remove them and revise the limits:
$n=150,\ m=18,\ \sum D_i=44,\qquad \bar{p}=\frac{44}{18(150)}=0.0163$
$UCL_p=0.0163+3\sqrt{\frac{0.0163(1-0.0163)}{150}}=0.0473$
$LCL_p=0.0163-0.0310\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming switches with revised limits: UCL = 0.0473, P̄ = 0.0163, LCL = 0; sample 1 now plots above the UCL.]
Test Results for P Chart of Nonconforming Switches
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 1, 9, 17
Sample 1 is now out of control, assuming an assignable cause is identified, remove it from control limit
calculations:
$n=150,\ m=17,\ \sum D_i=36,\qquad \bar{p}=\frac{36}{17(150)}=0.01412$
$UCL_p=0.01412+3\sqrt{\frac{0.01412(1-0.01412)}{150}}=0.0430$
$LCL_p=0.01412-0.0289\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming switches with the final revised limits: UCL = 0.0430, P̄ = 0.0141, LCL = 0.]
$\sum_{i=1}^{m}n_i=1{,}000,\ m=10,\ \sum_{i=1}^{m}D_i=60,\qquad \bar{p}=\frac{\sum D_i}{\sum n_i}=\frac{60}{1{,}000}=0.06$
$UCL_i=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n_i}}=0.06+3\sqrt{\frac{0.06(1-0.06)}{n_i}},\qquad LCL_i=\max\left\{0,\ \bar{p}-3\sqrt{\frac{\bar{p}(1-\bar{p})}{n_i}}\right\}$
For example, for $n_1=80$:
$UCL_1=0.06+3\sqrt{\frac{0.06(1-0.06)}{80}}=0.1397,\qquad LCL_1=\max\left\{0,\ 0.06-3\sqrt{\frac{0.06(1-0.06)}{80}}\right\}=0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart with limits varying by sample size (displayed UCL = 0.1331): P̄ = 0.06, LCL = 0.]
Tests performed with unequal sample sizes
6.4. A process that produces titanium forgings for automobile turbocharger wheels is to be controlled through
the use of a fraction nonconforming chart. Initially, one sample of size 150 is taken each day for 20 days,
and the results shown in Table 6E.4 are observed.
Day   Nonconforming Units    Day   Nonconforming Units
1 3 11 2
2 2 12 4
3 4 13 1
4 2 14 3
5 5 15 6
6 2 16 0
7 1 17 1
8 2 18 2
9 0 19 3
10 5 20 2
$n=150,\ m=20,\ \sum D_i=50,\qquad \bar{p}=\frac{50}{20(150)}=0.01667$
$UCL=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.01667+3\sqrt{\frac{0.01667(1-0.01667)}{150}}=0.04802$
$CL=\bar{p}=0.01667$
$LCL=0.01667-3\sqrt{\frac{0.01667(1-0.01667)}{150}}=-0.0147\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming units: UCL = 0.04802, P̄ = 0.01667, LCL = 0.]
The process appears to be in statistical control, so we can use these control limits to monitor future
production.
b. What is the smallest sample size that could be used for this process and still give a positive lower
control limit on the chart?
$LCL>0\ \Rightarrow\ \bar{p}-3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}>0\ \Rightarrow\ n>\frac{9(1-\bar{p})}{\bar{p}}=\frac{9(1-0.01667)}{0.01667}=530.9$
n=531 is the smallest sample size that could be used to give a positive lower control limit.
6.5. A process produces rubber belts in lots of size 2,500. Inspection records on the last 20 lots reveal the data
in Table 6E.5.
Lot Number   Nonconforming Belts    Lot Number   Nonconforming Belts
1            230                    11           456
2            435                    12           394
3            221                    13           285
4            346                    14           331
5            230                    15           198
6            327                    16           414
7            285                    17           131
8            311                    18           269
9            342                    19           221
10           308                    20           407
$n=2500,\ m=20,\ \sum D_i=6141,\qquad \bar{p}=\frac{6141}{20(2500)}=0.1228$
$UCL_p=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.1228+3\sqrt{\frac{0.1228(1-0.1228)}{2500}}=0.1425$
$CL=\bar{p}=0.1228$
$LCL_p=0.1228-3\sqrt{\frac{0.1228(1-0.1228)}{2500}}=0.1031$
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of nonconforming belts: UCL = 0.1425, P̄ = 0.1228, LCL = 0.1031; eleven of the twenty lots plot outside the limits.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 1, 2, 3, 5, 11, 12, 15, 16, 17, 19, 20
Since many subgroups are out of control (11 of 20), the data should not be used to establish control limits
for future production. Instead, the process should be investigated for causes of the wild swings in p.
6.6. Based on the data in Table 6E.6 if an np chart is to be established, what would you recommend as the
center line and control limits? Assume that n=500.
Day   Number of Nonconforming Units
1 3
2 4
3 3
4 2
5 6
6 12
7 5
8 1
9 2
10 2
$n=500,\ m=10,\ \sum D_i=40,\qquad \bar{p}=\frac{40}{10(500)}=0.008$
$CL=n\bar{p}=500(0.008)=4,\qquad UCL=n\bar{p}+3\sqrt{n\bar{p}(1-\bar{p})}=4+3\sqrt{4(0.992)}=9.98,\qquad LCL=4-3\sqrt{4(0.992)}\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > NP
[NP chart of nonconforming units: UCL = 9.98, N̄P = 4, LCL = 0.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 6
Sample 6 is out of control. Assume an assignable cause can be found and remove it from control limit
calculations:
$n=500,\ m=9,\ \sum D_i=28,\qquad \bar{p}=\frac{28}{9(500)}=0.006222$
$CL=n\bar{p}=500(0.006222)=3.11$
MTB > Stat > Control Charts > Attributes Charts > NP
[NP chart with revised limits: UCL = 8.39, N̄P = 3.11, LCL = 0.]
All other sample points remain in control, so the following control limits are recommended for future monitoring: UCL = 8.39, CL = 3.11, LCL = 0.
$m=10,\ n=250,\qquad \bar{p}=\frac{\sum_{i=1}^{10}\hat{p}_i}{m}=\frac{0.044}{10}=0.0044$
$UCL=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.0044+3\sqrt{\frac{0.0044(1-0.0044)}{250}}=0.01696$
$CL=\bar{p}=0.0044$
$LCL=0.0044-0.01256\Rightarrow 0$
Since p6=0.02> UCL=0.01696 , sample 6 is out of control. The shipment is not in statistical control.
MTB > Stat > Control Charts > Attributes Charts > P
[P chart: UCL = 0.01696, P̄ = 0.0044, LCL = 0; sample 6 plots above the UCL.]
6.8. A control chart for the fraction nonconforming is to be established using a center line of p=0.10. What
sample size is required if we wish to detect a shift in the process fraction nonconforming to 0.20 with
probability 0.50?
$\delta=p_{new}-\bar{p}=0.20-0.10=0.10$
$n=\left(\frac{L}{\delta}\right)^2\bar{p}(1-\bar{p})=\left(\frac{3}{0.10}\right)^2(0.10)(1-0.10)=81$
6.9. A maintenance group improves the effectiveness of its repair work by monitoring the number of
maintenance requests that require a second call to complete the repair. Twenty weeks of data are shown in
Table 6E.7.
$n=100,\ m=10,\ \sum_{i=1}^{m}D_i=164,\qquad \bar{p}=\frac{164}{10(100)}=0.164$
$UCL_p=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.164+3\sqrt{\frac{0.164(1-0.164)}{100}}=0.2751$
$LCL_p=\bar{p}-3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.164-3\sqrt{\frac{0.164(1-0.164)}{100}}=0.0529$
[P chart of repair second visits: UCL = 0.2751, P̄ = 0.164, LCL = 0.0529; sample 3 plots above the UCL.]
Sample 3 is out of control. Let us assume there are assignable causes for sample 3, and it can be removed.
$n=100,\ m=9,\ \sum D_i=133,\qquad \bar{p}=\frac{133}{9(100)}=0.1478$
$UCL_p=0.1478+3\sqrt{\frac{0.1478(1-0.1478)}{100}}=0.2542$
$LCL_p=0.1478-3\sqrt{\frac{0.1478(1-0.1478)}{100}}=0.0413$
[P chart with revised limits: UCL = 0.2542, P̄ = 0.1478, LCL = 0.0413.]
The revised control limits for the p chart are: UCL=0.2542 ,CL=0.1478 , LCL=0.0413
There are two approaches for controlling future production. The first is to plot against the revised limits in (a) unless a sample has a different size; in those cases, calculate exact control limits from $\bar{p}\pm 3\sqrt{\bar{p}(1-\bar{p})/n_i}=0.1478\pm 3\sqrt{0.1478(1-0.1478)/n_i}$. The second approach, preferred in many cases, is to plot the standardized values $Z_i=\left(\hat{p}_i-0.1478\right)/\sqrt{0.1478(1-0.1478)/n_i}$ against control limits at ±3.
6.10. Why is the np chart not appropriate with variable sample sizes?
With variable sample sizes, the plotted counts can be misleading because the np chart does not display the relative magnitude of the number of nonconforming units in each sample. For example, suppose $n\hat{p}_i=5$ and $n\hat{p}_{i+1}=7$. Looking only at the counts, there does not seem to be much difference between the two samples. Now suppose the sample sizes were $n_i=100$ and $n_{i+1}=200$: the relative frequencies are 5% and 3.5%, so sample $i$ actually has worse quality.
6.11. A process that produces bearing housings is controlled with a fraction nonconforming control chart, using
sample size n=100 and a center line p=0.02.
$UCL_p=\bar{p}+3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.02+3\sqrt{\frac{0.02(1-0.02)}{100}}=0.062$
$LCL_p=\bar{p}-3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}=0.02-0.042=-0.022\Rightarrow 0$
b. Analyze the ten new samples ( n=100 ) shown in Table 6E.8 for statistical control. What conclusions can
you draw about the process now?
Sample Number   Number Nonconforming   Sample Number   Number Nonconforming
1               5                      6               1
2               2                      7               2
3               3                      8               6
4               8                      9               3
5               4                      10              4
MTB > Stat > Control Charts > Attributes Charts > P
[P chart of bearing housings: UCL = 0.062, P̄ = 0.02, LCL = 0; sample 4 ($\hat{p}=0.08$) plots above the UCL.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 4
Sample 4 exceeds the upper control limit, signaling that the process fraction nonconforming may have shifted upward (the ten new samples give $\bar{p}=0.038$ with $\hat{\sigma}_p=0.0191$).
6.12. Consider the fraction nonconforming control chart in Exercise 6.4. Find the equivalent np chart.
$n=150,\ m=20,\ \sum D_i=50,\qquad \bar{p}=\frac{50}{20(150)}=0.01667$
$CL_{np}=n\bar{p}=150(0.01667)=2.5,\qquad UCL=n\bar{p}+3\sqrt{n\bar{p}(1-\bar{p})}=2.5+3\sqrt{2.5(0.98333)}=7.204,\qquad LCL\Rightarrow 0$
MTB > Stat > Control Charts > Attributes Charts > NP
[NP chart of nonconforming units: UCL = 7.204, N̄P = 2.5, LCL = 0.]
6.13. Consider the fraction nonconforming control chart in Exercise 6.5. Find the equivalent np chart.
$n=2500,\ m=20,\ \sum D_i=6141,\qquad \bar{p}=\frac{6141}{20(2500)}=0.12282$
$CL=n\bar{p}=2500(0.12282)=307.05$
MTB > Stat > Control Charts > Attributes Charts > NP
[NP chart of nonconforming belts: UCL = 356.3, N̄P = 307.1, LCL = 257.8; many lots plot outside the limits.]
The process is not in control; results are the same as for the p chart.
6.14. Surface defects have been counted on 25 rectangular steel plates, and the data are shown in Table 6E.9. Set
up a control chart for nonconformities using these data. Does the process producing the plates appear to be
in statistical control?
$n=25,\ \sum_{i=1}^{25}c_i=59,\qquad \bar{c}=\frac{59}{25}=2.36$
$CL=\bar{c}=2.36,\qquad UCL=\bar{c}+3\sqrt{\bar{c}}=6.969,\qquad LCL=\bar{c}-3\sqrt{\bar{c}}\Rightarrow 0$
[C chart of surface defects: UCL = 6.969, C̄ = 2.36, LCL = 0; sample 13 plots above the UCL.]
The process is not in statistical control; there is an out of control point at sample 13.
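A short check of the c-chart limits used here (3√c̄ above and below the center line, truncated at zero):

```python
import math

cbar = 59 / 25                              # 2.36 defects per plate
ucl = cbar + 3 * math.sqrt(cbar)            # ~6.969
lcl = max(0.0, cbar - 3 * math.sqrt(cbar))  # negative, so 0
print(cbar, ucl, lcl)
```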
6.15. A paper mill uses a control chart to monitor the imperfection in finished rolls of paper. Production output
is inspected for 20 days, and the resulting data are shown in Table 6E.10. Use these data to set up a control
chart for nonconformities per roll of paper. Does the process appear to be in statistical control? What center
line and control limits would you recommend for controlling current production?
Day   Rolls Produced   Total Imperfections    Day   Rolls Produced   Total Imperfections
1     18               12                     11    18               18
2     18               14                     12    18               14
3     24               20                     13    18               9
4     22               18                     14    20               10
5     22               15                     15    20               14
6     22               12                     16    20               13
7     20               11                     17    24               16
8     20               15                     18    24               18
9     20               12                     19    22               20
10    20               10                     20    21               17
$\bar{u}=\frac{\sum_{i=1}^{20}c_i}{\sum_{i=1}^{20}n_i}=\frac{288}{411}=0.7007$
$CL=\bar{u}=0.7007,\qquad UCL_i=\bar{u}+3\sqrt{\bar{u}/n_i},\qquad LCL_i=\bar{u}-3\sqrt{\bar{u}/n_i}$
For example, control limits for the following sample sizes are:
n UCL CL LCL
18 1.2926 0.7007 0.1088
20 1.2622 0.7007 0.1392
22 1.2361 0.7007 0.1653
MTB > Stat > Control Charts > Attributes Charts > U
[U chart of imperfections per roll with per-sample limits (displayed: UCL = 1.249, Ū = 0.701, LCL = 0.153).] No points plot outside their limits, so the process appears to be in statistical control; the limits above, centered at ū = 0.7007, are recommended for controlling current production.
Tests performed with unequal sample sizes
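Because the subgroup sizes differ, each sample gets its own limits; a sketch that recomputes them from the Table 6E.10 data and checks every point:

```python
import math

rolls         = [18, 18, 24, 22, 22, 22, 20, 20, 20, 20,
                 18, 18, 18, 20, 20, 20, 24, 24, 22, 21]
imperfections = [12, 14, 20, 18, 15, 12, 11, 15, 12, 10,
                 18, 14,  9, 10, 14, 13, 16, 18, 20, 17]

ubar = sum(imperfections) / sum(rolls)       # 288/411 ~ 0.7007
any_out = False
for day, (n_i, c_i) in enumerate(zip(rolls, imperfections), start=1):
    half = 3 * math.sqrt(ubar / n_i)         # per-sample half-width
    if not (ubar - half <= c_i / n_i <= ubar + half):
        any_out = True
        print("out of control on day", day)
if not any_out:
    print("all points within their limits")  # matches the in-control conclusion
```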
6.16. Consider the papermaking process in Exercise 6.15. Set up a u chart based on an average sample size to
control this process.
$\bar{n}=\frac{\sum_{i=1}^{20}n_i}{20}=\frac{411}{20}=20.55$
$\bar{u}=\frac{288}{411}=0.7007,\qquad CL=\bar{u}=0.7007$
$UCL=\bar{u}+3\sqrt{\bar{u}/\bar{n}}=0.7007+3\sqrt{0.7007/20.55}=1.255,\qquad LCL=\bar{u}-3\sqrt{\bar{u}/\bar{n}}=0.147$
[U chart based on the average sample size n̄ = 20.55: UCL = 1.255, Ū = 0.701, LCL = 0.147.]
6.17. The number of nonconformities found on final inspection of a tape deck is shown in Table 6E.11. Can you
conclude that the process is in statistical control? What center line and control limits would you
recommend for controlling future production?
$n=18,\ \sum_{i=1}^{18}c_i=27,\qquad \bar{c}=\frac{27}{18}=1.5$
$CL=\bar{c}=1.5,\qquad UCL=\bar{c}+3\sqrt{\bar{c}}=1.5+3\sqrt{1.5}=5.174,\qquad LCL=\bar{c}-3\sqrt{\bar{c}}=1.5-3\sqrt{1.5}=-2.174\Rightarrow 0$
[C chart of nonconformities in tape decks: UCL = 5.174, C̄ = 1.5, LCL = 0.] No points plot outside the limits; the process is in statistical control, and these limits are recommended for controlling future production.
$n=22,\ \sum_{i=1}^{22}c_i=189,\qquad \bar{c}=\frac{189}{22}=8.591$
$CL=\bar{c}=8.591,\qquad UCL=\bar{c}+3\sqrt{\bar{c}}=17.38,\qquad LCL=\bar{c}-3\sqrt{\bar{c}}\Rightarrow 0$
[C chart: UCL = 17.38, C̄ = 8.59, LCL = 0; three points plot above the UCL.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 10, 11, 22
Process is not in statistical control; three subgroups exceed the UCL. Exclude subgroups 10, 11 and 22,
then re-calculate the control limits.
[C chart with subgroups 10, 11, and 22 excluded: UCL = 14.36, C̄ = 6.63, LCL = 0; subgroup 15 now plots above the UCL.]
Subgroup 15 is now out of control, so exclude it as well and update control limits.
$n=18,\ \sum c_i=111,\qquad \bar{c}=\frac{111}{18}=6.167$
$CL=\bar{c}=6.167$
[C chart with subgroups 10, 11, 15, and 22 excluded: UCL = 13.62, C̄ = 6.17, LCL = 0.]
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 10, 11, 15, 22
6.19. Consider the data in Exercise 6.17. Suppose we wish to define a new inspection unit of four tape decks.
a. What are the center line and control limits for a control chart for monitoring future production based on the
total number of defects in the new inspection unit?
The new inspection unit is n=4 of the old unit. A c chart of the total number of nonconformities per
inspection unit is appropriate.
$CL=n\bar{c}=4(1.5)=6,\qquad UCL=n\bar{c}+3\sqrt{n\bar{c}}=6+3\sqrt{6}=13.35,\qquad LCL=n\bar{c}-3\sqrt{n\bar{c}}\Rightarrow 0$
Note that the plot point, $\hat{c}$, is the total number of nonconformities found while inspecting a sample of four tape decks.
b. What are the center line and control limits for a control chart for nonconformities per unit used to monitor
future production?
The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.
$\bar{u}=\frac{\text{total nonconformities}}{\text{number of inspection units}}=\frac{27}{18(1/4)}=6$
$CL=\bar{u}=6,\qquad UCL=\bar{u}+3\sqrt{\bar{u}/n}=6+3\sqrt{6}=13.35,\qquad LCL\Rightarrow 0$
The plot point, $\hat{u}$, is the average number of nonconformities found in four tape decks, and since n = 1, this is the same as the total number of nonconformities.
6.20.
a. What are the center line and control limits for a control chart for monitoring future production based on the total number of nonconformities in the new inspection unit?
The new inspection unit is n = 2500/1000 = 2.5 of the old unit. A c chart of the total number of nonconformities per inspection unit is appropriate.
CL = nc = 2.5(6.167) = 15.42
UCL = nc + 3√(nc) = 15.42 + 3√15.42 = 27.20
LCL = nc − 3√(nc) = 15.42 − 3√15.42 = 3.64
Note that the plot point, ĉ, is the total number of nonconformities found while inspecting a sample 2500 m in length.
b. What are the center line and control limits for a control chart for average nonconformities per unit used to
monitor future production?
The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.
CL=u=15.42
The plot point, û, is the average number of nonconformities found in 2500 m, and since n = 1, this is the same as the total number of nonconformities.
6.21.
n = 4, m = 16
u = (total nonconformities)/(n × m) = 27/(4 × 16) = 27/64 = 0.422
CL = u = 0.422
UCL = u + 3√(u/n) = 0.422 + 3√(0.422/4) = 1.397
LCL = u − 3√(u/n) = 0.422 − 3√(0.422/4) = −0.553 ⇒ 0
[Figure: u chart of nonconformities per transmission: u = 0.422, LCL = 0]
No points plot out of control; the process appears to be in statistical control.
c. Suppose the inspection unit is redefined as eight transmissions. Design an appropriate control chart for
monitoring future production.
The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.
u = (total nonconformities)/(total inspection units) = 27/((16 × 4)/8) = 27/8 = 3.375
CL = u = 3.375
UCL = u + 3√u = 3.375 + 3√3.375 = 8.886
LCL = u − 3√u = 3.375 − 3√3.375 = −2.14 ⇒ 0
The plot point, û, is the average number of nonconformities found in 8 transmissions, and since n = 1, this is the same as the total number of nonconformities.
6.22.
Day   Assemblies Inspected   Total Imperfections     Day   Assemblies Inspected   Total Imperfections
1     2                      10                      6     4                      24
2     4                      30                      7     2                      15
3     2                      18                      8     4                      26
4     1                      10                      9     3                      21
5     3                      20                      10    1                      8
u = (Σ ci)/(Σ ni) = 182/26 = 7
CL = u = 7
UCL = u + 3√(u/ni)
LCL = u − 3√(u/ni)
For example, control limits for the following sample sizes are:
n UCL CL LCL
1 14.937 7 0
2 12.612 7 1.388
3 11.583 7 2.417
4 10.969 7 3.031
MTB > Stat > Control Charts > Attributes Charts > U
[Figure: u chart of imperfections per assembly: u = 7, limits vary with ni]
Tests performed with unequal sample sizes
All points fall within their control limits, so the process appears to be in statistical control.
6.23. The manufacturer wishes to set up a control chart at the final inspection station for a gas water heater.
Defects in workmanship and visual quality features are checked in this inspection. For the past 22 working
days, 176 water heaters were inspected and a total of 924 nonconformities reported.
a. What type of control chart would you recommend here and how would you use it?
Given that 1 water heater is the inspection unit, we can use a c chart to monitor the number of
nonconformities per water heater.
c = 924/176 = 5.25
CL = c = 5.25
UCL = c + 3√c = 5.25 + 3√5.25 = 12.12
LCL = c − 3√c = 5.25 − 3√5.25 = −1.62 ⇒ 0
b. The new inspection unit is n = 2/1 = 2 of the old unit. A c chart of the total number of nonconformities per inspection unit is appropriate.
CL = nc = 2(5.25) = 10.50
UCL = nc + 3√(nc) = 10.50 + 3√10.50 = 20.22
LCL = nc − 3√(nc) = 10.50 − 3√10.50 = 0.779
6.24.
Using Excel:
Ex7.55X Ex7.55alpha
0 =POISSON(0,4,TRUE)
1 =POISSON(1,4,TRUE)
2 =POISSON(2,4,TRUE)
3 =POISSON(3,4,TRUE)
4 =POISSON(4,4,TRUE)
5 =POISSON(5,4,TRUE)
6 =POISSON(A8,4,TRUE)
7 =POISSON(A9,4,TRUE)
8 =POISSON(A10,4,TRUE)
9 =POISSON(A11,4,TRUE)
10 =POISSON(A12,4,TRUE)
11 =POISSON(A13,4,TRUE)
Ex7.55X Ex7.55alpha
0 0.02
1 0.09
2 0.24
3 0.43
4 0.63
5 0.79
6 0.89
7 0.95
8 0.98
9 0.99
10 1.00
11 1.00
A UCL of 9 gives a probability of 0.99 of concluding that the process is in control when in fact it is.
6.25. A control chart for nonconformities is to be established in conjunction with the final inspection of a radio.
The inspection unit is to be a group of ten radios. The average number of nonconformities per radio has, in
the past, been 0.5. Find three-sigma control limits for a c chart based on this size of inspection unit.
n = 10, c = 0.5
CL = nc = 10(0.5) = 5
UCL = nc + 3√(nc) = 5 + 3√5 = 11.71
LCL = nc − 3√(nc) = 5 − 3√5 = −1.71 ⇒ 0
6.26.
n = 6, c = 0.75
CL = nc = 6(0.75) = 4.5
UCL = nc + 3√(nc) = 4.5 + 3√4.5 = 10.86
LCL = nc − 3√(nc) = 4.5 − 3√4.5 = −1.86 ⇒ 0
6.27.
Month   Date   Days Between   (Days Between)^0.2777   (Days Between)^0.25
Jan.    20     -              -                       -
Feb.    23     34             2.662513                2.414736
Feb.    25     2              1.212261                1.189207
March   5      8              1.781509                1.681793
March   10     5              1.563522                1.495349
April   4      25             2.444601                2.236068
May     7      33             2.640531                2.396782
May     24     17             2.196320                2.030543
May     28     4              1.469576                1.414214
June    7      10             1.895396                1.778279
June    16*    9.25           1.854802                1.743956
June    16*    0.5            0.824905                0.840896
June    22     5.25           1.584850                1.513700
June    25     3              1.356740                1.316074
July    6      11             1.946233                1.821160
July    8      2              1.212261                1.189207
July    9      1              1.000000                1.000000
July    26     17             2.196320                2.030543
Sep.    9      45             2.878042                2.590020
Sep.    22     13             2.038647                1.898829
Sep.    24     2              1.212261                1.189207
Oct.    1      7              1.716658                1.626577
Oct.    4      3              1.356740                1.316074
Oct.    8      4              1.469576                1.414214
Oct.    19     11             1.946233                1.821160
Nov.    2      14             2.081037                1.934336
Nov.    25     23             2.388646                2.189939
Dec.    28     33             2.640531                2.396782
Dec.    29     1              1.000000                1.000000
b. Transform the data using the 0.2777 root of the data. Plot the transformed data on a normal probability plot.
Does this plot indicate that the transformation has been successful in making the new data more closely
resemble data from a normal distribution?
Expression : 'DaysBetween'^0.2777
[Figure: normal probability plot of the transformed data (percent versus 'DaysBetween'^0.2777)]
The transformed data points now form a straight line in the normal probability plot; the transformation was
successful in making the new data more closely resemble data from a normal distribution.
c. Transform the data using the fourth root (0.25) of the data. Plot the transformed data on a normal
probability plot. Does this plot indicate that the transformation has been successful in making the new data
resemble more closely data from a normal distribution? Is the plot very different from the one in part (b)?
Expression : 'DaysBetween'^0.25
[Figure: normal probability plot of the transformed data (percent versus 'DaysBetween'^0.25)]
The transformed data points again form a straight line in the normal probability plot; the transformation was successful in making the new data more closely resemble data from a normal distribution. The plot is very similar to the one in part (b).
d. Construct an individual control chart using the transformed data from part (b).
UCL = x + 3(MR/d2) = 1.806 + 3(0.587/1.128) = 3.367
CL = x = 1.806
LCL = x − 3(MR/d2) = 1.806 − 3(0.587/1.128) = 0.245
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[Figure: I-MR chart of the transformed (0.2777 root) days between: individuals chart with UCL = 3.367, x = 1.806, LCL = 0.246; moving range chart with UCL = 1.917, MR = 0.587, LCL = 0]
e. Construct an individual control chart using the transformed data from part (c). How similar is it to the one
you constructed in part (d)?
UCL = x + 3(MR/d2) = 1.695 + 3(0.500/1.128) = 3.025
CL = x = 1.695
LCL = x − 3(MR/d2) = 1.695 − 3(0.500/1.128) = 0.365
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[Figure: I-MR Chart of Transformed (Fourth Root) Days Between: individuals chart with UCL = 3.025, x = 1.695, LCL = 0.365; moving range chart with UCL = 1.634, MR = 0.500, LCL = 0]
The control chart for this transformation leads to similar results as the transformation in part (b).
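The hand calculations for both transformations can be checked with a short Python sketch (not part of the original solution; the days-between values are keyed in from the table above, and d2 = 1.128 for a moving range of span 2):

def imr_limits(x, d2=1.128):
    # Individuals chart limits: x-bar +/- 3 * MR-bar / d2
    x_bar = sum(x) / len(x)
    mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]
    mr_bar = sum(mr) / len(mr)
    half_width = 3 * mr_bar / d2
    return x_bar - half_width, x_bar, x_bar + half_width, mr_bar

days_between = [34, 2, 8, 5, 25, 33, 17, 4, 10, 9.25, 0.5, 5.25, 3, 11,
                2, 1, 17, 45, 13, 2, 7, 3, 4, 11, 14, 23, 33, 1]
for name, power in (("0.2777 root", 0.2777), ("fourth root", 0.25)):
    transformed = [d ** power for d in days_between]
    lcl, cl, ucl, mr_bar = imr_limits(transformed)
    print(f"{name}: LCL={lcl:.3f}, CL={cl:.3f}, UCL={ucl:.3f}, MRbar={mr_bar:.3f}")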
6.28. Suggest at least two nonmanufacturing scenarios in which attributes control charts could be useful for
process monitoring.
Scenario 1: In health care, tracking the number of errors in entries to a patient’s health record.
Scenario 2: In the service industry, tracking the number of errors made in completing transactions.
6.29. What practical difficulties could be encountered in monitoring the days-between-events data?
The events might occur rarely, so it may take a long time to collect enough data to monitor days-between-
events.
6.30. A paper by R. N. Rodriguez (“Health Care Applications of Statistical Process Control: Examples Using the
SAS® System,” SAS Users Group International: Proceedings of the 21st Annual Conference, 1996)
illustrated several informative applications of control charts to the health care environment. One of these
showed how a control chart was employed to analyze the rate of CAT scans performed each month at a
clinic. The data used in this example are shown in Table 6E.16. NSCANB is the number of CAT scans
performed each month and MMSB is the number of members enrolled in the health care plan each month,
in units of member months. “Days” is the number of days in each month. The variable NYRSB converts
MMSB to units of thousand members per year, and is computed as follows: NYRSB = MMSB(Days/30)/12000. NYRSB represents the “area of opportunity.”
Construct an appropriate control chart to monitor the rate at which CAT scans are performed at this clinic.
Month     NSCANB   MMSB    Days   NYRSB
Jan. 94   50       26838   31     2.31105
Feb. 94   44       26903   28     2.09246
Mar 94    71       26895   31     2.31596
Apr. 94   53       26289   30     2.19075
May 94    53       26149   31     2.25172
Jun. 94   40       26185   30     2.18208
July 94   41       26142   31     2.25112
Aug. 94   57       26092   31     2.24681
Sept. 94  49       25958   30     2.16317
Oct. 94   63       25957   31     2.23519
Nov. 94   64       25920   30     2.16000
Dec. 94   62       25907   31     2.23088
Jan. 95   67       26754   31     2.30382
Feb. 95   58       26696   28     2.07636
Mar 95    89       26565   31     2.28754
The variable NYRSB can be thought of as an “inspection unit”, representing an identical “area of
opportunity” for each “sample”. The “process characteristic” to be controlled is the rate of CAT scans. A
u chart which monitors the average number of CAT scans per NYRSB is appropriate.
u = (Σ ci)/(Σ ni) = 861/33.2989 = 25.857
CL = u = 25.857
UCL = u + 3√(u/ni) and LCL = u − 3√(u/ni); the limits vary with ni.
MTB > Stat > Control Charts > Attributes Charts > U
[Figure: U Chart of Scans: UCL = 35.94, u = 25.86, LCL = 15.77 (limits vary with ni); sample 15 plots above the UCL]
Tests performed with unequal sample sizes
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 15
u chart: UCL = 35.94, CL = 25.86, LCL = 15.77
The rate of monthly CAT scans is out of control because sample 15 exceeds the UCL.
6.31. A paper by R. N. Rodriguez (“Health Care Applications of Statistical Process Control: Examples Using the
SAS® System,”, SAS Users Group International: Proceedings of the 21st Annual Conference, 1996)
illustrated several informative applications of control charts to the health care environment. One of these
showed how a control chart was employed to analyze the number of office visits by health care plan
members. The data for clinic E are shown in table 6E.17. The variable NVISITE is the number of visits to
clinic E each month, and MMSE is the number of members enrolled in the health care plan each month, in
units of member months. “Days” is the number of days in each month. The variable NYRSE converts
MMSE to units of thousand members per year and is computed as follows:
NYRSE = MMSE(Days/30)/12000. NYRSE represents the “area of opportunity.” The variable Phase separates the data into two time periods.
The variable NYRSE can be thought of as an “inspection unit”, representing an identical “area of
opportunity” for each “sample”. The “process characteristic” to be controlled is the rate of clinic visits. A
u chart which monitors the average number of clinic visits per NYRSE is appropriate.
a. Use the data from P1 to construct a control chart for monitoring the rate of office visits performed at clinic
E. Does this chart exhibit control?
u = (Σ ci)/(Σ ni) = 12112/5.25923 = 2303.0, summed over the eight phase 1 months
CL = u = 2303.0
UCL = u + 3√(u/ni) = 2303.0 + 3√(2303.0/ni)
LCL = u − 3√(u/ni) = 2303.0 − 3√(2303.0/ni)
MTB > Stat > Control Charts > Attributes Charts > U
[Figure: u chart of phase 1 clinic visits: u = 2303.0, LCL = 2129.5; limits vary with ni]
All eight phase 1 points fall within the control limits, so the chart exhibits control.
Tests performed with unequal sample sizes
b. Plot the data from P2 on the chart constructed in part (a). Is there a difference in the two phases?
MTB > Stat > Control Charts > Attributes Charts > U
In “Estimate” tab, enter 9:15 to omit the phase 2 data when estimating parameters.
[Figure: U Chart of Clinic Visits, phases 1 and 2: UCL = 2465.0, u = 2303.0, LCL = 2141.0; all seven phase 2 points are flagged above the UCL]
Tests performed with unequal sample sizes
There is a difference between the two phases; all the points in phase two are out of control.
For the phase 2 data:
u = (Σ ci)/(Σ ni) = 13202/5.03217 = 2623.52, summed over the seven phase 2 months
CL = u = 2623.52
UCL = u + 3√(u/ni) = 2623.52 + 3√(2623.52/ni)
LCL = u − 3√(u/ni) = 2623.52 − 3√(2623.52/ni)
MTB > Stat > Control Charts > Attributes Charts > U
[Figure: u chart of phase 2 clinic visits: UCL = 2796.5, u = 2623.5, LCL = 2450.6]
Tests performed with unequal sample sizes
6.32. The data in Table 6E.18 are the number of information errors found in customer records in a marketing
company database. Five records were sampled each day. Set up a c chart for the total number of errors. Is
the process in control?
n = 5, m = 25
c = (Σi Σj cij)/m = 698/25 = 27.92
CL = c = 27.92
UCL = c + 3√c = 27.92 + 3√27.92 = 43.77
LCL = c − 3√c = 27.92 − 3√27.92 = 12.07
[Figure: c chart of total information errors per day: UCL = 43.77, c = 27.92, LCL = 12.07; one day is flagged above the UCL]
The process is not in statistical control.
6.33. Kaminski et al. (1992) present data on the number of orders per truck at a distribution center. Some of this
data is shown in Table 6E.19. Set up a c chart for the number of orders per truck. Is the process in control?
n = 32, c = (Σ ci)/n = 542/32 = 16.9375
CL = c = 16.9375
UCL = c + 3√c = 16.9375 + 3√16.9375 = 29.28
LCL = c − 3√c = 16.9375 − 3√16.9375 = 4.59
[Figure: c chart of orders per truck: UCL = 29.28, c = 16.94, LCL = 4.59; several trucks are flagged outside the limits]
The process is not in statistical control.
CHAPTER 7
Exercises
7.1. Draw the type-B OC curve for the single-sampling plan n = 50, c = 1.
Pa = P{d ≤ c} = Σ(d=0 to 1) [50!/(d!(50 − d)!)] p^d (1 − p)^(50−d)
Excel Formulas:
Excel Results:
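Since the Excel table is not reproduced here, an equivalent Python sketch (illustrative only, not from the text) computes points on the type-B OC curve:

from math import comb

def pa(p, n=50, c=1):
    # P{d <= c} for d ~ Binomial(n, p)
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

for p in (0.001, 0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.3f}   Pa = {pa(p):.4f}")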
7.2. Draw the type-B OC curve for the single-sampling plan n=100 , c=2.
Pa = P{d ≤ c} = Σ(d=0 to 2) [100!/(d!(100 − d)!)] p^d (1 − p)^(100−d)
Excel Formulas:
Excel Results:
7.3. Suppose that a product is shipped in lots of size N=5,000 . The receiving inspection
procedure used is single-sampling with n=50 and c=1 .
a. Draw the type-A OC curve for this plan.
Hypergeometric distribution (type-A):
Pa = P{d ≤ 1} = Σ(d=0 to 1) [C(k, d) C(5000 − k, 50 − d)] / C(5000, 50), where k = pN
Excel formulas:
A E F G H
1 N= 5000
2 n= 50
3 c= 1
4 d= 0 1
5
6 p D f(d=0) f(d=1) Pr{d<=1}
7 0.001 =INT(D7) =HYPGEOMDIST(E$4,$E$2,$E7,$E$1) =HYPGEOMDIST(F$4,$E$2,$E7,$E$1) =F7+G7
8 0.002 =INT(D8) =HYPGEOMDIST(E$4,$E$2,$E8,$E$1) =HYPGEOMDIST(F$4,$E$2,$E8,$E$1) =F8+G8
9 0.003 =INT(D9) =HYPGEOMDIST(E$4,$E$2,$E9,$E$1) =HYPGEOMDIST(F$4,$E$2,$E9,$E$1) =F9+G9
10 0.004 =INT(D10) =HYPGEOMDIST(E$4,$E$2,$E10,$E$1) =HYPGEOMDIST(F$4,$E$2,$E10,$E$1) =F10+G10
11 0.005 =INT(D11) =HYPGEOMDIST(E$4,$E$2,$E11,$E$1) =HYPGEOMDIST(F$4,$E$2,$E11,$E$1) =F11+G11
12 0.006 =INT(D12) =HYPGEOMDIST(E$4,$E$2,$E12,$E$1) =HYPGEOMDIST(F$4,$E$2,$E12,$E$1) =F12+G12
13 0.007 =INT(D13) =HYPGEOMDIST(E$4,$E$2,$E13,$E$1) =HYPGEOMDIST(F$4,$E$2,$E13,$E$1) =F13+G13
14 0.0075 =INT(D14) =HYPGEOMDIST(E$4,$E$2,$E14,$E$1) =HYPGEOMDIST(F$4,$E$2,$E14,$E$1) =F14+G14
15 0.008 =INT(D15) =HYPGEOMDIST(E$4,$E$2,$E15,$E$1) =HYPGEOMDIST(F$4,$E$2,$E15,$E$1) =F15+G15
16 0.009 =INT(D16) =HYPGEOMDIST(E$4,$E$2,$E16,$E$1) =HYPGEOMDIST(F$4,$E$2,$E16,$E$1) =F16+G16
17 …
Excel results:
A D E F G H
5 hypergeometric - Type A
6 p N*p D f(d=0) f(d=1) Pr{d<=1}
7 0.001 5 5 0.95097 0.04807 0.99904
8 0.002 10 10 0.90430 0.09151 0.99581
9 0.003 15 15 0.85988 0.13065 0.99053
10 0.004 20 20 0.81759 0.16581 0.98340
11 0.005 25 25 0.77735 0.19726 0.97461
12 0.006 30 30 0.73905 0.22527 0.96432
13 0.007 35 35 0.70260 0.25011 0.95271
14 0.0075 37.5 37 0.68852 0.25921 0.94773
15 0.008 40 40 0.66791 0.27201 0.93992
16 0.009 45 45 0.63491 0.29118 0.92609
17 0.010 50 50 0.60350 0.30785 0.91135
18 …
Excel graph:
[Figure: type-A OC curve, Pr{d ≤ 1} versus p for 0 ≤ p ≤ 0.15]
b. Draw the type-B OC curve for this plan and compare it to the type-A OC curve found
in part (a).
Excel formulas:
A B C D E
1 N= 5000
2 n= 50
3 c= 1
4 d= 0 1
Excel results:
A B
5 binomial - Type B
6 p Pr{d<=1}
7 0.001 0.99881
8 0.002 0.99540
9 0.003 0.98998
10 0.004 0.98274
11 0.005 0.97387
12 0.006 0.96353
13 0.007 0.95190
14 0.0075 0.94563
15 0.008 0.93910
16 0.009 0.92528
17 0.010 0.91056
18 …
Excel graph:
[Figure: type-B OC curve, Pr{d ≤ 1} versus p for 0 ≤ p ≤ 0.15]
Because n/N = 50/5000 = 0.01 is small, the difference between the two curves is small; either is appropriate.
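An illustrative Python sketch (not from the text) makes the same comparison numerically; the hypergeometric sum is written out directly rather than using a statistics library:

from math import comb

N, n, c = 5000, 50, 1

def pa_type_a(p):
    k = int(p * N)  # number of defectives in the lot
    return sum(comb(k, d) * comb(N - k, n - d) for d in range(c + 1)) / comb(N, n)

def pa_type_b(p):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

for p in (0.001, 0.005, 0.01, 0.05):
    print(f"p={p:.3f}  type A={pa_type_a(p):.5f}  type B={pa_type_b(p):.5f}")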
7.4. Find a single-sampling plan for which p1 = 0.01, α = 0.05, p2 = 0.10, and β = 0.10.
From the binomial nomograph (Figure 7.10), the sampling plan is n=35 and c=1 .
7.5. Find a single-sampling plan for which p1 = 0.05, α = 0.05, p2 = 0.15, and β = 0.10.
From the binomial nomograph (Figure 7.10), the sampling plan is n=80 and c=7 .
7.6. Find a single-sampling plan for which p1 = 0.02, α = 0.01, p2 = 0.06, and β = 0.10.
From the binomial nomograph (Figure 7.10), the sampling plan is n = 300 and c = 11.
7.7.
Excel formulas:
A B C D E F
1 LTPD = 0.05
2
3 N1 = 5000 N2 = 10000
4 n1 = =0.1*C3 n1 = =0.1*F3
5 pmax = 0.02 pmax = 0.02
6 cmax = =C5*C4 cmax = =F5*F4
7 binomial binomial
8 p Pr{d<=10} Pr{reject} Pr{d<=20} Pr{reject}
9 0.001 =BINOMDIST($C$6,$C$4,A9,TRUE) =1-B9 =BINOMDIST($F$6,$F$4,A9,TRUE) =1-E9
10 0.002 =BINOMDIST($C$6,$C$4,A10,TRUE) =1-B10 =BINOMDIST($F$6,$F$4,A10,TRUE) =1-E10
11 0.003 =BINOMDIST($C$6,$C$4,A11,TRUE) =1-B11 =BINOMDIST($F$6,$F$4,A11,TRUE) =1-E11
12 0.004 =BINOMDIST($C$6,$C$4,A12,TRUE) =1-B12 =BINOMDIST($F$6,$F$4,A12,TRUE) =1-E12
13 0.005 =BINOMDIST($C$6,$C$4,A13,TRUE) =1-B13 =BINOMDIST($F$6,$F$4,A13,TRUE) =1-E13
14 0.006 =BINOMDIST($C$6,$C$4,A14,TRUE) =1-B14 =BINOMDIST($F$6,$F$4,A14,TRUE) =1-E14
15 0.007 =BINOMDIST($C$6,$C$4,A15,TRUE) =1-B15 =BINOMDIST($F$6,$F$4,A15,TRUE) =1-E15
16 0.0075 =BINOMDIST($C$6,$C$4,A16,TRUE) =1-B16 =BINOMDIST($F$6,$F$4,A16,TRUE) =1-E16
17 0.008 =BINOMDIST($C$6,$C$4,A17,TRUE) =1-B17 =BINOMDIST($F$6,$F$4,A17,TRUE) =1-E17
18 0.009 =BINOMDIST($C$6,$C$4,A18,TRUE) =1-B18 =BINOMDIST($F$6,$F$4,A18,TRUE) =1-E18
19 0.01 =BINOMDIST($C$6,$C$4,A19,TRUE) =1-B19 =BINOMDIST($F$6,$F$4,A19,TRUE) =1-E19
20 …
Excel results:
A B C D E F G H
8 p Pr{d<=10} Pr{reject} Pr{d<=20} Pr{reject} difference
9 0.0010 1.00000 0.0000 1.00000 0.0000 0.00000
10 0.0020 1.00000 0.0000 1.00000 0.0000 0.00000
11 0.0030 1.00000 0.0000 1.00000 0.0000 0.00000
12 0.0040 0.99999 0.0000 1.00000 0.0000 -0.00001
13 0.0050 0.99994 0.0001 1.00000 0.0000 -0.00006
14 0.0060 0.99972 0.0003 1.00000 0.0000 -0.00027
15 0.0070 0.99903 0.0010 0.99999 0.0000 -0.00095
16 0.0075 0.99834 0.0017 0.99996 0.0000 -0.00163
17 0.0080 0.99729 0.0027 0.99991 0.0001 -0.00263
18 0.0090 0.99359 0.0064 0.99959 0.0004 -0.00600
19 0.0100 0.98676 0.0132 0.99850 0.0015 -0.01175
20 0.0110 0.97545 0.0245 0.99556 0.0044 -0.02010
21 0.0120 0.95837 0.0416 0.98886 0.0111 -0.03049
22 0.0130 0.93444 0.0656 0.97579 0.0242 -0.04135
23 0.0140 0.90298 0.0970 0.95330 0.0467 -0.05031
24 0.0150 0.86386 0.1361 0.91861 0.0814 -0.05474
25 0.0200 0.58304 0.4170 0.55910 0.4409 0.02395
26 0.0250 0.29404 0.7060 0.18221 0.8178 0.11183
27 0.0300 0.11479 0.8852 0.03328 0.9667 0.08151
28 0.0350 0.03631 0.9637 0.00380 0.9962 0.03251
29 0.0400 0.00967 0.9903 0.00030 0.9997 0.00938
30 0.0450 0.00224 0.9978 0.00002 1.0000 0.00222
31 0.0500 0.00046 0.9995 0.00000 1.0000 0.00046
32 …
Lots of N = 5,000 (N = 10,000) units that are 2.5% defective will be accepted only 29.40% (18.22%) of the time. With an LTPD of 5%, this sampling scheme will often reject acceptable lots.
7.8. A company uses a sample size equal to the square root of the lot size. If 1% or less of the items in the
sample are defective, the lot is accepted; otherwise, it is rejected. Submitted lots vary in size from 1,000 to
5,000 units. Comment on the effectiveness of this procedure.
Excel formulas:
Excel Results:
Different sample sizes offer different levels of protection. Since 1% of the sample is less than one item, the company in effect rejects the lot if one or more items in the sample are defective. For example, lots of N = 1,000 (N = 5,000) units that are 2.5% defective will be accepted 45.62% (16.99%) of the time, so the procedure does not provide consistent protection across lot sizes.
7.9. Consider the single-sampling plan found in Exercise 7.4. Suppose that lots of
N=2,000 are submitted. Draw the ATI curve for this plan. Draw the AOQ curve and
find the AOQL.
n = 35, c = 1, N = 2,000
AOQ = Pa p(N − n)/N = (1965/2000) Pa p
AOQL = 0.0234
Excel formulas:
Excel results:
Excel graphs:
[Figures: ATI curve for n = 35, c = 1 (ATI versus p) and AOQ curve for n = 35, c = 1 (AOQ versus p), 0 ≤ p ≤ 0.25]
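The AOQ and ATI curves, and the AOQL, can be reproduced with a short Python sketch (illustrative only; a grid search over p is a reasonable stand-in for reading the AOQL off the curve):

from math import comb

N, n, c = 2000, 35, 1

def pa(p):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

ps = [i / 1000 for i in range(1, 251)]
aoq = [pa(p) * p * (N - n) / N for p in ps]          # average outgoing quality
ati = [n + (1 - pa(p)) * (N - n) for p in ps]        # average total inspection
aoql = max(aoq)
print(f"AOQL = {aoql:.4f} at p = {ps[aoq.index(aoql)]:.3f}")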
7.10. Suppose that a single-sampling plan with n=150 and c=2 is being used for receiving inspection where
the supplier ships the product in lots of size N=3,000.
Note that n/N = 150/3000 = 0.05 ≤ 0.10, so the type-A and type-B OC curves are virtually indistinguishable.
AOQ = Pa p(N − n)/N = [(3000 − 150)/3000] Pa p = 0.95 Pa p
AOQL=0.008676
c. Draw the ATI curve for this plan.
7.11. Suppose that a supplier ships components in lots of size 5,000. A single-sampling
plan with n=50 and c=2 is being used for receiving inspection. Rejected lots are
screened, and all defective items are reworked and returned to the lot.
a. Draw the OC curve for this plan.
Excel formulas:
A B C
1 N = 5000 n = 50
2 c= 2
3 binomial
4 p Pa=Pr{d<=2} Pr{reject}
5 0.001 =BINOMDIST($C$2,$C$1,A5,TRUE) =1-B5
6 0.002 =BINOMDIST($C$2,$C$1,A6,TRUE) =1-B6
7 0.003 =BINOMDIST($C$2,$C$1,A7,TRUE) =1-B7
8 0.004 =BINOMDIST($C$2,$C$1,A8,TRUE) =1-B8
9 0.005 =BINOMDIST($C$2,$C$1,A9,TRUE) =1-B9
10 0.006 =BINOMDIST($C$2,$C$1,A10,TRUE) =1-B10
11 0.007 =BINOMDIST($C$2,$C$1,A11,TRUE) =1-B11
12 0.0075 =BINOMDIST($C$2,$C$1,A12,TRUE) =1-B12
13 0.008 =BINOMDIST($C$2,$C$1,A13,TRUE) =1-B13
14 0.009 =BINOMDIST($C$2,$C$1,A14,TRUE) =1-B14
15 0.01 =BINOMDIST($C$2,$C$1,A15,TRUE) =1-B15
16 …
Excel results:
A B C
1 N = 5000 n = 50
2 c= 2
3 binomial
4 p Pa=Pr{d<=2} Pr{reject}
5 0.0010 0.99998 0.00002
6 0.0020 0.99985 0.00015
7 0.0030 0.99952 0.00048
8 0.0040 0.99891 0.00109
9 0.0050 0.99794 0.00206
10 0.0060 0.99657 0.00343
11 0.0070 0.99474 0.00526
12 0.0075 0.99364 0.00636
13 0.0080 0.99242 0.00758
14 0.0090 0.98957 0.01043
15 0.0100 0.98618 0.01382
16 …
Excel graph:
[Figure: OC curve for n = 50, c = 2, Pa versus fraction defective p, 0 ≤ p ≤ 0.25]
b. Find the level of lot quality that will be rejected 90% of the time.
If P ( Rejection )=0.90 , then, P ( acceptance )=0.10. From the graph in part (a) with Pa=0.10,
p=0.057 will be rejected about 90% of the time.
c. Management has objected to the use of the above sampling procedure and wants to
use a plan with an acceptance number c=0, arguing that this is more consistent with
their Zero Defects program. What do you think of this?
A zero-defects sampling plan, with acceptance number c = 0, will be extremely hard on the vendor because
the Pa is low even if the lot fraction defective is low. Generally, quality improvement begins with the
manufacturing process control, not the sampling plan.
d. Design a single-sampling plan with c=0 that will give a 0.90 probability of rejection
of lots having the quality level found in part (b). Note that the two plans are now
matched at the LTPD point. Draw the OC curve for this plan and compare it to the
one forn=50, c=2 in part (a).
From the nomograph (Figure 7.10), select n = 20, which gives a rejection probability of 1 − 0.11372 = 0.88628 ≈ 0.90 at this quality level. The OC curve for this zero-defects plan is much steeper.
Excel formulas:
E F G
1 N = 5000 n = 20
2 c= 0
3 binomial
4 p Pa=Pr{d<=0} Pr{reject}
5 0.001 =BINOMDIST($G$2,$G$1,A5,TRUE) =1-F5
6 0.002 =BINOMDIST($G$2,$G$1,A6,TRUE) =1-F6
7 0.003 =BINOMDIST($G$2,$G$1,A7,TRUE) =1-F7
8 0.004 =BINOMDIST($G$2,$G$1,A8,TRUE) =1-F8
9 0.005 =BINOMDIST($G$2,$G$1,A9,TRUE) =1-F9
10 0.006 =BINOMDIST($G$2,$G$1,A10,TRUE) =1-F10
11 0.007 =BINOMDIST($G$2,$G$1,A11,TRUE) =1-F11
12 0.0075 =BINOMDIST($G$2,$G$1,A12,TRUE) =1-F12
13 0.008 =BINOMDIST($G$2,$G$1,A13,TRUE) =1-F13
14 0.009 =BINOMDIST($G$2,$G$1,A14,TRUE) =1-F14
15 0.01 =BINOMDIST($G$2,$G$1,A15,TRUE) =1-F15
16 …
Excel results:
E F G
1 N = 5000 n = 20
2 c= 0
3 binomial
4 p Pa=Pr{d<=0} Pr{reject}
5 0.0010 0.98019 0.01981
6 0.0020 0.96075 0.03925
7 0.0030 0.94168 0.05832
8 0.0040 0.92297 0.07703
9 0.0050 0.90461 0.09539
10 0.0060 0.88660 0.11340
11 0.0070 0.86893 0.13107
12 0.0075 0.86022 0.13978
13 0.0080 0.85160 0.14840
14 0.0090 0.83459 0.16541
15 0.0100 0.81791 0.18209
16 …
Excel graph:
[Figure: OC curve for n = 20, c = 0, Pa versus fraction defective p, 0 ≤ p ≤ 0.25]
e. Suppose that incoming lots are 0.5% nonconforming. What is the probability of
rejecting these
lots under both plans? Calculate the ATI at this point for both plans. Which plan do
you prefer? Why?
At p = 0.005, the probability of rejection is 0.09539 under the c = 0 plan but only 0.00206 under the c = 2 plan. The corresponding ATIs are:
ATI(c = 2) = 50 + (1 − 0.99794)(5000 − 50) = 60
ATI(c = 0) = 20 + (1 − 0.90461)(5000 − 20) = 495
The c = 2 plan is preferred because it rejects far fewer good lots and requires much less total inspection at this quality level.
7.12. Draw the primary and supplementary OC curves for a double-sampling plan with
n1=50 ,c 1=2 , n2=100 ,c 2=6. If the incoming lots have fraction nonconforming p=0.05, what is the
probability of acceptance on the first sample? What is the probability of final acceptance? Calculate the
probability of rejection on the first sample.
Excel Formulas:
Excel Results:
Excel graph:
Pa^I = P{d1 ≤ 2 | n1 = 50, p = 0.05} = Σ(d1=0 to 2) [50!/(d1!(50 − d1)!)] (0.05)^d1 (0.95)^(50−d1) = 0.54053
Pa = Pa^I + Pa^II = 0.54053 + 0.07536 = 0.61589
P{reject on first sample} = P{d1 ≥ 7 | n1 = 50, p = 0.05} = 1 − 0.98821 = 0.01179
7.13. Derive an item-by-item sequential-sampling plan for which p1 = 0.01, α = 0.05, p2 = 0.10, and β = 0.10.
k = log[p2(1 − p1)/(p1(1 − p2))] = log[0.10(0.99)/(0.01(0.90))] = 1.0414
h1 = [log((1 − α)/β)]/k = [log(0.95/0.10)]/1.0414 = 0.9389
h2 = [log((1 − β)/α)]/k = [log(0.90/0.05)]/1.0414 = 1.2054
s = [log((1 − p1)/(1 − p2))]/k = [log(0.99/0.90)]/1.0414 = 0.0397
The acceptance line is XA = −h1 + sn; the rejection line is XR = h2 + sn.
α 0.05
Excel formulas:
A B C D E
1 p1 = 0.01 k= =LOG((B2*(1-B1))/(B1*(1-B2)))
2 p2 = 0.1 h1 = =(LOG((1-B3)/B4))/E1
3 alpha = 0.05 h2 = =(LOG((1-B4)/B3))/E1
4 beta = 0.1 s= =LOG((1-B1)/(1-B2))/E1
5
6 n XA XR Acc Rej
7 1 =-$E$2+$E$4*A7 =$E$3+$E$4*A7 n/a 2
8 =A7+1 =-$E$2+$E$4*A8 =$E$3+$E$4*A8 n/a 2
9 =A8+1 =-$E$2+$E$4*A9 =$E$3+$E$4*A9 n/a 2
10 =A9+1 =-$E$2+$E$4*A10 =$E$3+$E$4*A10 n/a 2
11 =A10+1 =-$E$2+$E$4*A11 =$E$3+$E$4*A11 n/a 2
12 =A11+1 =-$E$2+$E$4*A12 =$E$3+$E$4*A12 n/a 2
13 =A12+1 =-$E$2+$E$4*A13 =$E$3+$E$4*A13 n/a 2
14 =A13+1 =-$E$2+$E$4*A14 =$E$3+$E$4*A14 n/a 2
15 =A14+1 =-$E$2+$E$4*A15 =$E$3+$E$4*A15 n/a 2
16 =A15+1 =-$E$2+$E$4*A16 =$E$3+$E$4*A16 n/a 2
17 …
Excel results:
A B C D E
6 n XA XR Acc Rej
7 1 -0.899 1.245 n/a 2
8 2 -0.859 1.285 n/a 2
9 3 -0.820 1.325 n/a 2
10 4 -0.780 1.364 n/a 2
11 5 -0.740 1.404 n/a 2
17 …
27 20 -0.144 2.000 n/a 2
28 21 -0.104 2.040 n/a 3
29 22 -0.064 2.080 n/a 3
30 23 -0.025 2.120 n/a 3
31 24 0.015 2.159 0 3
32 25 0.055 2.199 0 3
33 …
53 45 0.850 2.994 0 3
54 46 0.890 3.034 0 4
55 47 0.929 3.074 0 4
56 48 0.969 3.113 0 4
57 49 1.009 3.153 1 4
58 50 1.049 3.193 1 4
n     Accept   Reject
24    0        3
49    1        4
71    1        5
74    2        5
96    2        6
105   3        6
Note from exercise 7.4, the single sampling plan is n=35 , c=1. Therefore, the item-by-item sequential-
sampling plan would be truncated after the inspection of the 105th unit.
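A Python sketch (not part of the original solution) reproduces the table of acceptance and rejection numbers from k, h1, h2, and s:

import math

p1, alpha, p2, beta = 0.01, 0.05, 0.10, 0.10
k = math.log10((p2 * (1 - p1)) / (p1 * (1 - p2)))   # 1.0414
h1 = math.log10((1 - alpha) / beta) / k             # 0.9389
h2 = math.log10((1 - beta) / alpha) / k             # 1.2054
s = math.log10((1 - p1) / (1 - p2)) / k             # 0.0397

for n in (24, 49, 74, 105):
    acc = math.floor(-h1 + s * n)   # largest integer on or below XA
    rej = math.ceil(h2 + s * n)     # smallest integer on or above XR
    print(f"n={n:3d}  accept at {acc}, reject at {rej}")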
7.14.
a. Derive an item-by-item sequential-sampling plan for which
p1=0.02 , α =0.05 , p2=0.15 ,∧β=0.10 .
k = log[p2(1 − p1)/(p1(1 − p2))] = log[0.15(0.98)/(0.02(0.85))] = 0.93687
h1 = [log((1 − α)/β)]/k = [log(0.95/0.10)]/0.93687 = 1.0436
h2 = [log((1 − β)/α)]/k = [log(0.90/0.05)]/0.93687 = 1.3399
α 0.05
Excel formulas:
Excel Results:
Item-by-Item Sampling Plans:
n Accept Reject
22 0 3
31 1 4
41 1 5
62 3 6
90 4 8
Note that the single sampling plan is n=30 , c=2. Therefore, the item-by-item sequential- sampling plan
would be truncated after the inspection of the 90th unit.
7.15. Consider rectifying inspection for single sampling. Develop an AOQ equation assuming that all defective items are removed but not replaced with good ones.
With probability Pa the lot is accepted; the defectives found in the sample (np on average) are removed, so the accepted lot contains p(N − n) defectives among N − pn items. With probability 1 − Pa the lot is screened and all pN defectives are removed, leaving N − pN items, none defective. Hence
AOQ = Pa p(N − n)/[Pa(N − pn) + (1 − Pa)(N − pN)]
7.16.
N = 3000, AQL = 1%
General inspection level II
Normal sampling plan: sample size code letter = K, n = 125, Acc = 3, Rej = 4
Tightened sampling plan: sample size code letter = K, n = 125, Acc = 2, Rej = 3
Reduced sampling plan: sample size code letter = K, n = 50, Acc = 1, Rej = 4
7.17. Repeat Exercise 7.16, using general inspection level I. Discuss the differences in the
various sampling plans.
N=3000, AQL=1 %
General level I
Normal sampling plan: sample size code letter = H, n = 50, Acc = 1, Rej = 2
Tightened sampling plan: sample size code letter = J, n = 80, Acc = 1, Rej = 2
Reduced sampling plan: sample size code letter = H, n = 20, Acc = 0, Rej = 2
Changing from general inspection level II to level I substantially reduces the sample sizes (e.g., from 125 to 50 for normal inspection) and lowers the acceptance and rejection numbers. Recall that level I may be used when less discrimination is needed.
7.18. A product is supplied in lots of size N=10,000 . The AQL has been specified at 0.10%. Find the normal,
tightened, and reduced single-sampling plans from MIL STD 105E, assuming general inspection level II.
N=10,000 , AQL=0.10 %
General level II
Normal sampling plan: sample size code letter = K, n = 125, Acc = 0, Rej = 1
Tightened sampling plan: sample size code letter = L, n = 200, Acc = 0, Rej = 1
Reduced sampling plan: sample size code letter = K, n = 50, Acc = 0, Rej = 1
7.19. MIL STD 105E is being used to inspect incoming lots of size N=5,000 . Single-
sampling, general inspection level II, and an AQL of 0.65% are being used.
N=5000 , AQL=0.65 %
General level II
From Table 7.4, the sample size code letter is L. The plans (used for the OC curves in part b) are: normal, n = 200, c = 3; tightened, n = 200, c = 2; reduced, n = 80, c = 1.
b. Draw the OC curves of the normal, tightened, and reduced inspection plans on the
same graph.
Excel formulas:
A B C D
1 N = 5000 normal tightened reduced
2 n= 200 200 80
3 c= 3 2 1
4
5 p Pa=Pr{d<=3} Pa=Pr{d<=2} Pa=Pr{d<=1}
6 0.001 =BINOMDIST($B$3,$B$2,A6,TRUE) =BINOMDIST($C$3,$C$2,A6,TRUE) =BINOMDIST($D$3,$D$2,A6,TRUE)
7 0.002 =BINOMDIST($B$3,$B$2,A7,TRUE) =BINOMDIST($C$3,$C$2,A7,TRUE) =BINOMDIST($D$3,$D$2,A7,TRUE)
8 0.003 =BINOMDIST($B$3,$B$2,A8,TRUE) =BINOMDIST($C$3,$C$2,A8,TRUE) =BINOMDIST($D$3,$D$2,A8,TRUE)
9 0.004 =BINOMDIST($B$3,$B$2,A9,TRUE) =BINOMDIST($C$3,$C$2,A9,TRUE) =BINOMDIST($D$3,$D$2,A9,TRUE)
10 0.005 =BINOMDIST($B$3,$B$2,A10,TRUE) =BINOMDIST($C$3,$C$2,A10,TRUE) =BINOMDIST($D$3,$D$2,A10,TRUE)
11 0.006 =BINOMDIST($B$3,$B$2,A11,TRUE) =BINOMDIST($C$3,$C$2,A11,TRUE) =BINOMDIST($D$3,$D$2,A11,TRUE)
12 0.007 =BINOMDIST($B$3,$B$2,A12,TRUE) =BINOMDIST($C$3,$C$2,A12,TRUE) =BINOMDIST($D$3,$D$2,A12,TRUE)
13 0.0075 =BINOMDIST($B$3,$B$2,A13,TRUE) =BINOMDIST($C$3,$C$2,A13,TRUE) =BINOMDIST($D$3,$D$2,A13,TRUE)
14 0.008 =BINOMDIST($B$3,$B$2,A14,TRUE) =BINOMDIST($C$3,$C$2,A14,TRUE) =BINOMDIST($D$3,$D$2,A14,TRUE)
15 0.009 =BINOMDIST($B$3,$B$2,A15,TRUE) =BINOMDIST($C$3,$C$2,A15,TRUE) =BINOMDIST($D$3,$D$2,A15,TRUE)
16 0.01 =BINOMDIST($B$3,$B$2,A16,TRUE) =BINOMDIST($C$3,$C$2,A16,TRUE) =BINOMDIST($D$3,$D$2,A16,TRUE)
17 …
Excel results:
A B C D
1 N = 5000 normal tightened reduced
2 n= 200 200 80
3 c= 3 2 1
4
5 p Pa=Pr{d<=3} Pa=Pr{d<=2} Pa=Pr{d<=1}
6 0.0010 0.9999 0.9989 0.9970
7 0.0020 0.9992 0.9922 0.9886
8 0.0030 0.9967 0.9771 0.9756
9 0.0040 0.9911 0.9529 0.9588
10 0.0050 0.9813 0.9202 0.9389
11 0.0060 0.9667 0.8800 0.9163
12 0.0070 0.9469 0.8340 0.8916
13 0.0075 0.9351 0.8093 0.8786
14 0.0080 0.9220 0.7838 0.8653
15 0.0090 0.8922 0.7309 0.8377
16 0.0100 0.8580 0.6767 0.8092
17 …
Excel graph:
[Figure: OC curves for the normal (n = 200, c = 3), tightened (n = 200, c = 2), and reduced (n = 80, c = 1) plans, Pa versus fraction defective p]
7.20. A product is shipped in lots of size N=2,000. Find a Dodge-Romig single-sampling plan for which the
LTPD = 1%, assuming that the process average is 0.25% defective. Draw the OC curve and the ATI curve
for this plan. What is the AOQL for this sampling plan?
Excel Graphs:
7.21. We wish to find a single-sampling plan for a situation where lots are shipped from a
supplier. The supplier’s process operates at a fallout level of 0.50% defective. We
want the AOQL from the inspection
activity to be 3%.
a. Find the appropriate Dodge–Romig plan.
From Table 7.8, the minimum sampling plan that meets the quality requirements, for 50,001 ≤ N ≤ 100,000, is n = 65, c = 3 (last row, second process-average column).
b. Draw the OC curve and the ATI curve for this plan. How much inspection will be
necessary, on the average, if the supplier’s process operates close to the average
fallout level?
Excel formulas:
A B C
1 N= 50001
2 n= 65
3 c= 3
4
5 p Pa ATI
6 0.001 =BINOMDIST($B$3,$B$2,A6,TRUE) =$B$2+(1-B6)*($B$1-$B$2)
7 0.002 =BINOMDIST($B$3,$B$2,A7,TRUE) =$B$2+(1-B7)*($B$1-$B$2)
8 0.003 =BINOMDIST($B$3,$B$2,A8,TRUE) =$B$2+(1-B8)*($B$1-$B$2)
9 0.004 =BINOMDIST($B$3,$B$2,A9,TRUE) =$B$2+(1-B9)*($B$1-$B$2)
10 0.005 =BINOMDIST($B$3,$B$2,A10,TRUE) =$B$2+(1-B10)*($B$1-$B$2)
11 0.006 =BINOMDIST($B$3,$B$2,A11,TRUE) =$B$2+(1-B11)*($B$1-$B$2)
12 0.007 =BINOMDIST($B$3,$B$2,A12,TRUE) =$B$2+(1-B12)*($B$1-$B$2)
13 0.0075 =BINOMDIST($B$3,$B$2,A13,TRUE) =$B$2+(1-B13)*($B$1-$B$2)
14 0.008 =BINOMDIST($B$3,$B$2,A14,TRUE) =$B$2+(1-B14)*($B$1-$B$2)
15 0.009 =BINOMDIST($B$3,$B$2,A15,TRUE) =$B$2+(1-B15)*($B$1-$B$2)
16 0.01 =BINOMDIST($B$3,$B$2,A16,TRUE) =$B$2+(1-B16)*($B$1-$B$2)
17 …
Excel results:
A B C
1 N= 50001
2 n= 65
3 c= 3
4
5 p Pa ATI
6 0.001 1.00000 65
7 0.002 0.99999 65
8 0.003 0.99995 67
9 0.004 0.99986 72
10 0.005 0.99967 82
11 0.006 0.99934 98
12 0.007 0.99884 123
13 0.0075 0.99851 139
14 0.008 0.99812 159
15 0.009 0.99713 208
16 0.010 0.99583 273
17 …
Excel graph:
[Figures: OC curve for the Dodge-Romig plan n = 65, c = 3 (Pa versus p) and ATI curve for N = 50,001, n = 65, c = 3 (ATI versus p)]
Let N = 50,001:
Pa = P{d ≤ 3 | n = 65, p = 0.005} = 0.99967
ATI = n + (1 − Pa)(N − n) = 65 + (1 − 0.99967)(50,001 − 65) = 82
On average, if the vendor’s process operates close to process average, the average inspection required will
be 82 units.
From Table 7.8, for lot size 50,001–100,000 (last row) and a process average of 0.50% defective (second process-average column): LTPD = 10.3%.
7.22. A supplier ships a product in lots of size N=8,000. We wish to have an AOQL of 3%, and we are going
to use single-sampling. We do not know the supplier’s process fallout but suspect that it is at most 1%
defective.
N=8000 , AOQL=3 %
From Table 7.8, for lot size 7,001–10,000 and a process average of 1% defective (third process-average column), the plan is n = 65, c = 3.
b. Find the ATI for this plan, assuming that incoming lots are 1% defective.
Pa = Σ(d=0 to 3) C(65, d)(0.01)^d (0.99)^(65−d) = 0.99583
ATI = n + (1 − Pa)(N − n) = 65 + (1 − 0.99583)(8000 − 65) = 98
c. Suppose that our estimate of the supplier’s process average is incorrect and that it is really 0.25% defective.
What sampling plan should we have used? What reduction in ATI would have been realized if we had used
the correct plan?
From Table 7.8, for lot size 7,001–10,000 and a process average of 0.25% defective (second process-average column), the plan is n = 46, c = 2.
Pa = Σ(d=0 to 2) C(46, d)(0.0025)^d (0.9975)^(46−d) = 0.999781
ATI = n + (1 − Pa)(N − n) = 46 + (1 − 0.999781)(8000 − 46) ≈ 48
At this quality level the part (b) plan would require an ATI of about 65, so using the correct plan would reduce the average total inspection by roughly 17 units.
7.23. An inspector wants to use a variables sampling plan with an AQL of 1.5%. Lots are of size 7,000 and the
standard deviation is unknown. Find a sampling plan using Procedure 1 from MIL STD 414.
N=7,000 , AQL=1.5 %
Under MIL STD 414, Inspection level IV, sample size code letter = M (Table 7.10)
7.24. How does the sample size found in Exercise 7.23 compare with what would have been used under MIL STD 105E?
Under MIL STD 105E, inspection level II, the sample size code letter = L (Tables 7.4, 7.5, 7.6, 7.7).
The MIL STD 414 sample sizes are considerably smaller than those for MIL STD 105E.
7.25. A lot of 500 items is submitted for inspection. Suppose that we wish to find a plan from MIL STD 414,
using inspection level II. If the AQL is 4%, find the Procedure 1 sampling plan from the standard.
7.26. A soft-drink bottler purchases nonreturnable glass bottles from a supplier. The lower
specification on the bursting strength of the bottles is 225 psi. The bottler wishes to
use variables sampling to sentence the lots and has decided to use an AQL of 1%.
Find an appropriate set of normal and tightened sampling plans from the standard.
Suppose that a lot is submitted, and the sample results yield
x=255, s=10
Determine the disposition of the lot using Procedure 1. The lot size is N=100,000 .
Assume inspection level IV, sample size code letter = O (Table 7.10)
Normal sampling: n=100, k =2.00 (Table 7.11)
Tightened sampling: n=100, k =2.14 (Table 7.12)
x = 255, S = 10
ZLSL = (x − LSL)/S = (255 − 225)/10 = 3.000 > k = 2.00, so accept the lot.
7.27. A chemical ingredient is packed in metal containers. A large shipment of these containers has been
delivered to a manufacturing facility. The mean bulk density of this ingredient should not be less than
0.15g/cm3. Suppose that lots of this quality are to have a 0.95 probability of acceptance. If the mean bulk
density is as low as 0.1450, the probability of acceptance of the lot should be 0.10. Suppose we know that
the standard deviation of bulk density is approximately 0.005 g/cm³. Obtain a variables sampling plan that
could be used to sentence the lots.
σ = 0.005 g/cm³
x1 = 0.15, α = 0.05:
(xA − x1)/(σ/√n) = −1.645, so xA − 0.15 = −1.645(0.005/√n)
x2 = 0.1450, β = 0.10:
(xA − x2)/(σ/√n) = +1.282, so xA − 0.145 = +1.282(0.005/√n)
Solving the two equations simultaneously gives 0.005 = (1.645 + 1.282)(0.005/√n), so √n = 2.927, n = 8.57 ≈ 9, and xA = 0.15 − 1.645(0.005/√9) = 0.1473.
Sample n = 9 containers and accept the lot if the sample mean bulk density is at least 0.1473 g/cm³.
7.28. A standard of 0.3 ppm has been established for formaldehyde emission levels in
wood products. Suppose that the standard deviation of emissions in an individual
board is σ =0.10 ppm. Any lot that contains 1% of its items above 0.3 ppm is
considered acceptable. Any lot that has 8% or more of its items above 0.3 ppm is
considered unacceptable. Good lots are to be accepted with probability 0.95, and bad
lots are to be rejected with probability 0.90.
a. Using the 1% nonconformance level as an AQL, and assuming that lots consist of
5,000 panels, find an appropriate set of sampling plans from MIL STD 414 , assuming
σ is unknown.
AQL = 1%, N = 5000, σ unknown
Double specification limit, assume inspection level IV
b. Find an attributes-sampling plan that has the same OC curve as the variables
sampling plan derived in part (a). Compare the sample sizes required for equivalent
protection. Under what circumstances would variables sampling be more
economically efficient?
The sample size is slightly larger than required for the variables plan (a). Variables sampling would be more efficient if σ were known.
AQL=1 % , N=5,000
Assume General Inspection Level II: sample size code letter = L (Table 7.4)
The sample sizes required are much larger than for the other plans.
7.29. Consider a single-sampling plan with n = 25, c = 0. Draw the OC curve for this plan. Now consider chain-sampling plans with n = 25, c = 0, and i = 1, 2, 5, 7. Sketch the OC curves for these chain-sampling plans on the same axis. Discuss the behavior of chain sampling in this situation compared to the conventional single-sampling plan with c = 0.
Excel Formulas:
Excel Results:
Excel Graph:
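Since the Excel computation is not shown, an illustrative Python sketch of the ChSP-1 operating characteristic, Pa = P(0, n) + P(1, n)·P(0, n)^i, is:

def chsp1_pa(p, n=25, i=1):
    # ChSP-1: accept on zero defectives, or on one defective provided the
    # previous i samples each had zero defectives.
    p0 = (1 - p) ** n                    # P(0, n)
    p1 = n * p * (1 - p) ** (n - 1)      # P(1, n)
    return p0 + p1 * p0 ** i

for p in (0.005, 0.01, 0.02, 0.05, 0.10):
    single = (1 - p) ** 25               # conventional c = 0 plan
    chain = "  ".join(f"i={i}: {chsp1_pa(p, i=i):.4f}" for i in (1, 2, 5, 7))
    print(f"p={p:.3f}  c=0: {single:.4f}  {chain}")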
7.30. An electronics manufacturer buys memory devices in lots of 30,000 from a supplier. The supplier has a
long record of good quality performance, with an average fraction defective of approximately 0.10%. The
quality engineering department has suggested using a conventional acceptance-sampling plan with n = 32, c = 0.
Excel Formulas:
Excel Results:
Excel graph:
b. If lots are of a quality that is near the supplier’s long-term process average, what is the average total
inspection at that level of quality?
p = 0.001, Pa = P{d = 0 | n = 32, p = 0.001} = 0.96849
ATI = n + (1 − Pa)(N − n) = 32 + (1 − 0.96849)(30,000 − 32) = 976
c. Consider a chain sampling plan with n=32 , c=0 ,∧i=3 . Contrast the performance of this plan with the
conventional sampling plan n=32 , c=0.
The chain-sampling plan with i = 3 behaves much like the single-sampling plan but provides a higher probability of acceptance when 0 < p < 0.04.
d. How would the performance of this chain sampling plan change if we substituted i=4 in part (c)?
Chain-sampling plans with i = 3 and i = 4 give similar results; the i = 3 plan provides a slightly higher probability of acceptance when 0 < p < 0.03.
7.31. Suppose that a manufacturing process operates in continuous production, such that continuous sampling
plans could be applied. Determine three different CSP-1 sampling plans that could be used for an AOQL of
0.198%.
Excel formulas:
A B C D
1 AOQL=0.198%
2 p= 0.0015 q= =1-B2
3 f= 0.5 0.1 =1/100
4 i= 140 550 1302
5 u= =(1-$D$2^B4)/($B$2*$D$2^B4) =(1-$D$2^C4)/($B$2*$D$2^C4) =(1-$D$2^D4)/($B$2*$D$2^D4)
6 v= =1/(B3*$B$2) =1/(C3*$B$2) =1/(D3*$B$2)
7 AFI = =(B5+B3*B6)/(B5+B6) =(C5+C3*C6)/(C5+C6) =(D5+D3*D6)/(D5+D6)
8 Pa{p=.0015} =B6/(B5+B6) =C6/(C5+C6) =D6/(D5+D6)
9
10 f = 1/2 and i = 140
11 p u v Pa
12 0.001 =(1-(1-A12)^$B$4)/(A12*(1-A12)^$B$4) =1/($B$3*A12) =C12/(B12+C12)
13 0.0015 =(1-(1-A13)^$B$4)/(A13*(1-A13)^$B$4) =1/($B$3*A13) =C13/(B13+C13)
14 0.002 =(1-(1-A14)^$B$4)/(A14*(1-A14)^$B$4) =1/($B$3*A14) =C14/(B14+C14)
15 0.0025 =(1-(1-A15)^$B$4)/(A15*(1-A15)^$B$4) =1/($B$3*A15) =C15/(B15+C15)
16 0.003 =(1-(1-A16)^$B$4)/(A16*(1-A16)^$B$4) =1/($B$3*A16) =C16/(B16+C16)
17 …
35 0.06 =(1-(1-A35)^$B$4)/(A35*(1-A35)^$B$4) =1/($B$3*A35) =C35/(B35+C35)
36 0.07 =(1-(1-A36)^$B$4)/(A36*(1-A36)^$B$4) =1/($B$3*A36) =C36/(B36+C36)
37 0.08 =(1-(1-A37)^$B$4)/(A37*(1-A37)^$B$4) =1/($B$3*A37) =C37/(B37+C37)
38 0.09 =(1-(1-A38)^$B$4)/(A38*(1-A38)^$B$4) =1/($B$3*A38) =C38/(B38+C38)
39 0.1 =(1-(1-A39)^$B$4)/(A39*(1-A39)^$B$4) =1/($B$3*A39) =C39/(B39+C39)
Excel results:
A B C D E F G H I J K L
1 AOQL=0.198%
2 p= 0.0015 q= 0.9985
3 f= 0.5000 0.1000 0.0100
4 i= 140.0000 550.0000 1302.0000
5 u= 155.9150 855.5297 4040.0996
6 v= 1333.3333 6666.6667 66666.6667
7 AFI = 0.5523 0.2024 0.0666
8 Pa{p=.0015} 0.8953 0.8863 0.9429
9
10 f = 1/2 and i = 140 f = 1/10 and i = 550 f = 1/100 and i = 1302
11 p u v Pa u v Pa u v Pa
12 0.0010 1.5035E+02 2000.0000 0.9301 7.3373E+02 10000.0000 0.9316 2.6790E+03 100000.0000 0.9739
13 0.0015 1.5592E+02 1333.3333 0.8953 8.5553E+02 6666.6667 0.8863 4.0401E+03 66666.6667 0.9429
14 0.0020 1.6175E+02 1000.0000 0.8608 1.0037E+03 5000.0000 0.8328 6.2765E+03 50000.0000 0.8885
15 0.0025 1.6788E+02 800.0000 0.8266 1.1848E+03 4000.0000 0.7715 1.0010E+04 40000.0000 0.7998
16 0.0030 1.7431E+02 666.6667 0.7927 1.4066E+03 3333.3333 0.7032 1.6331E+04 33333.3333 0.6712
17 …
35 0.0600 9.6355E+04 33.3333 0.0003 1.0035E+16 166.6667 0.0000 1.6195E+36 1666.6667 0.0000
36 0.0700 3.6921E+05 28.5714 0.0001 3.0852E+18 142.8571 0.0000 1.5492E+42 1428.5714 0.0000
37 0.0800 1.4676E+06 25.0000 0.0000 1.0318E+21 125.0000 0.0000 1.7586E+48 1250.0000 0.0000
38 0.0900 6.0251E+06 22.2222 0.0000 3.7410E+23 111.1111 0.0000 2.3652E+54 1111.1111 0.0000
39 0.1000 2.5471E+07 20.0000 0.0000 1.4676E+26 100.0000 0.0000 3.7692E+60 1000.0000 0.0000
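The CSP-1 measures in the table above follow from u, v, AFI, and Pa as defined in the Excel formulas; an equivalent Python sketch (illustrative only) is:

def csp1(f, i, p):
    """CSP-1 measures: u = expected pieces inspected in a screening sequence,
    v = expected pieces passed under sampling inspection, AFI = average
    fraction inspected, Pa = probability a piece passes at quality p."""
    q = 1 - p
    u = (1 - q**i) / (p * q**i)
    v = 1 / (f * p)
    afi = (u + f * v) / (u + v)
    pa = v / (u + v)
    return u, v, afi, pa

for f, i in [(1/2, 140), (1/10, 550), (1/100, 1302)]:
    u, v, afi, pa = csp1(f, i, 0.0015)
    print(f"f=1/{round(1/f)}, i={i}: AFI={afi:.4f}, Pa={pa:.4f}")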
7.33. Suppose that CSP-1 is used for a manufacturing process where it is desired to maintain an AOQL of 1.90%.
Specify two CSP-1 plans that would meet this AOQL target.
7.34. Compare the plans developed in Exercise 7.33 in terms of average fraction inspected
and their operating-characteristic curves. Which plan would you prefer if p=0.0375 ?
Prefer Plan B over Plan A since it has a lower Pa at the unacceptable level of p.
CHAPTER 8
8.1. An article in Industrial Quality Control (1956, pp. 5–8) describes an experiment to investigate the effect of
glass type and phosphor type on the brightness of a television tube. The response measured is the current
necessary (in microamps) to obtain a specified brightness level. The data are shown in Table 8E.1.
Analyze the data and draw conclusions.
This experiment is three replicates of a factorial design in two factors—two levels of glass type and three
levels of phosphor type—to investigate brightness. Enter the data into the MINITAB worksheet using the
first three columns: one column for glass type, one column for phosphor type, and one column for
brightness. This is how the Excel file is structured (Chap13.xls). Since the experiment layout was not
created in MINITAB, the design must be defined before the results can be analyzed.
After entering the data in MINITAB, select Stat > DOE > Factorial > Define Custom Factorial
Design. Select the two factors (Glass Type and Phosphor Type), then for this exercise, check “General
full factorial”. The dialog box should look like this:
Next, select “Designs”. For this exercise, no information is provided on standard order, run order, point
type, or blocks, so leave the selections as below, and click “OK” twice.
Note that MINITAB added four new columns (4 through 7) to the worksheet. DO NOT insert or delete
columns between columns 1 through 7. MINITAB recognizes these contiguous seven columns as a
designed experiment; inserting or deleting columns will cause the design layout to become corrupt.
Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response (Brightness), then
click on “Terms”, verify that the selected terms are Glass Type, Phosphor Type, and their interaction, click
“OK”. Click on “Graphs”, select “Residuals Plots : Four in one”. The option to plot residuals versus
variables is for continuous factor levels; since the factor levels in this experiment are categorical, do not
select this option. Now click the box “Residuals versus variables:”, then select the two factors. Click
“OK”. Click on “Storage”, select “Fits” and “Residuals”, and click “OK” twice.
The effects of glass ( p−value=0.000) and phosphor ( p−value=0.004) are significant, while the
effect of the interaction is not significant ( p−value=0.318).
Visual examination of residuals on the normal probability plot and histogram does not reveal any problems.
The plot of residuals versus observation order is not meaningful since no order was provided with the data.
If the model were re-fit with only Glass Type and Phosphor Type, the residuals should be re-examined.
There are some issues with the constant variance assumption across all levels of phosphor since the
variability at level 2 appears larger than the variability observed at levels 1 and 3.
Select Stat > DOE > Factorial > Factorial Plots. Select “Interaction Plot” and click on “Setup”,
select the response (Brightness) and both factors (Glass Type and Phosphor Type), and click “OK” twice.
The absence of a significant interaction is evident in the parallelism of the two lines. Final selected
combination of glass type and phosphor type depends on the desired brightness level.
Alternate Solution: This exercise may also be solved using MINITAB’s ANOVA functionality instead of
its DOE functionality. The DOE functionality was selected to illustrate the approach that will be used for
most of the remaining exercises. Select Stat > ANOVA > Two-Way, and complete the dialog box as
below.
Source DF SS MS F P
Ex13-1Glass 1 14450.0 14450.0 273.79 0.000
Ex13-1Phosphor 2 933.3 466.7 8.84 0.004
Interaction 2 133.3 66.7 1.26 0.318
Error 12 633.3 52.8
Total 17 16150.0
S = 7.265 R-Sq = 96.08% R-Sq(adj) = 94.44%
8.2. A process engineer is trying to improve the life of a cutting tool. He has run a 23 experiment using cutting
speed (A), metal hardness (B), and cutting angle (C) as the factors. The data from two replicates are shown
in Table 8E.2.
Run    Replicate I    Replicate II
(1) 221 311
a 325 435
b 354 348
ab 552 472
c 440 453
ac 406 377
bc 605 500
abc 392 419
This experiment is a 23 factorial design (cutting speed, metal hardness, and cutting angle) to investigate
tool life. Generate a 23 factorial design: Select Stat > DOE > Factorial > Create Factorial Design.
Select 2-level factorial with 3 factors. Select Designs and choose number of replicates for corner points as
two. No information is given in the example about the run order of the experiment. Click OK.
Note that MINITAB created 7 columns in a new worksheet. DO NOT insert or delete columns between
columns 1 through 7. MINITAB recognizes these contiguous seven columns as a designed experiment;
inserting or deleting columns will cause the design layout to become corrupt. Now create a new column for
the response variable LIFE and enter in the values for tool life in the appropriate cells of the worksheet.
Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response (Life), then click on “Terms” and verify that all main effects and interactions are selected; click “OK”. Click on “Graphs”, select “Residuals Plots : Four in one”. Now click the box “Residuals versus variables:”, then select the three factors. Click “OK”. Click on “Storage”, select “Fits” and “Residuals”, and click “OK” twice.
Analysis of Variance
Model Summary
The effects of hardness (p-value = 0.009) and angle (p-value = 0.020) are significant, as is the interaction between speed and angle, while the main effect of speed and the other interaction terms are not significant (p-values > 0.05).
[Figure: residual plots for Life: normal probability plot, residuals versus fits, histogram, and residuals versus observation order]
Visual examination of residuals on the normal probability plot and histogram does not reveal any problems.
The plot of the residuals versus fitted values indicates that the assumption of constant variance may not be
met. The plot of residuals versus observation order is not meaningful since no order was provided with the
data.
Now consider the reduced model which includes factors Speed, Hardness, Angle, and Speed*Angle.
Although Speed was not a significant factor on its own, because its interaction with angle is significant, we
leave it in the model to maintain hierarchy in the model.
Analysis of Variance
Model Summary
The diagnostic plots of the residuals do not reveal any unusual behavior or violations of assumptions.
[Figure: residual plots for Life (reduced model): normal probability plot, residuals versus fits, histogram, and residuals versus observation order]
Select Stat > DOE > Factorial > Factorial Plots. Select “Interaction Plot” and click on “Setup”, select the response (Life) and the factors Speed and Angle, and click “OK” twice.
[Figure: interaction plot of mean Life versus Speed at each level of Angle]
The interaction between speed and angle is evident in the interaction plot.
Consider the cube plot. MTB > Stat > DOE > Factorial > Cube Plot
[Figure: cube plot of fitted Life at the corners of the Speed, Hardness, Angle design; fitted means range from 266.375 to 541.625]
The low level of speed and high levels of hardness and angle produce the longest tool life.
Consider contour plots of speed and angle for hardness fixed at -1 and +1.
MTB > Stat > DOE > Factorial > Contour Plot
[Figures: contour plots of Life versus Angle and Speed, with Hardness fixed at the low (−1) and high (+1) levels]
The low level of speed and high level of angle tend to give good results regardless of the metal hardness.
The high level of speed and low level of angle also tends to give good results; however, this region is much
smaller compared to the area with the low level of speed and high level of angle.
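For readers without MINITAB, the effect estimates behind these conclusions can be reproduced with a short Python sketch (not part of the original solution; contrasts are formed from the replicate totals of Table 8E.2):

runs = {  # run label: (replicate I, replicate II)
    "(1)": (221, 311), "a": (325, 435), "b": (354, 348), "ab": (552, 472),
    "c": (440, 453), "ac": (406, 377), "bc": (605, 500), "abc": (392, 419),
}
n, k = 2, 3  # replicates, factors

def effect(term):
    # Contrast: +1 when every letter of the term appears in the run label.
    contrast = 0
    for label, ys in runs.items():
        sign = 1
        for letter in term:
            sign *= 1 if letter in label else -1
        contrast += sign * sum(ys)
    return contrast / (n * 2 ** (k - 1))

for term in ("a", "b", "c", "ab", "ac", "bc", "abc"):
    print(f"{term.upper():4s} effect = {effect(term):8.3f}")

Running this gives large effects for B (84.25), C (71.75), and AC (−119.25), consistent with the conclusions above.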
8.3. Find the residuals from the tool life experiment in Exercise 8.2. Construct a normal probability plot of the
residuals. Plot the residuals versus the predicted values. Comment on the plots.
Run    Replicate I    Replicate II
(1)    221            311
a 325 435
b 354 348
ab 552 472
c 440 453
ac 406 377
bc 605 500
abc 392 419
To find the residuals, select Stat > DOE > Factorial > Analyze Factorial Design. Select “Terms”
and verify that all terms for the reduced model (A, B, C, AC) are included. Select “Graphs”, and for
residuals plots choose “Normal plot” and “Residuals versus fits”. To save residuals to the worksheet,
select “Storage” and choose “Residuals”.
[Figures: normal probability plot of the residuals and residuals versus the fitted values (response is Life)]
Normal probability plot of residuals indicates that the normality assumption is reasonable. Residuals
versus fitted values plot shows that the equal variance assumption across the prediction range is reasonable.
8.4. Four factors are thought to possibly influence the taste of a soft-drink beverage: type of sweetener (A), ratio
of syrup to water (B), carbonation level (C), and temperature (D). Each factor can be run at two levels,
producing a 24 design. At each run in the design, samples of the beverage are given to a test panel
consisting of 20 people. Each tester assigns a point score from 1 to 10 to the beverage. Total score is the
response variable, and the objective is to find a formulation that maximized total score. Two replicates of
this design are run, and the results are shown in Table 8E.3. Analyze the data and draw conclusions.
Treatment Combination    Replicate I    Replicate II
(1) 188 195
a 172 180
b 179 187
ab 185 178
c 175 180
ac 183 178
bc 190 180
abc 175 168
d 200 193
ad 170 178
bd 189 181
abd 183 188
cd 201 188
acd 181 173
bcd 189 182
abcd 178 182
Generate a 24 factorial design: Select Stat > DOE > Factorial > Create Factorial Design. Select 2-
level factorial with 4 factors. Select Designs and choose number of replicates for corner points as two. No
information is given in the example about the run order of the experiment. Click OK.
Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response variable, then click
on “Terms”, verify that the selected factors and their interactions are selected, click “OK”. Click on
“Graphs”, select “Residuals Plots : Four in one”.
Coded Coefficients
Factors A (sweetener) and D (temperature) both have relatively small p-values (< 0.10). The two-way interaction between A (sweetener) and B (syrup-to-water ratio) is significant (p-value = 0.048). Two three-way interactions are also significant in the model: ABC (sweetener, syrup ratio, and carbonation level) and ABD (sweetener, syrup ratio, and temperature).
We can see these effects in the normal probability plot of the effect estimates as well.
[Figures: normal plot of the standardized effects (response is Total Score, α = 0.05), with A, AB, ABC, and ABD flagged as significant; residual plots for Total Score: normal probability plot, residuals versus fits, histogram, and residuals versus order]
The normal probability plot of the residuals indicates that the normality assumption may not be reasonable.
The points do not fall in a straight line. The assumption of constant variance may not be reasonable.
We can remove a few insignificant terms from the model: CD, ACD, BCD, and ABCD. Note that although
the other main effects and two-way interactions (B, C, D, AC, AD, BC, and BD) were not statistically
significant, we leave these terms in the model to maintain hierarchy in the model.
[Figure: residual plots for the reduced model: normal probability plot, residuals versus fits, histogram, and residuals versus observation order]
The normal probability plot of the residuals of the reduced model is an improvement over the full model,
although the points still do not lie in a straight line. The constant variance assumption seems reasonable.
There is one unusual observation as noted by the large residual with this model.
The sweetener has the largest effect on the taste of the soft drink; the low level of sweetener gives higher taste scores.
8.5. Consider the experiment in Exercise 8.4. Plot the residuals against the levels of factors A, B, C, and D. Also
construct a normal probability plot of the residuals. Comment on these plots.
To find the residuals, select Stat > DOE > Factorial > Analyze Factorial Design. Select “Terms”
and verify that all terms for the reduced model are included. Select “Graphs”, choose “Normal plot” of
residuals and “Residuals versus variables”, and then select the variables.
[Figures: normal probability plot of the residuals and plots of residuals versus the levels of factors A, B, C, and D]
There appears to be a slight indication of inequality of variance for sweetener and syrup ratio, as well as a
slight indication of an outlier. This is not serious enough to warrant concern.
8.6. Find the standard error of the effects for the experiment in Exercise 8.4. Using the standard errors as a
guide, what factors appear significant?
From the ANOVA table from Exercise 8.4 (using the reduced model), we can use the MSE to estimate σ 2.
Analysis of Variance
s.e.(β̂) = √(σ̂²/(n 2^k)) = √(MSE/(n 2^k)) = √(27.344/(2 × 2^4)) = 0.924
Using the coefficient estimates from Exercise 8.4 and the standard error of the coefficient estimates, the
following terms appear significant: A, AB, ABC, and ABD.
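This calculation is easy to script; a short Python check (illustrative only, using the MSE value quoted above) is:

import math

mse, n_reps, k = 27.344, 2, 4
se_coef = math.sqrt(mse / (n_reps * 2 ** k))  # sqrt(MSE / (n * 2^k))
print(f"s.e.(beta-hat) = {se_coef:.3f}")      # 0.924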
8.7. Suppose that only the data from replicate I in Exercise 8.4 were available. Analyze the data and draw
appropriate conclusions.
Create a 24 factorial design in MINITAB, and then enter the data. Select Stat > DOE > Factorial >
Analyze Factorial Design. Since there is only one replicate of the experiment, select “Terms” and verify
that all terms are selected. Then select “Graphs”, choose the normal effects plot, and set alpha to 0.10.
[Figure: normal probability plot of the effects (α = 0.10, Lenth’s PSE = 4.5); factor A (sweetener) is the only flagged effect]
From visual examination of the normal probability plot of effects, only factor A (sweetener) is significant at α = 0.10.
Re-fit and analyze the reduced model.
[Figure: residual plots for the reduced model: normal probability plot, residuals versus fits, and residuals versus sweetener, syrup ratio, carbonation, and temperature]
The normality assumption appears to be reasonable. There appears to be a slight indication of inequality of
variance for sweetener, as well as in the predicted values. This is not serious enough to warrant concern.
Plots of the residuals versus the other variables do not indicate any unusual patterns, indicating they do not
need to be included in the model.
A reduced model based on factor A (type of sweetener) is sufficient to model taste. The assumptions of
normality and constant variance are reasonable for the reduced model.
8.8. Suppose that only one replicate of the 2⁴ design in Exercise 8.4 could be run, and that we could only
conduct eight tests each day. Set up a design that would block out the day effect. Show specifically which
runs would be made on each day.
A 2⁴ design in two blocks sacrifices the ABCD interaction; that is, the ABCD interaction is confounded with blocks.
Block 1 Block 2
a (1)
b ab
c ac
abc bc
d ad
abd bd
acd cd
bcd abcd
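The block assignment can also be generated programmatically; here is a short Python sketch (not part of the MINITAB workflow) that assigns each run by the sign of its ABCD contrast:

from itertools import product

# Split a 2^4 design into two blocks by confounding ABCD with blocks:
# runs whose ABCD contrast is -1 go to Block 1, the rest to Block 2.
factors = "abcd"
for levels in product([-1, 1], repeat=4):
    label = "".join(f for f, lv in zip(factors, levels) if lv == 1) or "(1)"
    abcd_sign = levels[0] * levels[1] * levels[2] * levels[3]
    block = 1 if abcd_sign == -1 else 2
    print(f"{label:>5s} -> Block {block}")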
8.9. Show how a 2⁵ experiment could be set up in two blocks of 16 runs each. Specifically, which runs would
be made in each block?
Confounding the ABCDE interaction with blocks gives:

Block 1: (1), ab, ac, bc, ad, bd, cd, abcd, ae, be, ce, abce, de, abde, acde, bcde
Block 2: a, b, c, abc, d, abd, acd, bcd, e, abe, ace, bce, ade, bde, cde, abcde
8.10. R. D. Snee (“Experimenting with a Large Number of Variables,” in Experiments in Industry: Design,
Analysis and Interpretation of Results, by R. D. Snee, L. B. Hare, and J. B. Trout, editors, ASQC, 1985)
describes an experiment in which a 2⁵⁻¹ design with I = ABCDE was used to investigate the effects of five
factors on the color of a chemical product. The factors were A = solvent/reactant, B = catalyst/reactant, C =
temperature, D = reactant purity, and E = reactant pH. The results obtained are as follows:
e = −0.63      d = 6.79
a = 2.51       ade = 6.47
b = 2.68       bde = 3.45
abe = 1.66     abd = 5.68
c = 2.06       cde = 5.22
ace = 1.22     acd = 4.38
bce = −2.09    bcd = 4.30
abc = 1.93     abcde = 4.05
a. Prepare a normal probability plot of the effects. Which effects seem active? Fit a model using these effects.
[Normal plot of the effects; only the point for D (reactant purity) falls far from the line. Lenth's PSE = 0.8325.]
Term Effect Coef
Constant 3.105
solvent 0.7650 0.3825
catalyst -0.7950 -0.3975
temperature -0.9425 -0.4712
reactant purity 3.875 1.938
reactant pH -1.3725 -0.6862
solvent*catalyst 0.4800 0.2400
solvent*temperature -0.2425 -0.1212
solvent*reactant purity -0.5600 -0.2800
solvent*reactant pH 1.0975 0.5488
catalyst*temperature -0.3775 -0.1888
catalyst*reactant purity -0.5500 -0.2750
catalyst*reactant pH -0.5075 -0.2537
temperature*reactant purity -0.16750 -0.08375
temperature*reactant pH 0.3050 0.1525
reactant purity*reactant pH 0.8825 0.4412
Only factor D (reactant purity) appears to be significant from the normal plot. Looking at the values of the
coefficients with the full model, factor E (reactant pH) has the next largest (in magnitude) coefficient, so
we will include both of these variables in the model.
[Analysis of variance, model summary, and coded coefficients for the model containing D and E.]
b. Calculate the residuals for the model you fit in part (a). Construct a normal probability plot of the residuals
and plot the residuals versus the fitted values. Comment on the plots.
[Normal probability plot of the residuals and plot of residuals versus fitted values (response is color).]
The normality assumption appears to be reasonable. The constant variance assumption may not be appropriate here; there are some differences in the spread of the residuals across the fitted values.
c. If any factors are negligible, collapse the 25-1 design into a full factorial in the active factors. Comment on
the resulting design and interpret the results.
Factors A, B, and C are not significant effects, so the 2⁵⁻¹ design collapses into a 2² design with factors D
and E. MTB > Stat > DOE > Factorial > Cube Plot. Select Data means and make sure factors D and E
are selected.
Mean color at the four combinations of D (reactant purity) and E (reactant pH):

                     purity = −1    purity = +1
reactant pH = +1     0.0400         4.7975
reactant pH = −1     2.2950         5.2875
Reactant purity has a positive effect on the color; reactant pH has an inverse relationship with color.
8.11. An article in Industrial and Engineering Chemistry (“More on Planning Experiments to Increase Research Efficiency,” 1970, pp. 60–65) uses a 2⁵⁻² design to investigate the effect of A = condensation temperature, B = amount of material 1, C = solvent volume, D = condensation time, and E = amount of material 2, on yield. The results obtained are as follows:
e = 23.2     cd = 23.8
ab = 15.5    ace = 23.4
ad = 16.9    bde = 16.8
bc = 16.2    abcde = 18.1
a. Verify that the design generators used were I = ACE and I =BDE .
b. Write down the complete defining relation and the aliases from this design.
Select Stat > DOE > Factorial > Analyze Factorial Design. Since there is only one replicate of the
experiment, select “Terms” and verify that all main effects and two-factor interaction effects are selected.
From the Alias Structure shown in the Session Window, the complete defining relation is:
I = ACE = BDE = ABCD
The aliases are:
A·I = A·ACE = A·BDE = A·ABCD, so A = CE = ABDE = BCD
B·I = B·ACE = B·BDE = B·ABCD, so B = ABCE = DE = ACD
C·I = C·ACE = C·BDE = C·ABCD, so C = AE = BCDE = ABD
⋮
AB·I = AB·ACE = AB·BDE = AB·ABCD, so AB = BCE = ADE = CD
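The full alias structure can also be generated with a few lines of Python (a sketch using mod-2 cancellation of letters, rather than MINITAB's Session Window output):

# Multiplying two effects cancels repeated letters mod 2, which is a
# set symmetric difference on the factor letters.
def multiply(e1: str, e2: str) -> str:
    return "".join(sorted(set(e1) ^ set(e2))) or "I"

defining = ["ACE", "BDE", "ABCD"]  # complete defining relation I = ACE = BDE = ABCD
for effect in ["A", "B", "C", "AB"]:
    aliases = [multiply(effect, w) for w in defining]
    print(f"{effect} = " + " = ".join(aliases))
# A = CE = ABDE = BCD, B = ABCE = DE = ACD, C = AE = BCDE = ABD, AB = BCE = ADE = CD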
8.12. A 2⁴ factorial experiment was run to study the effects of four factors on the molecular weight of a polymer. The data are as follows:

(1) = 88    d = 86
a = 80      ad = 81
b = 89      bd = 85
ab = 87     abd = 86
c = 86      cd = 85
ac = 81     acd = 79
bc = 82     bcd = 84
abc = 80    abcd = 81
a. Construct a normal probability plot of the effects. Which effects are active?
Enter the factor levels and yield data into a MINITAB worksheet, then define the experiment using
Stat > DOE > Factorial > Define Custom Factorial Design.
Select Stat > DOE > Factorial > Analyze Factorial Design. Since there is only one replicate of the
experiment, select “Terms” and verify that all main effects and two-factor interaction effects are selected.
[Normal plot of the effects (response is weight, α = 0.10); factors A and C are flagged as significant. Lenth's PSE = 0.1125.]
b. Construct an appropriate model. Fit this model and test for significant effects.
Model Summary

S          R-sq     R-sq(adj)   R-sq(pred)
0.212585   61.09%   55.11%      41.06%

[Coded coefficients table for the model containing A and C.]
Both factors A and C are significant in the model (p-value < 0.05). Both factors have an inverse
relationship with polymer weight.
c. Analyze the residuals from this model by constructing a normal probability plot of the residuals and
plotting the residuals versus the predicted values of y.
[Normal probability plot of the residuals and plot of residuals versus fitted values (response is weight).]
The normal probability plot of the residuals indicates that the normality assumption is reasonable. The plot
of the residuals versus the fitted values indicates the constant variance assumption may not be valid.
However, the difference appears to be small.
8.13. Reconsider the data in Exercise 8.12. Suppose that four center points were added to this experiment. The
molecular weights at the center point are 90, 87, 86, and 93.
a. Analyze the data as you did in Exercise 8.12, but include a test for curvature.
Create a 2⁴ factorial design with four center points in MINITAB, and then enter the data. Select Stat > DOE > Factorial > Analyze Factorial Design. Select “Terms” and verify that all main effects and two-factor interactions are selected. Also, DO NOT include the center points in the model (uncheck the default selection). This will ensure that if both lack of fit and curvature are not significant, the main and interaction effects are tested for significance against the correct residual error (lack of fit + curvature + pure error).
To summarize MINITAB’s functionality, curvature is always tested against pure error and lack of fit (if
available), regardless of whether center points are included in the model. The inclusion/exclusion of center
points in the model affects the total residual error used to test significance of effects. Assuming that lack of
fit and curvature tests are not significant, all three (curvature, lack of fit, and pure error) should be included
in the residual mean square.
When looking at results in the ANOVA table, the first test to consider is the “lack of fit” test, which is a
test of significance for terms not included in the model (in this exercise, the three-factor and four-factor
interactions). If lack of fit is significant, the model is not correctly specified, and some terms need to be
added to the model.
If lack of fit is not significant, the next test to consider is the “curvature” test, which is a test of significance
for the pure quadratic terms. If this test is significant, no further statistical analysis should be performed
because the model is inadequate.
If tests for both lack of fit and curvature are not significant, then it is reasonable to pool the curvature, pure
error, and lack of fit (if available) and use this as the basis for testing for significant effects. (In MINITAB,
this is accomplished by not including center points in the model.)
b. If curvature is significant in an experiment such as this one, describe what strategy you would pursue next
to improve your model of the process.
The test for curvature is significant (p-value = 0.004). Although one could pick a “winning
combination” from the experimental runs, a better strategy is to add runs that would enable estimation of
the quadratic effects. Adding axial runs in a sequential experiment would provide the means to estimate
these quadratic effects.
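For reference, MINITAB's single-degree-of-freedom curvature test compares the factorial and center-point averages; a minimal Python sketch (using the factorial data from Exercise 8.12 and the four center points given above) is:

import numpy as np

# SS_curvature = nF * nC * (ybar_F - ybar_C)**2 / (nF + nC)
factorial_runs = np.array([88, 80, 89, 87, 86, 81, 82, 80,
                           86, 81, 85, 86, 85, 79, 84, 81], dtype=float)
center_runs = np.array([90, 87, 86, 93], dtype=float)

nF, nC = len(factorial_runs), len(center_runs)
ybar_F, ybar_C = factorial_runs.mean(), center_runs.mean()
ss_curv = nF * nC * (ybar_F - ybar_C) ** 2 / (nF + nC)
print(f"ybar_F = {ybar_F:.2f}, ybar_C = {ybar_C:.2f}, SS_curvature = {ss_curv:.2f}")
# A large curvature sum of squares relative to error signals pure quadratic
# effects, consistent with the significant test reported above.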
8.14. An engineer has performed an experiment to study the effect of four factors on the surface roughness of a
machined part. The factors (and their levels) are A = tool angle (12, 15), B = cutting fluid viscosity (300,
400), C = feed rate (10, 15 in/min), and D = cutting fluid cooler used (no, yes). The data from this
experiment (with the factors coded to the usual +1, -1 levels) are shown in Table 8E.4.
Run   A   B   C   D   Surface Roughness
1 - - - - 0.00340
2 + - - - 0.00362
3 - + - - 0.00301
4 + + - - 0.00182
5 - - + - 0.00280
6 + - + - 0.00290
7 - + + - 0.00252
8 + + + - 0.00160
9 - - - + 0.00336
10 + - - + 0.00344
11 - + - + 0.00308
12 + + - + 0.00184
13 - - + + 0.00269
14 + - + + 0.00284
15 - + + + 0.00253
16 + + + + 0.00163
a. Estimate the factor effects. Plot the effect estimates on a normal probability plot and select a tentative
model.
A = (1/(n·2^(k−1)))[a + ab + ac + abc + ad + abd + acd + abcd − (1) − b − c − d − bc − bd − cd − bcd]
  = (1/8)(−0.00370) = −0.000463

Similarly, the other effects can be estimated:

A = −0.000463, B = −0.000878, C = −0.000508, D = −0.000032
AB = −0.000600, AC = 0.000070, AD = −0.000015, BC = 0.000065, BD = 0.000065, CD = 0
ABC = 0.000082, ABD = 0.000008, ACD = 0.000033, BCD = −0.000013, ABCD = −0.000015
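All of these estimates follow the same contrast pattern, so they are easy to reproduce in a few lines of Python (a sketch; the response vector is Table 8E.4 in standard order):

import numpy as np
from itertools import product

y = np.array([0.00340, 0.00362, 0.00301, 0.00182, 0.00280, 0.00290,
              0.00252, 0.00160, 0.00336, 0.00344, 0.00308, 0.00184,
              0.00269, 0.00284, 0.00253, 0.00163])
# Standard-order +/-1 levels: A changes fastest, then B, C, D.
levels = np.array(list(product([-1, 1], repeat=4)))[:, ::-1]
A, B, C, D = levels.T

# Each effect is (contrast . y) / (n * 2**(k-1)) with n = 1, k = 4.
for name, col in [("A", A), ("B", B), ("C", C), ("D", D), ("AB", A * B)]:
    print(f"{name:>2s}: {col @ y / 8: .6f}")
# A: -0.000463, B: -0.000878, C: -0.000508, D: -0.000032, AB: -0.000600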
[Plot of the standardized effects; the points for A, B, C, and AB fall well away from the reference line.]
Factors A, B, C, and AB appear to be active. We select these variables for the preliminary model.
b. Fit the model identified in part (a) and analyze the residuals. Is there any indication of model inadequacy?
[Model summary and coded coefficients for the model containing A, B, C, and AB.]

[Normal probability plot of the residuals and plot of residuals versus fitted values.]
The residual plots indicate that the normality assumption may not be reasonable.
c. Repeat the analysis from parts (a) and (b) using 1/y as the response variable. Is there an indication that the
transformation has been useful?
To add the transformed variable to the worksheet, select MTB > Calc > Calculator and enter the appropriate equation into the expression box. Then select:
MTB > Stat > DOE > Factorial > Analyze Factorial Design
Use the transformed variable as the response variable. Since this is a 2⁴ factorial design with one replicate, we combine the higher-order interaction terms as an estimate of error and leave the main effects and all two-factor interactions.
[Plot of the standardized effects for the transformed response; the points for A, B, C, AB, and BD fall away from the reference line.]
With the transformed variable, effects A, B, C, AB, and BD appear to be active. Note that none of the
higher order interactions appear to be significant. We include in the model: A, B, C, D, AB, and BD. We
include D in the model to maintain hierarchy.
[Analysis of variance for the model containing A, B, C, D, AB, and BD.]

[Normal probability plot of the residuals and plot of residuals versus fitted values for the transformed response.]
The normality assumption is reasonable. The constant variance assumption also seems reasonable.
d. Fit the model in terms of the coded variables that you think can be used to provide the best predictions of
the surface roughness. Convert this prediction equation into a model in the natural variables.
1/y = 397.807 + 51.606x₁ + 74.744x₂ + 34.244x₃ + 0.828x₄ + 58.699x₁x₂ − 4.152x₂x₄

In the natural variables (with the cooler indicator remaining coded −1 = no, +1 = yes):

1/y = 397.807 + 51.606((angle − 13.5)/1.5) + 74.744((viscosity − 350)/50) + 34.244((feed rate − 12.5)/2.5) + 0.828·cooler + 58.699((angle − 13.5)/1.5)((viscosity − 350)/50) − 4.152((viscosity − 350)/50)·cooler

1/y = 2937.0 − 239.53·angle − 9.071·viscosity + 13.698·feed rate + 29.90·cooler + 0.7827·angle·viscosity − 0.0830·viscosity·cooler
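A short Python sketch can confirm that the coded and natural forms agree (the coefficients are those fitted above; the helper names are illustrative only):

def inv_y_coded(x1, x2, x3, x4):
    return (397.807 + 51.606 * x1 + 74.744 * x2 + 34.244 * x3
            + 0.828 * x4 + 58.699 * x1 * x2 - 4.152 * x2 * x4)

def inv_y_natural(angle, viscosity, feed_rate, cooler):
    # Apply the codings x1 = (angle - 13.5)/1.5, x2 = (viscosity - 350)/50,
    # x3 = (feed_rate - 12.5)/2.5; cooler is already coded -1/+1.
    x1 = (angle - 13.5) / 1.5
    x2 = (viscosity - 350) / 50
    x3 = (feed_rate - 12.5) / 2.5
    return inv_y_coded(x1, x2, x3, cooler)

# Run 1 of Table 8E.4 (all factors low): both forms give about 290.9,
# close to the observed 1/0.00340 = 294.1.
print(inv_y_natural(12, 300, 10, -1), inv_y_coded(-1, -1, -1, -1))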
8.15. An experiment was run to study the effect of two factors, time and temperature, on the inorganic impurity
levels in paper pulp. The results of this experiment are shown in Table 8E.5:
x1     x2     y
−1     −1     210
 1     −1      95
−1      1     218
 1      1     100
−1.5    0     225
 1.5    0      50
 0     −1.5   175
 0      1.5   180
 0      0     145
 0      0     175
 0      0     158
 0      0     166
This design is a CCD with k = 2 and α = 1.5. The design is not rotatable (a rotatable CCD with k = 2 would have α = √2 ≈ 1.414).
b. Fit a quadratic model to the response, using the method of least squares.
Enter the factor levels and response data into a MINITAB worksheet, including a column indicating
whether a run is a center point run (1 = not center point, 0 = center point). Then define the experiment
using Stat > DOE > Response Surface > Define Custom Response Surface Design. Select
Stat > DOE > Response Surface > Analyze Response Surface Design. Select “Terms” and
verify that all main effects, two-factor interactions, and quadratic terms are selected.
The fitted model is ŷ = 160.9 − 58.3x₁ + 2.4x₂ − 10.9x₁² + 6x₂² − 0.75x₁x₂. Notice that x₂ and x₁x₂ are not significant at α = 0.10.
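The same least-squares fit is easy to reproduce outside MINITAB; a minimal numpy sketch is:

import numpy as np

# Factor levels and responses from Table 8E.5 (rows in the order listed).
x1 = np.array([-1, 1, -1, 1, -1.5, 1.5, 0, 0, 0, 0, 0, 0], dtype=float)
x2 = np.array([-1, -1, 1, 1, 0, 0, -1.5, 1.5, 0, 0, 0, 0], dtype=float)
y  = np.array([210, 95, 218, 100, 225, 50, 175, 180, 145, 175, 158, 166],
              dtype=float)

# Full quadratic model: intercept, linear, pure quadratic, interaction.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["const", "x1", "x2", "x1^2", "x2^2", "x1*x2"], beta):
    print(f"{name:>6s}: {b:8.3f}")
# Approximately 160.9, -58.3, 2.4, -10.9, 6.0, -0.75, matching the model above.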
c. Construct the fitted impurity response surface. What values of x 1 and x 2 would you recommend if you
wanted to minimize the impurity level?
[Contour and surface plots of the fitted impurity response versus x1 and x2. A flag on the contour plot marks x1 = 1.49384, x2 = −0.217615, ŷ = 49.6101.]
From visual examination of the contour and surface plots, it appears that minimum impurity can be achieved by setting x₁ (time) = +1.5 and letting x₂ (temperature) range from −1.5 to +1.5. The wide acceptable range for x₂ agrees with the ANOVA results indicating that it is statistically insignificant (p-value = 0.471). The level for temperature could be established based on other considerations, such as cost. A flag is planted at one option on the contour plot above.
d. Suppose that x₁ = (temp − 750)/50 and x₂ = (time − 30)/15, where temperature is in °C and time is in hours. Find the optimum operating conditions in terms of the natural variables temperature and time.

With the recommendation from part (c), x₁ = +1.5 corresponds to temp = 750 + 50(1.5) = 825°C; since x₂ is not significant, time can be set anywhere in the experimental range, 30 ± 15(1.5), i.e., from 7.5 to 52.5 hours.
8.16. An article in Rubber Chemistry and Technology (Vol. 47, 1974, pp. 825–836)
describes an experiment that studies the relationship of the Mooney viscosity of
rubber to several variables, including silica filler (parts per hundred) and oil filler
(parts per hundred). Some of the data from this experiment are shown in Table 8E.6,
where x₁ = (silica − 60)/15 and x₂ = (oil − 21)/1.5.
x1 x2 y
-1 -1 13.71
1 -1 14.15
-1 1 12.87
1 1 13.53
-1.4 0 12.99
1.4 0 13.89
0 -1.4 14.16
0 1.4 12.9
0 0 13.75
0 0 13.66
0 0 13.86
0 0 13.63
0 0 13.74
b. Fit a quadratic model to these data. What values of x 1 and x 2 will maximize the
Mooney viscosity?
Since the standard order is provided, one approach to solving this exercise is to create a two-factor response
surface design in MINITAB, then enter the data. Select Stat > DOE > Response Surface > Create
Response Surface Design. Leave the design type as a 2-factor, central composite design. Select
“Designs”, highlight the design with five center points (13 runs), and enter a custom alpha value of exactly
1.4 (the rotatable design has α = 1.41421). The worksheet is in run order; to change to standard order (and
ease data entry) select Stat > DOE > Display Design and choose standard order. To analyze the
experiment, select Stat > DOE > Response Surface > Analyze Response Surface Design.
Select “Terms” and verify that a full quadratic model (A, B, A2, B2, AB) is selected.
The fitted quadratic model in coded units is ŷ = 13.7273 + 0.2980x₁ − 0.4071x₂ + 0.0550x₁x₂ − 0.1249x₁² − 0.0790x₂².
Values of x 1 and x 2maximizing the Mooney viscosity can be found from visual examination of the contour
and surface plots, or using MINITAB’s Response Optimizer.
[Contour and surface plots of Mooney viscosity versus x1 and x2.]
From the plots and the optimizer, setting x 1 in a range from 0 to +1.4 and setting x 2 between –1 and –1.4
will maximize viscosity.
8.17. In their book Empirical Model Building and Response Surfaces (John Wiley, 1987), G. E. P. Box and N. R.
Draper describe an experiment with three factors. The data shown in Table 8E.7 are a variation of the
original experiment on p. 247 of their book. Suppose that these data were collected in a semiconductor
manufacturing process.
x1 x2 x3 y1 y2
-1 -1 -1 24.00 12.49
0 -1 -1 120.33 8.39
1 -1 -1 213.67 42.83
-1 0 -1 86.00 3.46
0 0 -1 136.63 80.41
1 0 -1 340.67 16.17
-1 1 -1 112.33 27.57
0 1 -1 256.33 4.62
1 1 -1 271.67 23.63
-1 -1 0 81.00 0.00
0 -1 0 101.67 17.67
1 -1 0 357.00 32.91
-1 0 0 171.33 15.01
0 0 0 372.00 0.00
1 0 0 501.67 92.50
-1 1 0 264.00 63.50
0 1 0 427.00 88.61
1 1 0 730.67 21.08
-1 -1 1 220.67 133.82
0 -1 1 239.67 23.46
1 -1 1 422.00 18.52
-1 0 1 199.00 29.44
0 0 1 485.33 44.67
1 0 1 673.67 158.21
-1 1 1 176.67 55.51
0 1 1 501.00 138.94
1 1 1 1010.00 142.45
a. The response y 1is the average of three readings on resistivity for a single wafer. Fit a quadratic model to
this response.
Enter the data into the worksheet. Then define the design as a response surface design by going to Stat >
DOE > Response Surface > Define Custom Response Surface Design
Select the three factors. To analyze the experiment, select Stat > DOE > Response Surface >
Analyze Response Surface Design. Select y 1 as the response variable. Select “Terms” and verify
that a full quadratic model (A, B, C, A2, B2, C2, AB, AC, BC) is selected.
[Coded coefficients table for the full quadratic model.]

ŷ = 327.6 + 177.0x₁ + 109.4x₂ + 131.5x₃ + 32.0x₁² − 22.4x₂² − 29.1x₃² + 66.0x₁x₂ + 75.5x₁x₃ + 43.6x₂x₃
b. The response y 2is the standard deviation of the three resistivity measurements. Fit a first-order model to
this response.
To analyze the experiment, select Stat > DOE > Response Surface > Analyze Response Surface
Design. Select y 2 as the response variable. Select “Terms” and change model to Linear, (A, B, C).
Response Surface Regression: y2 versus x1, x2, x3

[Analysis of variance and coded coefficients tables for the first-order model.]
c. Where would you recommend that we set x₁, x₂, and x₃ if the objective is to hold mean resistivity at 500 and minimize the standard deviation?
Using the MINITAB response optimizer, we can determine where to set x₁, x₂, and x₃.
MTB > Stat > DOE > Response Surface > Response Optimizer
Select Minimize for y 2, since we want to minimize the standard deviation. Select Target for y 1 and enter
500. Click “View Model” to ensure the reduced models from parts (a) and (b) are correct. Click OK.
[Response optimizer output: x1 = 1.0, x2 = 0.9880, x3 = −0.6600; composite desirability D = 0.9047; y2 is minimized at 28.7275 (d = 0.81842); y1 = 500.0002 at the target of 500 (d = 1.0000).]
Setting x₁ = 1, x₂ between 0.95 and 1.0, and x₃ = −0.66 minimizes the standard deviation while holding the mean resistivity essentially at the target of 500.
8.18. The data shown in Table 8E.8 were collected in an experiment to optimize crystal growth as a function of three variables x₁, x₂, and x₃. Large values of y (yield in grams) are desirable. Fit a second-order model and analyze the fitted surface. Under what set of conditions is maximum growth achieved?
x1 x2 x3 y
-1 -1 -1 66
-1 -1 1 70
-1 1 -1 78
-1 1 1 60
1 -1 -1 80
1 -1 1 70
1 1 -1 100
1 1 1 75
-1.682 0 0 65
1.682 0 0 82
0 -1.682 0 68
0 1.682 0 63
0 0 -1.682 100
0 0 1.682 80
0 0 0 83
0 0 0 90
0 0 0 87
0 0 0 88
0 0 0 91
0 0 0 85
Select Stat > DOE > Response Surface > Create Response Surface Design. Change to a 3
factor central composite design. Select “Designs”, highlight the design with six center points (20 runs).
Leave α =1.682. Enter the data into the created worksheet. To analyze the experiment, select Stat >
DOE > Response Surface > Analyze Response Surface Design. Select “Terms” and verify that
a full quadratic model (A, B, C, A2, B2, C2, AB, AC, BC) is selected.
[Analysis of variance and coded coefficients tables.]

ŷ = 87.36 + 5.83x₁ + 1.36x₂ − 6.05x₃ − 5.058x₁² − 7.886x₂² + 0.776x₃² + 2.88x₁x₂ − 2.62x₁x₃ − 4.63x₂x₃
where x₂ and x₃² are not significant at the α = 0.05 level. To maintain hierarchy in the model, we leave x₂ in the model since interaction effects with x₂ are significant.
Consider contour plots of the response. Stat > DOE > Response Surface > Contour Plot. Select
“Generate plots for all pairs of continuous variables”. Click “View Model” to ensure the correct
model is considered.
[Contour plots of y for each pair of factors (B×A, C×A, C×B), with the remaining factor held at 0.]
Using the MINITAB Optimizer, we can determine the optimal levels of the factors to maximize y. From the plots and the optimizer, setting x₁ in a range from +1 to +1.682, setting x₂ between 0.5 and 1.2, and setting x₃ to its smallest value will maximize the yield.
8.19. Reconsider the crystal growth experiment from Exercise 8.18. Suppose that x₃ = z is now a noise variable, and that the modified experimental design shown in Table 8E.9 has been conducted. The experimenters want the growth rate to be as large as possible, but they also want the variability transmitted from z to be small. Under what set of conditions is growth greater than 90 with minimum variability achieved?
x1 x2 z y
-1 -1 -1 66
-1 -1 1 70
-1 1 -1 78
-1 1 1 60
1 -1 -1 80
1 -1 1 70
1 1 -1 100
1 1 1 75
-1.682 0 0 65
1.682 0 0 82
0 -1.682 0 68
0 1.682 0 63
0 0 0 83
0 0 0 90
0 0 0 87
0 0 0 88
0 0 0 91
0 0 0 85
Enter the data in the worksheet. Stat > DOE > Response Surface > Define Custom Response Surface Design. Enter the three variables x₁, x₂, z. To analyze the experiment, go to Stat > DOE > Response Surface > Analyze Response Surface Design. Select the response variable y. Select “Terms”. For robust parameter design, include the main effects of all variables and all two-way interactions between variables.
[Analysis of variance and coded coefficients tables.]

[Normal probability plot of the residuals and plot of residuals versus fitted values.]
ŷ = 87.361 + 5.8275x₁ + 1.3611x₂ − 4.8642x₁² − 7.6919x₂² + 2.8750x₁x₂ − 6.1250z − 2.6250x₁z − 4.6250x₂z
where the main effect of x₂ is not statistically significant, but we leave it in the model to maintain hierarchy, since interaction effects with x₂ are significant at α = 0.10.
Now specify the mean and variance model to determine when the growth is greater than 90 with minimum
variability.
y = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂ + γ₁z₁ + δ₁₁x₁z₁ + δ₂₁x₂z₁ + ε

E[y] = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂
Replace the unknown regression coefficients in the mean and variance models by their estimates, and replace σ² in the variance model by the residual mean square found when fitting the response model. Assume that the low and high values of the noise variable z have been run at one standard deviation on either side of its typical or average value.
σ_z² = 1, σ̂² = MSE = 15.072

Var(y) = (−6.1250 − 2.6250x₁ − 4.6250x₂)²·σ_z² + σ²

Var(y) = 52.5876 + 32.1563x₁ + 6.8906x₁² + 56.6563x₂ + 21.3906x₂² + 24.2813x₁x₂
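The transmitted-variance expression is just the squared slope of y in z plus the residual variance; a small Python sketch makes this explicit (coefficients from the fitted model above):

gamma, d11, d21 = -6.1250, -2.6250, -4.6250   # z, x1*z, x2*z coefficients
sigma2 = 15.072                                # residual mean square

def var_y(x1, x2, sigma_z=1.0):
    # Slope of y with respect to the noise variable z at (x1, x2).
    slope = gamma + d11 * x1 + d21 * x2
    return slope**2 * sigma_z**2 + sigma2

print(var_y(0.0, 0.0))   # 52.59 at the design center
print(var_y(0.5, 0.2))   # variance near the recommended operating point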
MTB > Stat > DOE > Response Surface > Surface Plot, Contour Plot
[Surface plot of growth versus x1 and x2, and contour plot of growth versus x1 and x2.]
To reach target growth of 90 and minimum variability, x 1 values near 0.50 and x 2 values near 0.2 are appropriate.
CHAPTER 9
Reliability
Learning Objectives
Exercises
9.1. An electronic component in a dental x-ray system has an exponential time to failure distribution with
λ=0.00004 . What are the mean and variance of the time to failure? What is the reliability at 30,000
hours?
t ~ exp(λ = 0.00004)

Mean time to failure: E(t) = 1/λ = 1/0.00004 = 25,000 hours

Variance of the time to failure: V(t) = 1/λ² = 1/0.00004² = 6.25×10⁸ hours²

Reliability at 30,000 hours:

R(30,000) = P(t > 30,000) = e^(−λt) = e^(−0.00004×30,000) = e^(−1.2) = 0.3012
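These quantities are one-liners in any language; a minimal Python sketch is:

import math

def exp_summary(lam: float, t: float):
    # Exponential lifetime: mean 1/lam, variance 1/lam**2, R(t) = exp(-lam*t).
    return 1 / lam, 1 / lam**2, math.exp(-lam * t)

mean, var, rel = exp_summary(0.00004, 30_000)
print(f"MTTF = {mean:,.0f} h, Var = {var:.3e} h^2, R(30,000) = {rel:.4f}")
# MTTF = 25,000 h, Var = 6.250e+08 h^2, R(30,000) = 0.3012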
9.3. A component in an automobile door latch has an exponential time to failure distribution with λ=0.00125
cycles. What are the mean and variance of the number of cycles to failure? What is the reliability at 8,000
cycles?
t ~ exp(λ = 0.00125)

Mean number of cycles to failure: E(t) = 1/λ = 1/0.00125 = 800 cycles

Variance of the number of cycles to failure: V(t) = 1/λ² = 1/0.00125² = 640,000 cycles²

Reliability at 8,000 cycles:

R(8,000) = P(t > 8,000) = e^(−0.00125×8,000) = e^(−10) ≈ 0.0000454

9.4. For a component with an exponential time to failure distribution, the reliability at 10,000 hours must be at least 0.95. The largest acceptable failure rate λ and the corresponding MTTF are found as follows:

R(10,000) ≥ 0.95 ⟹ P(t > 10,000) ≥ 0.95 ⟹ e^(−10,000λ) ≥ 0.95
⟹ −10,000λ ≥ ln(0.95)
⟹ λ ≤ −ln(0.95)/10,000
⟹ λ ≤ 5.129×10⁻⁶

Mean time to failure: 1/λ = 1/(5.129×10⁻⁶) ≈ 194,957 hours
NOTE: The mean time to failure is sensitive to the number of significant digits used in
intermediate calculations.
9.5. A synthetic fiber is stressed by repeatedly applying a particular load. Suppose that the number of cycles to
failure has an exponential distribution with mean 3,000 cycles. What is the probability that the fiber will
break at 1,500 cycles? What is the probability that the fiber will break at 2,500 cycles?
t ~ exp(λ = 1/3,000)

Probability that the fiber breaks by 1,500 cycles:

P(t ≤ 1,500) = 1 − R(1,500) = 1 − e^(−1,500/3,000) = 1 − e^(−1/2) ≈ 0.3935

Probability that the fiber breaks by 2,500 cycles:

P(t ≤ 2,500) = 1 − R(2,500) = 1 − e^(−2,500/3,000) = 1 − e^(−5/6) ≈ 0.5654
9.6. An electronic component has an exponential time to failure distribution with λ=0.0002 hours. What are
the mean and variance of the time to failure? What is the reliability at 7,000 hours?
t ~ exp(λ = 0.0002)

Mean time to failure: E(t) = 1/λ = 1/0.0002 = 5,000 hours

Variance of the time to failure: V(t) = 1/λ² = 1/(0.0002)² = 25,000,000 hours²

Reliability at 7,000 hours: R(7,000) = P(t > 7,000) = e^(−λt) = e^(−0.0002×7,000) = e^(−1.4) = 0.2466
9.7. The life in hours of a mechanical assembly has a Weibull distribution with δ = 5,000 and β = 1/2. Find the
mean and variance of the time to failure. What is the reliability of the assembly at 4,000 hours? What is
the reliability at 7,500 hours?
Mean time to failure: E(t) = δΓ(1 + 1/β) = 5,000·Γ(3) = 10,000 hours

Variance of the time to failure: V(t) = δ²Γ(1 + 2/β) − [δΓ(1 + 1/β)]² = 5,000²·Γ(5) − 10,000² = 5×10⁸ hours²

Reliability at 4,000 hours: R(4,000) = e^(−(4,000/5,000)^0.5) ≈ 0.4088

Reliability at 7,500 hours: R(7,500) = e^(−(7,500/5,000)^0.5) ≈ 0.2938
9.8. The life in hours of a subsystem in an appliance has a Weibull distribution with δ = 7,000 cycles and β = 1.5. What is the reliability of the appliance subsystem at 10,000 hours?

R(10,000) = e^(−(10,000/7,000)^1.5) = e^(−1.7075) ≈ 0.1813
9.9. Suppose that a lifetime of a component has a Weibull distribution with shape parameter β=2. If the
system should have reliability 0.99 at 7,500 hours of use, what value of the scale parameter is required?
R(7,500) = e^(−(7,500/δ)^β) = e^(−(7,500/δ)²) = 0.99

−(7,500/δ)² = ln(0.99)

δ = √(7,500²/(−ln(0.99))) = 74,811.9502
9.10. A component has a Weibull time-to-failure distribution. The value of the scale parameter is δ=200.
Calculate the reliability at 250 hours for the following values of the shape parameter:
β = 0.5, 1, 1.5, 2, and 2.5. For the fixed value of the scale parameter, what impact does changing the shape parameter have on the reliability?
t ~ Weibull(δ = 200, β), so R(250) = e^(−(250/200)^β):

β = 0.5: R(250) ≈ 0.3270
β = 1:   R(250) ≈ 0.2865
β = 1.5: R(250) ≈ 0.2472
β = 2:   R(250) ≈ 0.2096
β = 2.5: R(250) ≈ 0.1743

Because t = 250 exceeds the scale parameter δ = 200, increasing the shape parameter decreases the reliability at 250 hours.
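A short Python sketch reproduces these values:

import math

delta, t = 200, 250
for beta in [0.5, 1.0, 1.5, 2.0, 2.5]:
    R = math.exp(-((t / delta) ** beta))   # Weibull: R(t) = exp(-(t/delta)**beta)
    print(f"beta = {beta:3.1f}: R(250) = {R:.4f}")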
9.11. A component has a Weibull time-to-failure distribution. The value of the scale parameter is δ=500 .
Suppose that the shape parameter is β=3.4 . Find the mean and variance of the time to failure. What is
the reliability at 600 hours?
Mean time to failure: E(t) = δΓ(1 + 1/β) = 500·Γ(1 + 1/3.4) = 449.1910 hours

Variance of the time to failure: V(t) = δ²Γ(1 + 2/β) − [δΓ(1 + 1/β)]² = 21,288.5684 hours²

Reliability at 600 hours:

R(600) = P(t > 600) = e^(−(600/500)^3.4) ≈ 0.1559
9.12. Continuation of Exercise 9.11. If a Weibull distribution has a shape parameter of β=3.4 , it can be
reasonably well approximated by a normal distribution with the same mean and variance. For the situation
of Exercise 9.11, calculate the reliability at 600 hours using the normal distribution. How close is this to the
reliability value calculated from the Weibull distribution in Exercise 9.11?
X ~ Normal(μ = 449.1910, σ² = 21,288.5684)

R(600) = P(X > 600) = 1 − Φ((600 − 449.1910)/√21,288.5684) = 1 − Φ(1.034) ≈ 0.1507

This is quite close to the reliability of 0.1559 computed from the Weibull distribution in Exercise 9.11.
9.13. Suppose that a unit had a Weibull time-to-failure distribution. The value of the scale parameter is δ=250.
Graph the hazard function for the following values of the shape parameter: β=0.5 , 1 ,1.5 ,2 , 2.5. For
the fixed values of the scale parameter, what impact does changing the shape parameter have on the hazard
function?
h(t) = f(t)/R(t) = (β/δ)(t/δ)^(β−1) = (β/250)(t/250)^(β−1)
[Plot of the hazard function h(t) for β = 0.5, 1, 1.5, 2, and 2.5 with δ = 250.]

For β < 1 the hazard function decreases with time, for β = 1 it is constant (the exponential case), and for β > 1 it increases; the shape parameter therefore determines whether the unit shows early-life, constant, or wear-out failure behavior.
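The hazard curves are easy to regenerate; a matplotlib sketch is:

import numpy as np
import matplotlib.pyplot as plt

delta = 250
t = np.linspace(1, 1000, 500)
for beta in [0.5, 1.0, 1.5, 2.0, 2.5]:
    h = (beta / delta) * (t / delta) ** (beta - 1)   # Weibull hazard
    plt.plot(t, h, label=f"beta = {beta}")
plt.xlabel("t"); plt.ylabel("h(t)"); plt.legend(); plt.show()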
9.14. Suppose that you have 15 observations on the number of hours to failure of a unit. The observations are
442, 381, 960, 571, 1861, 162, 334, 825, 2562, 324, 312, 368, 367, 968, and 15. Is the exponential
distribution a reasonable choice for the time to failure distribution? Estimate MTTF.
[Exponential probability plot of the failure times.]
Estimate of MTTF: (1/15)·Σtᵢ = (442 + ⋯ + 15)/15 ≈ 697 hours
The points do not form a straight line in the probability plot, indicating that the assumption that the times to
failure follow an exponential distribution is not reasonable.
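The probability plot and MTTF estimate can be reproduced with scipy and matplotlib (a sketch, not the MINITAB output referenced above):

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

t = np.array([442, 381, 960, 571, 1861, 162, 334, 825, 2562, 324,
              312, 368, 367, 968, 15], dtype=float)
print(f"MTTF estimate = {t.mean():.1f} hours")  # about 697

# An exponential probability plot; curvature argues against the model.
stats.probplot(t, dist="expon", plot=plt)
plt.show()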
9.15. Suppose that you have 20 observations on the number of hours to failure of a unit. The observations are 259, 53, 536, 1320, 341, 667, 538, 1713, 428, 152, 29, 445, 677, 637, 696, 540, 1392, 192, 1871, and 2469. Is the exponential distribution a reasonable choice for the time to failure distribution? Estimate the MTTF.
[Exponential probability plot of the failure times.]
Estimate of MTTF: (1/20)·Σtᵢ = (259 + ⋯ + 2469)/20 = 747.75 hours
The assumption that the times to failure follow an exponential distribution is reasonable.
9.16. Twenty observations on the time to failure of a system are as follows: 1054, 320, 682, 1440, 1085, 938,
871, 471, 1053, 1103, 780, 665, 1218, 659, 393, 913, 566, 439, 533, and 813. Is the Weibull distribution a
reasonable choice for the time to failure distribution? Estimate the scale and shape parameters.
[Weibull probability plot of the failure times.]
The assumption that the times to failure follow a Weibull distribution is reasonable. The scale and shape parameters can be read from the MINITAB probability plot output.
9.17. Consider the following 10 observations on time to failure: 50, 191, 63, 174, 71, 62, 119, 89, 123, and 175.
Is either the exponential or the Weibull a reasonable choice of the time to failure distribution?
[Exponential and Weibull probability plots of the failure times.]
Based on the probability plots for the exponential and Weibull distribution, the Weibull distribution is a
more adequate fit.
9.18. Fifteen observations on the time to failure of a unit are as follows: 173, 235, 379, 439, 462, 455, 617, 41,
454, 1083, 371, 359, 588, 121, and 1066. Is the Weibull distribution a reasonable choice for the time to
failure distribution? Estimate the scale and shape parameters.
[Weibull probability plot of the failure times.]
The assumption that the times to failure follow a Weibull distribution is reasonable. The scale and shape parameters can be read from the MINITAB probability plot output.
9.19. Consider the following 20 observations on time to failure: 702, 507, 664, 491, 514, 323, 350, 681, 281,
599, 495, 254, 185, 608, 626, 622, 790, 248, 610, and 537. Is either exponential or the Weibull a
reasonable choice of the time to failure distribution?
[Exponential and Weibull probability plots of the failure times.]
Based on the probability plots for the exponential and Weibull distribution, the Weibull distribution is a
more adequate fit.
9.20. Consider the time to failure data in Exercise 9.19. Is the normal distribution a reasonable model for these
data? Why or why not?
[Normal probability plot of the failure times.]
The assumption that the times to failure follow a normal distribution is reasonable.
The points on the normal probability plot are well approximated by a straight line. Note also that in
Exercise 9.19, the estimate for the shape parameter was 3.481. In Exercise 9.12, we saw that if a Weibull
distribution has a shape parameter of β=3.4 , it can be reasonably well approximated by a normal
distribution with the same mean and variance.
9.21. A simple series system is shown in the accompanying figure. The reliability of each component is shown in
the figure. Assuming that the components operate independently, calculate the system reliability.
R_S = ∏ᵢ₌₁⁴ R(Cᵢ) = 0.9995 × 0.9999 × 0.9875 × 0.9980 = 0.9849
9.22. A series system is shown in the accompanying figure. The reliability of each component is shown in the
figure. Assuming that the components operate independently, calculate the system reliability.
R_S = ∏ᵢ₌₁³ R(Cᵢ) = 0.9950 × 0.9999 × 0.9875 = 0.9825
9.23. A simple series system is shown in the accompanying figure. The lifetime of each component is exponentially distributed, and the λ for each component is shown in the figure. Assuming that the components operate independently, calculate the system reliability at 10,000 hours.
t_Cᵢ ~ exp(λᵢ)

R_C1(10,000) = e^(−λ₁t) = e^(−10,000/12,000) = e^(−5/6) = 0.4346
R_C2(10,000) = e^(−λ₂t) = e^(−10,000/15,000) = e^(−2/3) = 0.5134
R_C3(10,000) = e^(−λ₃t) = e^(−10,000/20,000) = e^(−1/2) = 0.6065

R_S(10,000) = ∏ᵢ₌₁³ R_Cᵢ = 0.4346 × 0.5134 × 0.6065 = 0.1353
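A minimal Python sketch for this series computation:

import math

# Independent exponential components in series; MTTFs from the figure.
mttfs = [12_000, 15_000, 20_000]
t = 10_000

R = 1.0
for mttf in mttfs:
    R *= math.exp(-t / mttf)   # series systems multiply reliabilities
print(f"R_S({t:,}) = {R:.4f}")  # about 0.1353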
9.24. A series system has four independent identical components. The reliability for each component is
exponentially distributed. If the reliability of the system at 8,000 hours must be at least 0.999, what MTTF
must be specified for each component in the system?
R_S(8,000) = ∏ᵢ₌₁⁴ R_Cᵢ ≥ 0.999

R_S(8,000) = ∏ᵢ₌₁⁴ e^(−λt) = e^(−4×8,000×λ) ≥ 0.999

λ ≤ −ln(0.999)/32,000 ≈ 3.1266×10⁻⁸

MTTF = 1/λ ≥ 1/(3.1266×10⁻⁸) ≈ 3.1984×10⁷ hours
9.25. Consider the parallel system shown in the accompanying figure. The reliability of each component is
provided in the figure. Assuming the components operate independently, calculate the system reliability.
R_S(t) = 1 − ∏ᵢ₌₁²[1 − Rᵢ(t)] = 1 − (1 − 0.9999)(1 − 0.9850) = 1 − 1.5×10⁻⁶ = 0.9999985
9.26. Consider the parallel system shown in the accompanying figure. The reliability for each component is
shown in the figure. Assuming the components operate independently, calculate the system reliability.
R_S = 1 − ∏ᵢ₌₁³(1 − R_Cᵢ) = 1 − (1 − 0.995)(1 − 0.999)(1 − 0.985) = 1 − 7.5×10⁻⁸ = 0.999999925
9.27. Consider the stand-by system in the accompanying figure. The components have an exponential lifetime
distribution, and the decision switch operates perfectly. The MTTF of each component is 1,000 hours.
Assuming that the components operate independently, calculate the system reliability at 2,000 hours.
R_S(t) = e^(−t/1,000)·Σᵢ₌₀² (t/1,000)ⁱ/i!

R_S(t) = e^(−t/1,000)·(1 + t/1,000 + t²/2,000,000)

R_S(2,000) = e^(−2,000/1,000)·(1 + 2,000/1,000 + 2,000²/2,000,000)

R_S(2,000) = 0.6767
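A standby system with a perfect switch has a Poisson-sum reliability; a short Python sketch covers both this exercise and the next:

import math

def standby_reliability(n: int, lam: float, t: float) -> float:
    # n identical exponential components, perfect decision switch:
    # R_s(t) = exp(-lam*t) * sum_{i=0}^{n-1} (lam*t)**i / i!
    return math.exp(-lam * t) * sum((lam * t) ** i / math.factorial(i)
                                    for i in range(n))

print(standby_reliability(3, 1 / 1000, 2000))  # 0.6767 (Exercise 9.27)
print(standby_reliability(3, 1 / 500, 1500))   # 0.4232 (Exercise 9.28)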
9.28. Consider the stand-by system shown in the accompanying figure. The components have an exponential
lifetime distribution, and the decision switch operates perfectly. The MTTF of each component is 500
hours. Assuming that the components operate independently, calculate the system reliability at 1,500
hours.
n = 3, λ = 1/500

R_S(t) = e^(−λt)·Σᵢ₌₀^(n−1) (λt)ⁱ/i!

R_S(t) = e^(−t/500)·Σᵢ₌₀² (t/500)ⁱ/i!

R_S(t) = e^(−t/500)·(1 + t/500 + t²/500,000)

R_S(1,500) = e^(−1,500/500)·(1 + 1,500/500 + 1,500²/500,000)

R_S(1,500) = 0.4232
9.29. Consider the system shown in the accompanying figure. The reliability of each component is shown in the
figure. Assuming that the components operate independently, calculate the system reliability.
Reliability of Subsystem 1:

R_SS1 = 1 − ∏ᵢ₌₁²(1 − R_Cᵢ) = 1 − (1 − 0.999)(1 − 0.985) = 0.999985

Reliability of Subsystem 2:

R_SS2 = 1 − ∏ᵢ₌₁³(1 − R_Cᵢ) = 1 − (1 − 0.995)(1 − 0.980)(1 − 0.975) = 0.9999975

Reliability of Subsystem 3:

R_SS3 = 0.999

Reliability of the System (the three subsystems in series):

R_S = R_SS1 × R_SS2 × R_SS3 = 0.999985 × 0.9999975 × 0.999 = 0.9990
9.30. Consider the system shown in the accompanying figure. The reliability of each component is provided in
the figure. Assuming that the components operate independently, calculate the system reliability.
Reliability of subsystem 1:

R_SS1 = 1 − ∏ᵢ₌₁²(1 − R_Cᵢ) = 1 − (1 − 0.995)² = 0.999975

Reliability of subsystem 2:

R_SS2 = 1 − ∏ᵢ₌₁²(1 − R_Cᵢ) = 1 − (1 − 0.980)(1 − 0.950) = 0.9990

Reliability of subsystem 3 (subsystems 1 and 2 in series):

R_SS3 = ∏ᵢ₌₁² R_SSᵢ = 0.999975 × 0.9990 = 0.998975
System reliability:
9.31. Consider the system shown in the accompanying figure. The reliability of each component is provided in
the figure. Assuming that the components operate independently, calculate the system reliability.
Reliability of Subsystem 1:

R_SS1 = 1 − ∏ᵢ₌₁²(1 − R_Cᵢ) = 1 − (1 − 0.995)(1 − 0.995) = 0.999975

Reliability of Subsystem 2:

R_SS2 = 1 − ∏ᵢ₌₁³(1 − R_Cᵢ) = 1 − (1 − 0.990)(1 − 0.999)(1 − 0.995) = 0.99999995

Reliability of Subsystem 3:

R_SS3 = R_SS1 × R_SS2 = 0.999975 × 0.99999995 = 0.99997495

Reliability of Subsystem 4:

R_SS4 = 1 − (1 − R_SS3)(1 − 0.985) = 1 − (1 − 0.99997495)(1 − 0.985) = 0.9999996

System Reliability:

R_S = R_SS4 × 0.995 = 0.994999
9.32. Suppose that n=20 units are placed on test and that five of them fail at 20, 25, 40, 75, and 100 hours,
respectively. The test is terminated at 100 hours without replacing any of the failed units. Find the failure
rate.
Failure time    Failed units
20              1
25              1
40              1
75              1
100             1
≥ 100           15

h(t) = r/Q = r/(Σᵢ₌₁ʳ tᵢ + (n − r)T) = 5/(20 + 25 + 40 + 75 + 100 + 15×100) = 5/1,760 = 0.0028
9.33. Continuation of Exercise 9.32. For the data in Exercise 9.32, find a 95% confidence interval on the MTTF
and the reliability at 200 hours, assuming that the lifetime has an exponential distribution.
With Q = 1,760 hours and r = 5, the 95% confidence interval on the MTTF is (2Q/χ²₀.₀₂₅,₂ᵣ₊₂, 2Q/χ²₀.₉₇₅,₂ᵣ₊₂) = (3,520/23.34, 3,520/4.40), giving θ̂_l = 150.81 and θ̂_u = 800. The corresponding 95% confidence interval on the reliability at 200 hours is (e^(−200/150.81), e^(−200/800)) = (0.266, 0.779).
NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.
9.34. Suppose that 50 units are tested for 1,000 cycles of use. At the end of the 1,000-cycle test period, 5 of the units have failed. When a unit failed, it was replaced with a new one and the test was continued. Estimate the failure rate.
h(t) = r/Q = r/Σtᵢ = 5/(50 × 1,000) = 10⁻⁴
9.35. Suppose that 50 units are placed on test, and when 3 of them have failed, the test is terminated. The failure
times in hours for the 3 units that failed are 950, 1050, and 1525. Estimate the failure rate for these units.
h(t) = r/Σᵢ₌₁ʳ tᵢ = 3/(950 + 1050 + 1525) = 3/3,525 = 0.000851
9.36. Suppose that n=25 units are placed on test and that two of them fail at 200 and 400 hours, respectively.
The test is terminated at 1,000 hours without replacing any of the failed units. Determine the failure rate.
Failure time    Failed units
200             1
400             1
≥ 1,000         23

h(t) = r/Q = r/(Σᵢ₌₁ʳ tᵢ + (n − r)T) = 2/(200 + 400 + 23×1,000) = 2/23,600 = 8.4746×10⁻⁵
9.37. Suppose that n=25 units are placed on test and that three of them fail at 150, 300, and 500 hours,
respectively. The test is terminated at 2,000 hours, and no failed units are replaced during the test. Estimate
the MTTF and the reliability of these units at 5,000 hours. Assuming that the lifetime has an exponential
distribution, find 95% confidence interval on these quantities.
Q = 150 + 300 + 500 + 22×2,000 = 44,950 hours, so the estimate of the MTTF is θ̂ = Q/r = 44,950/3 ≈ 14,983 hours, and the estimated reliability at 5,000 hours is R̂(5,000) = e^(−5,000/14,983) ≈ 0.716.

The 95% confidence interval on the MTTF is

(2Q/χ²_{α/2, 2r+2}, 2Q/χ²_{1−α/2, 2r+2}) = (2×44,950/χ²₀.₀₂₅,₈, 2×44,950/χ²₀.₉₇₅,₈) = (89,900/17.53, 89,900/2.18) = (5,128.35, 41,238.53)

θ̂_l = 5,128.35, θ̂_u = 41,238.53

The corresponding 95% confidence interval on the reliability at 5,000 hours is (e^(−5,000/5,128.35), e^(−5,000/41,238.53)) = (0.377, 0.886).
NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.
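These chi-square limits are straightforward to compute with scipy (a sketch of the calculation above):

import math
from scipy.stats import chi2

failures = [150, 300, 500]
n, T, r = 25, 2000, len(failures)
Q = sum(failures) + (n - r) * T              # total time on test = 44,950

lower = 2 * Q / chi2.ppf(0.975, 2 * r + 2)   # about 89,900 / 17.53
upper = 2 * Q / chi2.ppf(0.025, 2 * r + 2)   # about 89,900 / 2.18
print(f"MTTF estimate = {Q / r:,.1f} h, 95% CI = ({lower:,.2f}, {upper:,.2f})")

# Reliability bounds at 5,000 hours via R(t) = exp(-t/theta):
print(math.exp(-5000 / lower), math.exp(-5000 / upper))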