Solutions Manual

CHAPTER 1

Introduction to Quality
Learning Objectives

After completing this chapter you should be able to:


1. Define and discuss quality and quality improvement
2. Discuss the different dimensions of quality
3. Discuss the evolution of modern quality improvement methods
4. Discuss the role that variability and statistical methods play in controlling and
improving quality
5. Explain the links between quality and productivity and between quality and cost
6. Discuss product liability

Important Terms and Concepts


Acceptance-sampling; Appraisal costs; Critical-to-quality (CTQ); Dimensions of quality; Fitness for use; Internal and external failure costs; Nonconforming product or service; Prevention costs; Product liability; Quality assurance; Quality characteristics; Quality control and improvement; Quality engineering; Quality of conformance; Quality of design; Quality planning; Specifications; Variability

Exercises
1.1. Why is it difficult to define quality?

Even the American Society for Quality describes “quality” as a subjective term for which each person or
sector has its own definition. Given a large set of customers considering purchasing the same product or
service, it is likely that each customer evaluates it in terms of a completely different set of desirable
characteristics. As a result, it is extremely difficult to come up with a single definition of quality that could
meet the expectations of all customers for all products or services. (For further details refer to page 2)

1.2. Briefly discuss the eight dimensions of quality. Does this improve our understanding of quality?

The eight dimensions of quality are performance, reliability, durability, serviceability, aesthetics, features, perceived quality, and conformance to standards. These components help us evaluate the quality of a product in more specific terms, so yes, they improve our understanding of quality (for details, refer to pages 3-4).

1.3. Select a specific product or service and discuss how the eight dimensions of quality impact its overall
acceptance by consumers.

Answers will vary.


Consider a TV. Customers will buy a certain brand of TV if it meets their requirements. Customers will
evaluate the performance of the TV – is the picture sharp/clear? Does it have the functions the customer
desires? The TV should function without frequent repairs – it should be reliable. The TV should be durable
and last for many years. If the TV needs repair, is it easy for the company to come fix it? The aesthetics
will be evaluated by the customer: Is the TV pleasing to look at? Is it overly bulky? How much room will it
take up? Does the TV have any special features over other brands? Does the TV brand have a good
reputation? Does the TV have any noticeable defects like scratches or marks?

1.4. Is there a difference between quality for a manufactured product and quality for a service? Give some
specific examples.

The service industry has some additional dimensions of quality that can be used to evaluate the quality of a
service. These dimensions include professionalism and attentiveness of the service providers. In terms of
the aesthetics, for a manufactured product, this would be related to the visual appeal of the product. In the
service sector, this is the physical appearance of the facility (For further details refer to page 4).

1.5. Can an understanding of the multidimensional nature of quality lead to improved product design or better
service?

Understanding the multidimensional nature of quality can help with improved product design. By focusing
on several (or all) dimensions of quality, improvements to a product’s design or service can be made. In
addition, recognizing that quality is inversely proportional to variability will also lead to improved product
designs. Reducing the variability of critical features identified from the dimensions of quality will lead to
improved product design or better service.

1.6. What are the internal customers of a business? Why are they important from a quality perspective?

The internal customers of a business are those within the company who receive products or services from other departments in the same company; they include managers and other employees. They are important from a quality perspective because the quality of what is handed off internally directly affects the quality of the product or service ultimately delivered to external customers.

1.7. What are the three primary technical tools used for quality control and improvement?

The three primary statistical technical tools are statistical process control, design of experiments, and acceptance-sampling.

1.8. What is meant by the cost of quality?

Quality costs are the categories of costs that are associated with producing, identifying, avoiding, or
repairing products that do not meet requirements. These categories include prevention costs, appraisal
costs, internal failure costs, and external failure costs.

1.9. Are internal failure costs more or less important than external failure costs?

External failure costs can be very expensive because they result from defective products reaching customers. These costs include returned products/materials, warranty charges, liability costs, and the indirect costs of damaged business reputation or loss of future business. External failure costs can be avoided if all nonconforming units are discovered prior to delivery of the product to the customer (i.e., as internal failures); for this reason external failure costs are generally regarded as the more serious of the two.

1.10. Discuss the statement “Quality is the responsibility of the quality department.”

The responsibility for quality spans the entire organization. Quality improvement must be a total,
company-wide activity in which every organizational unit actively participates. Because quality
improvement activities are so broad, successful efforts require, as an initial step, senior management
commitment. This commitment involves emphasis on the importance of quality, identification of the
respective quality responsibilities of the various organizational units, and explicit accountability for quality
improvement of all managers and employees in the company. Nevertheless, the quality department takes
care of the quality planning and analyses in order to make sure all quality efforts are being successfully
implemented throughout the organization. (For further details refer to pages 26-27)

1.11. Most of the quality management literature states that without top management leadership, quality
improvement will not occur. Do you agree or disagree with this statement? Discuss why.

Top management provides the strategic agenda and goals of quality improvement in a company.
Management can strategically define the quality improvement projects that will most benefit the company.
In addition, they can spread the knowledge and culture of quality throughout the company (For further
details refer to page 26-27).

1.12. Explain why it is necessary to consider variability around the mean or nominal dimension as a measure of
quality.

The primary objective of quality engineering efforts is the systematic reduction of variability in the key
quality characteristics of the product. The introduction of statistical process control will help to stabilize
processes and reduce their variability. However, it is not satisfactory just to meet requirements – further
reduction of variability around the mean or nominal dimension often also leads to better product
performance and enhanced competitive position. (For further details refer to page 17)

1.13. Suppose you had the opportunity to improve quality in a hospital. Which areas of the hospital would you
look to as opportunities for quality improvement? What metrics would you use as measures of quality?

Identifying areas for quality improvement (in any company) should be made with the company’s strategic
goals in mind. You want to make the improvements that will have the biggest impact on the company
(hospital). In a hospital setting, some potential areas of opportunities for quality improvement include clinic
wait times, lab processing times, length of stay in the hospital, surgical outcomes, medication delivery, etc.

1.14. Suppose you had to improve service quality in a bank credit card application and approval process. What
critical-to-quality characteristics would you identify? How could you go about improving this system?

One possible CTQ is the cycle time for the application to be approved. You might start improving the system by identifying potential causes of long cycle times, determining which causes are the most harmful to the system, and then trying to eliminate or reduce them.

1.15. How would quality be defined in the following organization?

a) Health-care facility
Some possible dimensions of quality that may be of interest in a health-care facility:
Performance: Is the appropriate treatment provided?
Reliability: Is the treatment effective?
Durability: Is the treatment done correctly the first time?
Aesthetics: Is the facility clean?
Features: Is the facility able to handle different types of health-related issues?
Perceived Quality: Does the facility have a good reputation?
Responsiveness: Does a patient have to wait a long time for treatment? Is the staff prompt?
Professionalism: Are the nurses/doctors/other staff knowledgeable?
Is the facility affordable?

b) Department store
Some possible dimensions of quality that may be of interest in a department store:
Performance: Does the store have what you need?
Reliability: Does the store consistently have what you need? Are the products in good condition?
Serviceability: Is it easy and/or possible to make returns or exchanges?
Aesthetics: Is the store clean? Does it have an easy to understand layout?
Features: Does the store have specialized products?
Perceived Quality: Does the store have a good reputation?
Responsiveness: Are the staff quick to help find an item? Is the staff helpful?
Professionalism: Do the staff know where items in the store are?

c) Grocery store
Some possible dimensions of quality that may be of interest in a grocery store:
Performance: Does the store have what you need?
Reliability: Does the store consistently have what you need? Are the products in good condition?
Durability: Are there items past the expiration date in the store?
Aesthetics: Is the store clean? Is the layout understandable/logical?
Features: Does the store have specialized products?
Perceived Quality: Does the store have a good reputation?
Responsiveness: Are the staff quick to help find an item? Is the staff helpful?
Professionalism: Do the staff know where items in the store are?

d) University academic department


Some possible dimensions of quality that may be of interest in a university academic department:
Performance: Does the department have a high graduation rate? Retention rate? How well do the
students perform (possible metrics may include GPA, time to find a job upon graduation, starting
salary after graduation)
Reliability: Are classes offered consistently?
Aesthetics: Are the classrooms and department facilities clean? In good shape?
Features: What classes are offered?
Perceived Quality: Does the department have a good reputation (possible metric may include
national rankings)? Does the faculty have a good reputation?
Conformance to standards: How does the department compare to similar departments in other
universities?
Responsiveness: Are faculty and advisors quick to respond to questions from students?
Professionalism: Are faculty knowledgeable? Are they active in research and/or teaching? Are the
faculty/staff/students caring/helpful/knowledgeable?
CHAPTER 2

Managing Aspects of Quality


Learning Objectives

After completing this chapter you should be able to:


1. Describe the quality management philosophies of W. Edwards Deming, Joseph M. Juran, and Armand V.
Feigenbaum
2. Discuss total quality management, six-sigma, the Malcolm Baldrige National Quality Award, and quality
systems and standards
3. Understand the importance of selecting good projects for improvement activities
4. Explain the five steps of DMAIC
5. Know when and when not to use DMAIC

Important Terms and Concepts


Analyze step; Control step; Define step; Design for Six-Sigma (DFSS); DMAIC; Failure modes and effects analysis (FMEA); Improve step; Key process input variables (KPIV); Key process output variables (KPOV); Measure step; Project charter; SIPOC diagram; Six-Sigma; Tollgate

Exercises

2.1. Is Deming's philosophy more or less focused on statistics than Juran's?

Deming's philosophy is more focused on statistics than Juran's. A significant component of Deming's philosophy is the use of statistical techniques to reduce variability. Juran takes a more strategic approach, believing that most quality problems result from ineffective planning for quality. (For further details refer to pages 32-37)

2.2. What is the Juran Trilogy?

The Juran Trilogy is the three components of quality management philosophy: planning, control and
improvement. The planning process involves identifying external customers, determining their needs,
designing products that meet these needs, and planning for quality improvement on a regular basis. Control
is used by the operating forces of the business to ensure that the product meets the customers’
requirements. Improvement aims to achieve performance and quality levels higher than the current levels.

2.3. What is the Malcolm Baldrige National Quality Award? Who is eligible for the award?

The award is an annual award administered by NIST to recognize U.S. organizations for performance
excellence. Awards are given to U.S. organizations in five categories: manufacturing, service, small
business, health care, and education.

2.4. What is a six-sigma process?


A six-sigma process is one that operates under six-sigma quality, i.e. the specification limits are at least six
standard deviations from the mean. This leads to 3.4 ppm defective units in the process given that we
“allow” the mean to shift by as much as 1.5 standard deviations off target.

2.5. Compare and contrast Deming’s and Juran’s philosophies of quality.

Juran’s philosophy of quality focused on a strategic approach to quality management and improvement.
One of Deming’s key philosophies is to reduce variability. Both emphasize planning in quality
improvement: plan, control, and improve are both themes in Deming and Juran’s philosophies. (For further
details refer to pages 32-37)

2.6. What would motivate a company to compete for the MBNQA?

One motivating factor is the prestige of winning the award, since only three awards are given each year in each of the five categories. In addition, the performance improvements made in pursuing the award lead to improved customer satisfaction, product quality, and productivity, all of which improve the overall business.

2.7. Hundreds of companies have won the MBNQA. Collect information on two of the winners. What success
have they had since receiving the awards?

Answers will vary.

2.8. Reconsider the fast-food restaurant considered in the chapter.

a. What results do you obtain if the probability of good quality on each meal component was 0.999?

P(single meal good) = (0.999)^10 = 0.990045
P(all meals good) = (0.990045)^4 = 0.96077
P(all visits during the year good) = (0.96077)^12 = 0.6186

b. What level of quality on each meal component would be required to produce an annual level of quality that
was acceptable to you?

Let p be the level of quality on each meal component. Then

P(all visits during the year good) = ((p^10)^4)^12 = p^480

so the component-level quality needed for a target annual level of quality A is p = A^(1/480):

Annual Level of Quality     p
0.75                        0.999401
0.80                        0.999535
0.90                        0.999781
0.95                        0.999893
0.99                        0.999979
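A quick Python check of this table (a minimal sketch; the exponent 480 = 10 × 4 × 12 comes from the 10 meal components, 4 meals per visit, and 12 visits per year in the chapter example):

for A in [0.75, 0.80, 0.90, 0.95, 0.99]:
    p = A ** (1 / 480)  # invert p**480 = A
    print(f"annual quality {A:.2f} -> component quality p = {p:.6f}")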

2.9. Discuss the similarities between the Shewhart cycle and DMAIC.

Both are cyclic plans/strategies for improvement of a process. There are phases dedicated to planning and
defining the current process. Both are team-oriented aimed at testing, implementing, and maintaining
improvements (For further details refer to pages 35, 55-65).
2.10. Describe a service system that you use. What are the CTQs that are important to you? How do you think
that DMAIC could be applied to this process?

Answers will vary.

Consider eating out at a restaurant. Potential CTQs are time until seated, time for waiter to come to the
table, time until food is served, the taste of the food, the temperature of the food, menu options, friendly
staff, etc. The Define phase would include project definition, goals, and scope of the project. Flow charts
and value stream maps will help define the current process. In the Measure step, data must be collected to
understand the current state. Food temperature or time until guests are served could be recorded, for
example. Guest surveys could help determine current customer opinions on menu options, etc. The Analyze
phase will aim to identify causes for long wait times or improper food temperature. This could be done with
cause-and-effect diagrams and hypothesis testing, for example. In the Improve step, the project team will
propose changes that can be made to improve long wait times. This could include a redesign of the tables
that improves workflow, reassigning waitstaff, introducing time trackers, etc. After a pilot study is done to
test if the improvements are successful, then the control step will consist of maintaining the improvements.
This may include training new waitstaff and monitoring times.

2.11. One of the objectives of the control plan in DMAIC is to “hold the gain.” What does this mean?

Once a project is completed, the improved process is handed off to the process owner. To hold the gain
means to ensure that the gains from the project stay in place and become a part of the daily routine. Once
an improvement has been achieved, you do not want to let it drop back to the level it was at previously.

2.12. Is there a point at which seeing further improvement in quality and productivity isn’t economically
advisable? Discuss your answer.

If the cost to improve the quality of the process/product outweighs the gains of the improvements and the
process is operating at an acceptable quality level, then it is probably not worth the expense to continue
making improvements.

2.13. Explain the importance of tollgates in the DMAIC process.

At a tollgate, a project team presents its work to managers and "owners" of the process. In a six-sigma organization, the tollgate participants would also include the project champion, master black belts, and other black belts not working directly on the project. Tollgates are where the project is reviewed to ensure that it is on track, and they provide a continuing opportunity to evaluate whether the team can successfully complete the project on schedule. Tollgates also present an opportunity to provide guidance regarding the use of specific technical tools and other information about the problem. Organizational problems and other barriers to success, and strategies for dealing with them, are often identified during tollgate reviews. Tollgates are critical to the overall problem-solving process; it is important that these reviews be conducted very soon after the team completes each step.

2.14. An important part of a project is to identify the key process input variables (KPIV) and key process output
variables (KPOV). Suppose that you are the owner/manager of a small business that provides mailboxes,
copy services, and mailing services. Discuss the KPIVs and KPOVs for this business. How do they relate to
possible CTQs?

Potential KPIVs include number of current customers, types of customers, number of employees, current
services offered, and cost of services. Potential KPOVs include store revenue, customer wait times.
Customer CTQs may include wait times and price of services. The number of customers, types of
customers, and services offered affect the price of services. The number of employees and the current
services offered impact wait times for customers.

2.15. An important part of a project is to identify the key process input variables (KPIV) and key process output
variables (KPOV). Suppose that you are in charge of a hospital emergency room. Discuss the KPIVs and
KPOVs for this business. How do they relate to possible customer CTQs?

Potential KPIVs include number of patients, types of patients, number of nurses on shift, number of doctors
on shift, day of the week, time of the day, etc. Potential KPOVs include waiting time until treatment, time
to process laboratory results, time until admission, percentage of patients who leave without being seen, percentage of
patients who return for treatment, etc. Critical to quality characteristics may include wait times and
treatment received.

2.16. Why are designed experiments most useful in the improve step of DMAIC?

Designed experiments can be applied to an actual physical process or to a computer simulation model of that process. They can be used to determine which factors influence the outcome of a process and to find the optimal combination of factor settings. This ability to systematically identify and optimize process changes is what makes them most useful in the improve step of DMAIC.

2.17. Suppose that your business is operating at the three-sigma quality level. If projects have an average
improvement rate of 50% annually, how many years will it take to achieve six-sigma quality?

Assuming the 1.5σ shift in the mean that is customary in six-sigma applications, i.e., X′ ~ N(μ′ = 1.5, σ = 1), the fraction falling within the −3σ and 3σ limits is

P(−3 ≤ X′ ≤ 3) = P(X′ ≤ 3) − P(X′ ≤ −3) = 0.9331896

Then the ppm defective is (1 − 0.9331896) × 10^6 ≈ 66,810.

Using the equation in Example 2.1:

3.4 = 66,810 × (1 − 0.5)^x
x = ln(3.4/66,810)/ln(1 − 0.5) = 14.26

It will take the business about 14 years and 3 months to achieve 6σ quality.
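A small Python sketch of this calculation (the function name is ours, not from the text; scipy's norm.cdf supplies the normal probabilities), which also covers Exercises 2.18 and 2.24:

from math import log
from scipy.stats import norm

def years_to_six_sigma(sigma_level, rate=0.5, shift=1.5):
    # Fraction within +/- sigma_level limits when the mean shifts by `shift` sigma
    within = norm.cdf(sigma_level - shift) - norm.cdf(-sigma_level - shift)
    ppm = (1 - within) * 1e6
    # Solve 3.4 = ppm * (1 - rate)**x for x
    return log(3.4 / ppm) / log(1 - rate)

print(years_to_six_sigma(3.0))  # about 14.26 years (this exercise)
print(years_to_six_sigma(4.5))  # about 8.63 years (Exercise 2.18)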

2.18. Suppose that your business is operating at the four-and-a-half-sigma quality level. If
projects have an average improvement rate of 50% annually, how many years will it
take to achieve six-sigma quality?

Assuming the 1.5σ shift in the mean that is customary in six-sigma applications, i.e., X′ ~ N(μ′ = 1.5, σ = 1), the fraction falling within the −4.5σ and 4.5σ limits is

P(−4.5 ≤ X′ ≤ 4.5) = P(X′ ≤ 4.5) − P(X′ ≤ −4.5) = 0.99865

Then the ppm defective is (1 − 0.99865) × 10^6 ≈ 1,350.

Using the equation in Example 2.1:

3.4 = 1,350 × (1 − 0.5)^x
x = ln(3.4/1,350)/ln(1 − 0.5) = 8.6332

It will take the business nearly 8 years and 7 months to achieve 6σ quality.

2.19. Explain why it is important to separate sources of variability into special or assignable causes and common
or chance causes.

Common or chance causes are due to the inherent variability in the system and cannot generally be
controlled. Special or assignable causes can be discovered and removed, thus reducing the variability in
the overall system. It is important to distinguish between the two types of variability, because the strategy
to reduce variability depends on the source. Chance cause variability can only be removed by changing the
system, while assignable cause variability can be addressed by finding and eliminating the assignable
causes.

2.20. Consider improving service quality in a restaurant. What are the KPIVs and KPOVs that you should
consider? How do these relate to likely customer CTQs?

Potential KPIVs include the number of customers, number of servers on staff, number of cooks on staff, available food, and cost of food. Potential KPOVs include the temperature of the food, taste of the food, wait time until greeted, wait time until food is served, and restaurant revenue. Several of these KPOVs (food temperature, taste, and wait times) are themselves likely customer CTQs, and the KPIVs are the inputs that drive them.

2.21. Suppose that during the analyze phase an obvious solution is discovered. Should that solution be
immediately implemented and the remaining steps of DMAIC abandoned? Discuss your answer.

The answer is generally NO. The advantage of completing the rest of the DMAIC process is that the
solution will be documented, tested, and its applicability to other parts of the business will be evaluated.
An immediate implementation of an “obvious” solution may not lead to an appropriate control plan.
Completing the rest of the DMAIC process can also lead to further refinements and improvements to the
solution. Also the transition plan developed in the control phase includes a validation check several months
after project completion. If the DMAIC process is not completed, there is a danger of the original results
not being sustained.

2.22. What information would you have to collect in order to build a discrete-event simulation model of a retail
branch-banking operation? Discuss how this model could be used to determine appropriate staffing levels
for the bank.

You would need to know the distribution of customer arrivals to the bank, which may depend on the time of day and/or day of the week. You would also need information on the time it takes for an employee to serve a particular customer. You may need to know whether different types of customers enter the bank and whether the time it takes to serve them differs. For comparison, you would want to know how the bank currently operates: how long customers typically wait to be served and how many employees are currently on staff. The simulation model could then be used to evaluate different staffing levels and find one that would potentially reduce the wait time of customers while ensuring that employees do not have too much downtime.

2.23. Suppose that you manage an airline reservation system and want to improve service quality. What are the
important CTQs for this process? What are the KPIVs and KPOVs? How do these relate to the customer
CTQs that you have identified?
Potential critical to quality characteristics for an airline reservation system are ease of use of the system,
number of flight options, time to process a reservation, and flight costs. Potential KPIVs include the flight
information entered by the user. Potential KPOVs include the time to display all possible flight options and
time to complete the reservation.

2.24. It has been estimated that safe aircraft carrier landings operate at about the 5σ level. What level of ppm defective does this imply?

Assuming the 1.5σ shift in the mean that is customary in six-sigma applications, i.e., X′ ~ N(μ′ = 1.5, σ = 1), the fraction within the −5σ and 5σ limits is

P(−5 ≤ X′ ≤ 5) = P(X′ ≤ 5) − P(X′ ≤ −5) = 0.999767

Then the ppm defective is (1 − 0.999767) × 10^6 ≈ 233.
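The same check in Python (a sketch using scipy's normal CDF):

from scipy.stats import norm

# Fraction within +/- 5 sigma with a 1.5-sigma mean shift, converted to ppm
within = norm.cdf(5 - 1.5) - norm.cdf(-5 - 1.5)
print((1 - within) * 1e6)  # about 233 ppm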

2.25. Discuss why, in general, determining what to measure and how to make measurements is more difficult in
service processes and transactional businesses than in manufacturing.

Measurement systems and data on system performance often already exist in manufacturing, and data collection is often automated. In service and transactional businesses, the need for data may not be as obvious, so historical data or measurement systems may not already be in place.

2.26. Suppose that you want to improve the process of loading passengers onto an airplane. Would a discrete-
event simulation model of this process be useful? What data would have to be collected to build this
model?

Testing different plans of loading passengers onto an airplane would be difficult to implement in practice.
A discrete event simulation would be useful in order to simulate different processes before implementing a
pilot study on the new process that leads to the most improvement of loading passengers. Information on
the current process will be needed in order to compare any potential changes to the process. This would
include current boarding procedure and distribution of time for passengers to board. In the simulations for
the potential improvements to the boarding process, estimates on the improvements of boarding times and
other changes would also be necessary.
CHAPTER 3

Tools and Techniques for Quality Control and Improvement


Learning Objectives
After completing this chapter you should be able to:
1. Understand chance and assignable causes of variability in a process
2. Explain the statistical basis of the Shewhart control chart
3. Understand the basic process improvement tools of SPC: the histogram or stem-and-leaf plot, the check
sheet, the Pareto chart, the cause-and-effect diagram, the defect concentration diagram, the scatter diagram,
and the control chart
4. Explain how sensitizing rules and pattern recognition are used in conjunction with control charts

Important Terms and Concepts


Action limits; Assignable causes of variation; Cause-and-effect diagram; Chance causes of variation; Check sheet; Control chart; Control limits; Defect concentration diagram; Designed experiments; Factorial experiment; Flow charts, operations process charts, and value stream mapping; In-control process; Magnificent seven; Out-of-control-action plan (OCAP); Out-of-control process; Pareto chart; Patterns on control charts; Scatter diagram; Shewhart control charts; Statistical control of a process; Statistical process control (SPC); Three-sigma control limits

Exercises

3.1. What are chance and assignable causes of variability? What part do they play in the operation and
interpretation of a Shewhart control chart?

Chance cause variability is the natural variability or “background noise” present in a system. This type of
variability is essentially unavoidable. An assignable cause of variability is an identifiable source of
variation in a process and is not a part of the normal variability of the process. For example, an assignable
cause could be an operator error or defective materials. A process that only has chance causes is said to be
in statistical control. Assignable causes will be flagged in a control chart so that the process can be
investigated.

3.2. Discuss the relationship between a control chart and statistical hypothesis testing.

A control chart is a test of the hypothesis that the process is in a state of statistical control. A point plotting
outside the control limits is equivalent to rejecting the hypothesis that the process is in a state of statistical
control. A point plotting within the control limits is equivalent to failing to reject the hypothesis that the
process is in a state of statistical control.

3.3. What is meant by the statement that a process is in a state of statistical control?

A process can have natural or common cause variation. This type of variation is usually unavoidable. A
system that only has common cause variability is considered to be in a state of statistical control. When
other types of variability outside of the natural variability of the system occur, the process is no longer in
statistical control.
3.4. If a process is in a state of statistical control, does it necessarily follow that all or nearly all of the units of
product produced will be within the specification limits?

No. A process can be in a state of statistical control and still not meet the specification limits, since statistical control reflects only the stability of the process, not its capability. In this case, a redesign or change to the process may be necessary in order for the process to operate within the specification limits.

3.5. Discuss the logic underlying the use of three-sigma limits on Shewhart control charts. How will the chart
respond if narrower limits are chosen? How will it respond if wider limits are chosen?

The use of three-sigma limits is common because, for a normally distributed quality characteristic, this range contains 99.73% of the points. Narrower limits will detect out-of-control conditions more quickly but will increase the Type I error, the probability of false alarms. Wider limits will decrease the Type I error but will lead to more false negatives, since out-of-control behavior inside the wider limits goes undetected.

3.6. Consider the control chart shown here. Does the pattern appear random?

No; the last four samples all plot more than one sigma unit from the center line, which suggests a nonrandom pattern.

3.7. Consider the control charts shown here. Does the pattern appear random?

Yes, the pattern is random.

3.8. Consider the control charts shown here. Does the pattern appear random?
Yes, the pattern is random.

3.9. You consistently arrive at your office about one-half hour later than you would like. Develop a cause-and-
effect diagram that identifies and outlines the possible causes of this event.

MTB > STAT > Quality Tools > Cause-and-Effect

Cause-and-effect diagram for being late for work, with causes grouped by category:

Environment: weather, traffic
Methods: slow route, public transportation timing
Measurements: alarm clock not set early enough
Machines: car malfunction, alarm clock malfunction
Personnel: oversleep, not enough sleep, too long to get ready

Other causes may be identified as well.

3.10. A car has gone out of control during a snowstorm and strikes a tree. Construct a cause-and-effect diagram
that identifies and outlines the possible causes of the accident.

MTB > STAT > Quality Tools > Cause-and-Effect

Cause-and-effect diagram for a car crash in a snowstorm, with causes grouped by category:

Methods: driving too fast
Personnel: user error, on cell phone/distracted, another driver, pedestrian/object in road
Environment: snowstorm, ice on road, road not cleared, poor visibility
Measurements: speedometer inaccurate
Machines: car faulty

Other causes may be identified as well.


3.11. Laboratory glassware shipped from the manufacturer to your plant via an overnight package service has
arrived damaged. Develop a cause-and-effect diagram that identifies and outlines the possible cause of this
event.

MTB > STAT > Quality Tools > Cause-and-Effect

Cause-and-effect diagram for broken glassware, with causes grouped by category:

Measurement: glassware not up to specs
Material: box not appropriate size, defective components, glass broken before packing
Personnel: courier mishandles box, manufacturer mishandles box, shipping company mishandles box
Environment: inclement weather, extreme temperatures
Methods: not packed well

Other causes may be identified as well.

3.12. Construct a cause-and-effect diagram that identifies the possible causes of consistently bad coffee from a
large-capacity office coffee pot.

MTB > STAT > Quality Tools > Cause-and-Effect

Cause-and-effect diagram for bad coffee, with causes grouped by category:

Measurements: too much coffee, not enough coffee, too much water, not enough water
Material: poor quality coffee, bad water quality
Environment: chemicals in room, air quality
Methods: coffee left to go stale
Machines: coffee machine malfunction, filter not changed, pot not cleaned

Other causes may be identified as well.

3.13. Develop a flow chart for the process that you follow every morning from the time you awake until you
arrive at your workplace (or school). Identify the value-added and non-value-added activities.
3.14. Develop a flow chart for the pre-registration process at your university. Identify the value-added and non-
value-added activities.

3.15. Many process improvement tools can be used in our personal lives. Develop a check sheet to record the
"defects" you have in your personal life (such as overeating, being rude, not meeting commitments, missing
classes, etc.). Use the check sheet to keep a record of these "defects" for one month. Use a Pareto chart to
analyze these data. What are the underlying causes of these "defects"?

Answers will vary.


CHAPTER 4

Statistical Inference about Product and Process Quality


Learning Objectives

After completing this chapter you should be able to:


1. Construct and interpret visual data displays, including the stem-and-leaf plot, the histogram, and the box
plot
2. Compute and interpret the sample mean, the sample variance, the sample standard deviation, and the
sample range
3. Explain the concepts of a random variable and a probability distribution
4. Understand and interpret the mean, variance, and standard deviation of a probability distribution
5. Determine probabilities from probability distributions
6. Construct and interpret confidence intervals on a single mean and on the difference in two means
7. Construct and interpret confidence intervals on a single variance and the ratio of two variances
8. Construct and interpret confidence intervals on a single proportion and on the difference in two proportions
9. Understand how the analysis of variance (ANOVA) is used to compare more than two samples

Important Terms and Concepts


Alternative hypothesis; Analysis of variance (ANOVA); Approximations to probability distributions; Binomial distribution; Box plot; Central limit theorem; Checking assumptions for statistical inference procedures; Chi-square distribution; Confidence interval; Confidence intervals on means, known variance(s); Confidence intervals on means, unknown variance(s); Confidence intervals on proportions; Confidence intervals on the variance of a normal distribution; Confidence intervals on the variances of two normal distributions; Continuous distribution; Control limit theorem; Critical region for a test statistic; Descriptive statistics; Discrete distribution; Exponential distribution; F-distribution; Histogram; Hypothesis testing; Interquartile range; Mean of a distribution; Median; Negative binomial distribution; Normal distribution; Normal probability plot; Parameters of a distribution; Percentile; Poisson distribution; Population; Power of a statistical test; Probability distribution; Probability plotting; P-value; Quartile; Random sample; Random variable; Residual analysis; Run chart; Sampling distribution; Standard deviation; Standard normal distribution; Statistic; Statistics; Stem-and-leaf display; t-distribution; Test statistic; Tests of hypotheses on means, known variance(s); Tests of hypotheses on means, unknown variance(s); Tests of hypotheses on proportions; Tests of hypotheses on the variance of a normal distribution; Tests of hypotheses on the variances of two normal distributions; Time series plot; Type I error; Type II error; Uniform distribution; Variance of a distribution
Exercises

4.1. The fill amount of liquid detergent bottles is being analyzed. Twelve bottles, randomly selected from the
process, are measured, and the results are as follows (in fluid ounces): 16.05, 16.03, 16.02, 16.04, 16.05,
16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07.

a. Calculate the sample average.

x̄ = Σxᵢ/n = (16.05 + ⋯ + 16.07)/12 = 16.0292 fluid ounces

b. Calculate the sample standard deviation.

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((16.05 − 16.0292)² + ⋯ + (16.07 − 16.0292)²)/11 ] = 0.0202 fluid ounces
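These summary statistics are easy to verify in Python (a minimal sketch; statistics.stdev uses the n − 1 divisor, matching the formula above, and the same two calls answer Exercises 4.2, 4.3, and 4.5):

from statistics import mean, stdev

volumes = [16.05, 16.03, 16.02, 16.04, 16.05, 16.01,
           16.02, 16.02, 16.03, 16.01, 16.00, 16.07]
print(mean(volumes))   # 16.0292 fluid ounces (rounded)
print(stdev(volumes))  # 0.0202 fluid ounces (sample standard deviation)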

4.2. Monthly sales for tissues in the northwest region are (in thousands) 50.001, 50.002, 49.998, 50.006, 50.005,
49.996, 50.003, 50.004.

a. Calculate the sample average.

x̄ = Σxᵢ/n = (50.001 + ⋯ + 50.004)/8 = 50.002

b. Calculate the sample standard deviation.

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((50.001 − 50.002)² + ⋯ + (50.004 − 50.002)²)/7 ] = 0.0034

4.3. Waiting times for customers in an airline reservation system are (in seconds) 953, 955, 948, 951, 957, 949,
954, 950, 959.

a. Calculate the sample average.

x̄ = Σxᵢ/n = (953 + ⋯ + 959)/9 = 952.8889 seconds

b. Calculate the sample standard deviation.

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((953 − 952.8889)² + ⋯ + (959 − 952.8889)²)/8 ] = 3.7231 seconds

4.4. Consider the waiting time data in Exercise 4.3.

Data: 953, 955, 948, 951, 957, 949, 954, 950, 959

a. Find the sample median of these data.

Ordered Data: 948, 949, 950, 951, 953, 954, 955, 957, 959

Since n = 9 is odd, the median is the observation at position (n + 1)/2 = 5 in the ordered data set.

Median = Q2 = 953 seconds

b. How much could the largest time increase without changing the sample median?

Since the median is the middle value in the ordered data, the largest time could increase infinitely and the
sample median would not change.

4.5. The time to complete an order (in seconds) is as follows: 96, 102, 104, 108, 126, 128, 150, and 156.

a. Calculate the sample average.

x̄ = Σxᵢ/n = (96 + ⋯ + 156)/8 = 121.25 seconds

b. Calculate the sample standard deviation.

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((96 − 121.25)² + ⋯ + (156 − 121.25)²)/7 ] = 22.6258 seconds

4.6. The time to failure in hours of an electronic component subjected to an accelerated life test is shown in
Table 4E.1. To accelerate the failure test, the units were tested at an elevated temperature (read down, then
across).
127 124 121 118
125 123 136 131
131 120 140 125
124 119 137 133
129 128 125 141
121 133 124 125
142 137 128 140
151 124 129 131
160 142 130 129
125 123 122 126
a. Calculate the sample average and standard deviation.

x̄ = Σxᵢ/n = (127 + ⋯ + 126)/40 = 129.97 hours

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((127 − 129.97)² + ⋯ + (126 − 129.97)²)/39 ] = 8.91 hours

b. Construct a histogram.

MTB > Graph > Histogram > Simple

[Histogram of failure time: most values fall between 118 and 142 hours, with a long right tail extending to 160.]

c. Construct a stem-and-leaf plot.

MTB > Graph > Stem-and-Leaf

Stem-and-Leaf Display: failureTime

Stem-and-leaf of failureTime N = 40
Leaf Unit = 1.0

2 11 89
12 12 0112334444
(12) 12 555556788999
16 13 011133
10 13 677
7 14 00122
2 14
2 15 1
1 15
1 16 0

d. Find the sample median and lower and upper quartiles.

Since n = 40 is even, the median is the average of the (n/2)th and (n/2 + 1)th ranked observations, i.e., the average of the 20th and 21st ranked observations:

Q2 = (128 + 128)/2 = 128 hours

The lower quartile has rank (0.25)(40) + 0.5 = 10.5, so it is the average of the 10th and 11th ranked observations:

Q1 = (124 + 124)/2 = 124 hours

The upper quartile has rank (0.75)(40) + 0.5 = 30.5, so it is the average of the 30th and 31st ranked observations:

Q3 = (133 + 136)/2 = 134.5 hours
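A short Python helper (the function name is ours) implementing the rank rule used above, where the q-th quantile sits at rank qn + 0.5 in the ordered data and a half-integer rank averages the two neighboring observations:

def quantile_by_rank(data, q):
    x = sorted(data)
    r = q * len(x) + 0.5  # 1-based rank of the quantile
    lo = int(r) - 1       # 0-based index of the lower neighbor
    return x[lo] if r == int(r) else (x[lo] + x[lo + 1]) / 2

# For the failure-time data: quantile_by_rank(times, 0.25) -> 124,
# quantile_by_rank(times, 0.50) -> 128, quantile_by_rank(times, 0.75) -> 134.5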
4.7. An article in Quality Engineering (Vol.4, 1992, pp. 487-495) presents viscosity data from a batch chemical
process. A sample of these data is presented in Table 4E.2. (read down, then across).

13.3 14.9 15.8 16.0


14.5 13.7 13.7 14.9
15.3 15.2 15.1 13.6
15.3 14.5 13.4 15.3
14.3 15.3 14.1 14.3
14.8 15.6 14.8 15.6
15.2 15.8 14.3 16.1
14.5 13.3 14.3 13.9
14.6 14.1 16.4 15.2
14.1 15.4 16.9 14.4
14.3 15.2 14.2 14.0
16.1 15.2 16.9 14.4
13.1 15.9 14.9 13.7
15.5 16.5 15.2 13.8
12.6 14.8 14.4 15.6
14.6 15.1 15.2 14.5
14.3 17.0 14.6 12.8
15.4 14.9 16.4 16.1
15.2 14.8 14.2 16.6
16.8 14.0 15.7 15.6

a. Construct a stem-and-leaf plot for the viscosity data.

12 68
13 3134
13 776978
14 3133101332423404
14 585669589889695
15 3324223422112232
15 568987666
16 140114
16 85996
17 0

b. Construct a frequency distribution and histogram.

Interval Frequency
[12.5,13.0) 2
[13.0,13.5) 4
[13.5,14.0) 6
[14.0,14.5) 16
[14.5,15.0) 15
[15.0,15.5) 16
[15.5,16.0) 9
[16.0,16.5) 6
[16.5,17.0) 5
[17.0,17.5) 1

MTB > Graph > Histogram > Simple


[Histogram of viscosity: roughly symmetric and bell-shaped, centered near 14.9.]

c. Convert the stem-and-leaf plot in part (a) into an ordered stem-and-leaf plot. Use this graph to assist in
locating the median and the upper and lower quartiles of the viscosity data.

MTB > Graph > Stem-and-Leaf

Stem-and-Leaf Display: viscosity


Stem-and-leaf of viscosity N = 80
Leaf Unit = 0.10

2 12 68
6 13 1334
12 13 677789
28 14 0011122333333444
(15) 14 555566688889999
37 15 1122222222333344
21 15 566667889
12 16 011144
6 16 56899
1 17 0

Since n = 80 is even, the median is the average of the (n/2)th and (n/2 + 1)th ranked observations, i.e., the average of the 40th and 41st ranked observations:

Q2 = (14.9 + 14.9)/2 = 14.9 units

The lower quartile has rank (0.25)(80) + 0.5 = 20.5, so it is the average of the 20th and 21st ranked observations:

Q1 = (14.3 + 14.3)/2 = 14.3 units

The upper quartile has rank (0.75)(80) + 0.5 = 60.5, so it is the average of the 60th and 61st ranked observations:

Q3 = (15.5 + 15.6)/2 = 15.55 units
d. What are the ninetieth and tenth percentiles of viscosity?

The 90th percentile has rank (0.90)(80) + 0.5 = 72.5, so it is the average of the 72nd and 73rd ranked observations:

90th percentile = (16.1 + 16.4)/2 = 16.25 units

The 10th percentile has rank (0.10)(80) + 0.5 = 8.5, so it is the average of the 8th and 9th ranked observations; from the ordered stem-and-leaf plot these are both 13.7:

10th percentile = (13.7 + 13.7)/2 = 13.7 units
4.8. Construct and interpret a normal probability plot of the volumes in Exercise 4.1.

Data: 16.05, 16.03, 16.02, 16.04, 16.05, 16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07

MTB > Graph > Probability Plot > Single

[Normal probability plot of volume with 95% CI: Mean = 16.03, StDev = 0.02021, N = 12, AD = 0.297, P-Value = 0.532. The points fall close to the straight line.]

It can be assumed that the data follows a normal distribution.

4.9. Construct and interpret a normal probability plot of the waiting time measurements in Exercise 4.3.

Data: 953, 955, 948, 951, 957, 949, 954, 950, 959

MTB > Graph > Probability Plot > Single


[Normal probability plot of wait time with 95% CI: Mean = 952.9, StDev = 3.723, N = 9, AD = 0.166, P-Value = 0.908. The points fall close to the straight line.]

It can be assumed that the data follows a normal distribution.

4.10. Construct a normal probability plot of the failure time data in Exercise 4.6. Does the assumption that failure
time for this component is well modeled by a normal distribution seem reasonable?

127 124 121 118


125 123 136 131
131 120 140 125
124 119 137 133
129 128 125 141
121 133 124 125
142 137 128 140
151 124 129 131
160 142 130 129
125 123 122 126

MTB > Graph > Probability Plot > Single

[Normal probability plot of failure time with 95% CI: Mean = 130.0, StDev = 8.914, N = 40, AD = 1.259, P-Value < 0.005.]

There are outliers in both tails of the normal probability plot and the points do not form a straight line. This
indicates that the data does not come from a normal distribution.

4.11. Construct a normal probability plot of the Viscosity data in Exercise 4.7. Does the assumption that process
yield is well modeled by a normal distribution seem reasonable?
13.3 14.3 14.9 15.2 15.8 14.2 16.0 14.0
14.5 16.1 13.7 15.2 13.7 16.9 14.9 14.4
15.3 13.1 15.2 15.9 15.1 14.9 13.6 13.7
15.3 15.5 14.5 16.5 13.4 15.2 15.3 13.8
14.3 12.6 15.3 14.8 14.1 14.4 14.3 15.6
14.8 14.6 15.6 15.1 14.8 15.2 15.6 14.5
15.2 14.3 15.8 17.0 14.3 14.6 16.1 12.8
14.5 15.4 13.3 14.9 14.3 16.4 13.9 16.1
14.6 15.2 14.1 14.8 16.4 14.2 15.2 16.6
14.1 16.8 15.4 14.0 16.9 15.7 14.4 15.6

MTB > Graph > Probability Plot > Single

[Normal probability plot of viscosity with 95% CI: Mean = 14.90, StDev = 0.9804, N = 80, AD = 0.249, P-Value = 0.740. The points fall close to the straight line.]

It can be assumed that the data follows a normal distribution.

4.12. Consider the yield data in Table 4E.3. Construct a time-series plot for these data. Interpret the plot.

94.1  87.3  94.1  92.4  84.6  85.4
93.2  84.1  92.1  90.6  83.6  86.6
90.6  90.1  96.4  89.1  85.4  91.7
91.4  95.2  88.2  88.8  89.7  87.5
88.2  86.1  86.4  86.4  87.6  84.2
86.1  94.3  85.0  85.1  85.1  85.1
95.1  93.2  84.9  84.0  89.6  90.5
90.0  86.7  87.3  93.7  90.0  95.6
92.4  83.0  89.6  87.7  90.1  88.3
87.3  95.3  90.3  90.6  94.3  84.1
86.6  94.1  93.1  89.4  97.3  83.7
91.2  97.8  94.6  88.6  96.8  82.9
86.1  93.1  96.3  84.1  94.4  87.3
90.4  86.4  94.7  82.6  96.1  86.4
89.1  87.6  91.1  83.1  98.0  84.5

MTB > Graph > Time Series Plot > Simple

[Time series plot of yield over the 90 observations: the yield fluctuates between roughly 83 and 98 with no obvious trend, shifts, or cycles.]

4.13. Consider the chemical process yield data in Exercise 4.12. Calculate the sample average and standard
deviation.

(Yield data as given in Exercise 4.12.)

a. Calculate the sample average.

x̄ = Σxᵢ/n = (94.1 + ⋯ + 84.5)/90 = 89.4756 units

b. Calculate the sample standard deviation.

s = √[ Σ(xᵢ − x̄)²/(n − 1) ] = √[ ((94.1 − 89.4756)² + ⋯ + (84.5 − 89.4756)²)/89 ] = 4.1578 units

4.14. Consider the chemical process yield data in Exercise 4.12. Construct a stem-and-leaf plot and a histogram.
Which display provides more information about the process?

MTB > Graph > Stem and Leaf

Stem-and-leaf of yield N = 90
Leaf Unit = 0.10

2 82 69
6 83 0167
14 84 01112569
20 85 011144
30 86 1114444667
38 87 33335667
43 88 22368
(6) 89 114667
41 90 0011345666
31 91 1247
27 92 144
24 93 11227
19 94 11133467
11 95 1236
7 96 1348
3 97 38
1 98 0

MTB > Graph > Histogram > Simple


[Histogram of yield: values spread fairly evenly from about 83 to 98 without a single pronounced peak.]

The stem-and-leaf plot provides more information because it shows the individual data values. However,
the histogram provides a more concise visual summary of the data.

4.15. Construct a box plot for the data in Exercise 4.1.

Data: 16.05, 16.03, 16.02, 16.04, 16.05, 16.01, 16.02, 16.02, 16.03, 16.01, 16.00, 16.07

MTB > Graph > Boxplot > Simple

[Box plot of volume: median near 16.025 fluid ounces, with the data ranging from 16.00 to 16.07.]

4.16. Construct a box plot of the data in Exercise 4.2.

Data: 50.001, 50.002, 49.998, 50.006, 50.005, 49.996, 50.003, 50.004.

MTB > Graph > Boxplot > Simple

[Box plot of sales: median near 50.002, with the data ranging from about 49.996 to 50.006.]
4.17. Suppose that two fair dice are tossed and the sum of the dice is observed. Determine the probability
distribution of x , the sum of the dice.

Let x = d1 + d2 represent the sum of two dice. First we determine the possible values for x, and then how likely each value is. For example, x = 3 can occur when d1 = 1 and d2 = 2 or when d1 = 2 and d2 = 1, so there are two scenarios that lead to x = 3.

Similarly, we list all possible scenarios for x = d1 + d2:

Scenario  d1  d2  x        Scenario  d1  d2  x
1 1 1 2 19 4 1 5
2 1 2 3 20 4 2 6
3 1 3 4 21 4 3 7
4 1 4 5 22 4 4 8
5 1 5 6 23 4 5 9
6 1 6 7 24 4 6 10
7 2 1 3 25 5 1 6
8 2 2 4 26 5 2 7
9 2 3 5 27 5 3 8
10 2 4 6 28 5 4 9
11 2 5 7 29 5 5 10
12 2 6 8 30 5 6 11
13 3 1 4 31 6 1 7
14 3 2 5 32 6 2 8
15 3 3 6 33 6 3 9
16 3 4 7 34 6 4 10
17 3 5 8 35 6 5 11
18 3 6 9 36 6 6 12

From the previous table, we can conclude x can assume integer values between 2 and 12. Now, let us
define the likelihood of each scenario by counting how many times each scenario occurs and dividing it by
the total number of possible scenarios (36).

x    Number of Scenarios    P(d1 + d2 = x)


2 1 1/36
3 2 2/36
4 3 3/36
5 4 4/36
6 5 5/36
7 6 6/36
8 5 5/36
9 4 4/36
10 3 3/36
11 2 2/36
12 1 1/36

4.18. Find the mean and standard deviation of x in Exercise 4.17.

x p(x )
2 1/36
3 2/36
4 3/36
5 4/36
6 5/36
7 6/36
8 5/36
9 4/36
10 3/36
11 2/36
12 1/36

Mean: E(x) = Σ x·p(x) = 2(1/36) + ⋯ + 12(1/36) = 7

Variance: V(x) = Σ x²·p(x) − [E(x)]² = (2²(1/36) + ⋯ + 12²(1/36)) − 7² = 5.8333

Standard deviation: √V(x) = √5.8333 = 2.4152
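A brief Python enumeration of the 36 equally likely outcomes (a sketch confirming the distribution, mean, and variance above):

from itertools import product
from math import sqrt

sums = [d1 + d2 for d1, d2 in product(range(1, 7), repeat=2)]
pmf = {x: sums.count(x) / 36 for x in range(2, 13)}
mean = sum(x * p for x, p in pmf.items())                # 7.0
var = sum(x * x * p for x, p in pmf.items()) - mean**2   # 5.8333
print(mean, var, sqrt(var))                              # 7.0 5.8333 2.4152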


4.19. The tensile strength of a metal part is normally distributed with mean 40 lb and standard deviation 5 lb. If
50,000 parts are produced, how many would you expect to fail to meet a minimum specification limit of 35
lb tensile strength? How many would have a tensile strength in excess of 48 lb?

X ~ N(μ = 40, σ = 5), n = 50,000

Probability that a part fails to meet the 35 lb specification (from the standard normal table):

P(X < 35) = P(Z < (35 − 40)/5) = P(Z < −1) = 0.15866

Expected number of parts that fail to meet 35 lb:

n × P(X < 35) = 50,000 × 0.15866 ≈ 7,933 parts

Probability that a part's tensile strength exceeds 48 lb:

P(X > 48) = P(Z > (48 − 40)/5) = P(Z > 1.6) = 1 − P(Z < 1.6) = 1 − 0.94520 = 0.0548

Expected number of parts with tensile strength in excess of 48 lb:

n × P(X > 48) = 50,000 × 0.0548 = 2,740 parts
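The same normal-tail calculations in Python (a sketch; norm.sf is the upper-tail probability 1 − CDF):

from scipy.stats import norm

n = 50_000
print(n * norm.cdf(35, loc=40, scale=5))  # about 7,933 parts below 35 lb
print(n * norm.sf(48, loc=40, scale=5))   # about 2,740 parts above 48 lb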

4.20. The output voltage of a power supply is normally distributed with mean 5 V and standard deviation 0.02 V.
If the lower and upper specifications for voltage are 4.95 V and 5.05 V, respectively, what is the probability
that a power supply selected at random will conform to the specifications on voltage?

X ~ N(μ = 5, σ = 0.02)

Probability that a power supply conforms to the specifications (from the standard normal table):

P(LSL < X < USL) = P(4.95 < X < 5.05) = P(X < 5.05) − P(X < 4.95)
= P(Z < (5.05 − 5)/0.02) − P(Z < (4.95 − 5)/0.02) = P(Z < 2.5) − P(Z < −2.5)
= 0.99379 − 0.00621 = 0.98758
4.21. Continuation of Exercise 4.20. Reconsider the power supply manufacturing process in Exercise 4.20.
Suppose we wanted to improve the process. Can shifting the mean reduce the number of nonconforming
units produced? How much would the process variability need to be reduced in order to have all but one out
of 1,000 units conform to the specifications?

Shifting the mean will not reduce the number of nonconforming units, since the mean is already centered between the specification limits.

Reduce the process variability: we require

P(LSL < X < USL) = P(4.95 < X < 5.05) = 1 − 0.001 = 0.999

P(Z < 0.05/σ) − P(Z < −0.05/σ) = 0.999

Each tail must then contain probability 0.0005, and z_0.0005 = 3.29. Therefore

0.05/σ = 3.29  ⇒  σ = 0.05/3.29 = 0.01519

Check: P(Z < 0.05/0.01519) − P(Z < −0.05/0.01519) = P(Z < 3.29) − P(Z < −3.29) = 0.9995 − 0.0005 = 0.999

Reducing σ to 0.01519 V is needed to have all but 1 out of 1,000 units conform to specifications.
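In Python, the required σ follows directly from the inverse normal CDF (a sketch; norm.ppf is scipy's quantile function):

from scipy.stats import norm

z = norm.ppf(1 - 0.0005)  # z_0.0005 = 3.2905
sigma = 0.05 / z
print(sigma)              # about 0.0152 V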
4.22. The life of an automotive battery is normally distributed with mean 900 days and standard deviation 35
days. What fraction of these batteries would be expected to survive beyond 1,000 days?

Let X be the life of an automotive battery, X ~ N(μ = 900, σ = 35).

P(X > 1000) = 1 − P(X ≤ 1000) = 1 − P(Z ≤ (1000 − 900)/35) = 1 − P(Z ≤ 2.86) = 1 − 0.99788 = 0.00212

About 0.212% of the batteries are expected to survive beyond 1,000 days.
4.23. A light bulb has a normally distributed light output with mean 5,000 end foot-candles and standard
deviation of 50 end foot-candles. Find a lower specification limit such that only 0.5% of the bulbs will not
exceed this limit.

X ~ N(μ = 5000, σ = 50)

We want the lower specification limit (LSL) such that

P(X < LSL) = P(Z < (LSL − 5000)/50) = 0.005

From the standard normal table, z = (LSL − 5000)/50 = −2.5758, so

LSL = 5000 + 50z = 5000 − 2.5758 × 50 = 4,871.21 end foot-candles
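Equivalently in Python (a sketch using scipy's inverse normal CDF):

from scipy.stats import norm

lsl = norm.ppf(0.005, loc=5000, scale=50)
print(lsl)  # about 4,871.2 end foot-candles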
4.24. The specifications on an electronic component in a target-acquisition system are that its life must be
between 5,000 and 10,000 h. The life is normally distributed with mean 7,500 h. The manufacturer realized
a price of $10 per unit produced; however, defective units must be replaced at a cost of $5 to the
manufacturer. Two different manufacturing processes can be used, both of which have the same mean life.
However, the standard deviation of life for process 1 is 1000h, whereas for process 2 it is only 500h.
Production costs for process 2 are twice those for process 1. What value of production costs will determine
the selection between processes 1 and 2?

LSL = 5,000 h, USL = 10,000 h, μ = 7,500 h, price/part = $10, defective cost/part = $5

Process 1: X ~ N(7500, 1000²), production cost/unit = c1

Proportion of components within spec:

P(LSL < X < USL) = P(Z < (10000 − 7500)/1000) − P(Z < (5000 − 7500)/1000)
= P(Z < 2.5) − P(Z < −2.5) = 0.99379 − 0.00621 = 0.98758

Therefore, in process 1, 98.758% of units produced are within spec and 1.242% are defective. Profit per 100 units from process 1:

98.758 × 10 − 1.242 × 5 − 100 c1 = 981.37 − 100 c1

Process 2: X ~ N(7500, 500²), production cost/unit = 2 c1

Proportion of components within spec:

P(LSL < X < USL) = P(Z < (10000 − 7500)/500) − P(Z < (5000 − 7500)/500)
= P(Z < 5.0) − P(Z < −5.0) = 0.9999997 − 0.0000003 = 0.9999994

Therefore, in process 2, 99.99994% of units produced are within spec and 0.00006% are defective. Profit per 100 units from process 2:

99.99994 × 10 − 0.00006 × 5 − 100 (2 c1) = 999.9991 − 200 c1

Process 2 is preferred when its profit exceeds that of process 1:

999.9991 − 200 c1 > 981.37 − 100 c1  ⇒  18.6291 > 100 c1  ⇒  c1 < 0.186291

Therefore, process 2 is preferred when the production cost per unit for process 1, c1, is less than $0.1863.
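A short Python sketch of the comparison (the function name is ours; profit is computed per 100 units, as above):

from scipy.stats import norm

def profit_per_100(sigma, unit_cost):
    # Fraction of units within spec when life ~ N(7500, sigma^2)
    good = norm.cdf(10000, 7500, sigma) - norm.cdf(5000, 7500, sigma)
    # $10 per conforming unit, -$5 per defective unit, minus production cost
    return 100 * (good * 10 - (1 - good) * 5 - unit_cost)

c1 = 0.186291  # breakeven production cost per unit for process 1
print(profit_per_100(1000, c1), profit_per_100(500, 2 * c1))  # nearly equal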

4.25. The tensile strength of fiber used in manufacturing cloth is of interest to the purchaser. Previous
experience indicates that the standard deviation of tensile strength is 2 psi. A random sample of eight fiber
specimens is selected, and the average tensile strength is found to be 127 psi.

x̄ = 127, σ = 2, n = 8

a. Build a 95% lower confidence interval on the mean tensile strength.

μ ≥ x̄ − z_{0.05} σ/√n = 127 − 1.6449∗2/√8 = 125.8369 psi

μ ≥ 125.8369 psi
b. What can you conclude?

For 95% of the CIs constructed, the mean tensile strength is expected to be equal to or exceed 125.8369 psi.

4.26. Payment times of 100 randomly selected customers this month had an average of 35 days. The standard
deviation from this group was 2 days.

x̄ = 35, σ = 2, n = 100

a. Build a 90% two-sided confidence interval on the mean payment time.

x̄ − z_{α/2} σ/√n ≤ μ ≤ x̄ + z_{α/2} σ/√n

35 − 1.645∗2/√100 ≤ μ ≤ 35 + 1.645∗2/√100

34.671 ≤ μ ≤ 35.329

b. Build a 99% two-sided confidence interval on the mean payment time.

x̄ − z_{α/2} σ/√n ≤ μ ≤ x̄ + z_{α/2} σ/√n

35 − 2.576∗2/√100 ≤ μ ≤ 35 + 2.576∗2/√100

34.4848 ≤ μ ≤ 35.5152
c. Is it possible that the average time is 30 days?

It is highly unlikely that the average time is 30 days. In 99% of confidence intervals constructed, the
average payment time will be between 34.5 and 35.5 days.
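Both intervals in parts (a) and (b) follow from the same z-based formula; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

xbar, sigma, n = 35, 2, 100
for conf in (0.90, 0.99):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    half = z * sigma / math.sqrt(n)
    print(conf, xbar - half, xbar + half)
# 0.90 -> (34.671, 35.329); 0.99 -> (34.485, 35.515)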

4.27. The service life of a battery used in a cardiac pacemaker is assumed to be normally distributed. A random
sample of 10 batteries is subjected to an accelerated life test by running them continuously at an elevated
temperature until failure, and the following lifetimes (in hours) are obtained: 25.5, 26.1, 26.8, 23.2, 24.2,
28.4, 25.0, 27.8, 27.3, and 25.7. The manufacturer wants to be certain that the mean battery life exceeds 25
hours in accelerated life testing.

x̄ = 26, s = 1.6248, n = 10


a. Construct a 90% two-sided confidence interval on mean life in the accelerated test.

x̄ − t_{α/2,n−1} s/√n ≤ μ ≤ x̄ + t_{α/2,n−1} s/√n

26 − 1.8331∗1.6248/√10 ≤ μ ≤ 26 + 1.8331∗1.6248/√10

25.0581 ≤ μ ≤ 26.9419
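The t-based interval can be reproduced from the raw lifetimes; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

life = [25.5, 26.1, 26.8, 23.2, 24.2, 28.4, 25.0, 27.8, 27.3, 25.7]
n = len(life)
xbar = sum(life) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in life) / (n - 1))
t = stats.t.ppf(0.95, n - 1)        # upper 5% point for a 90% two-sided interval
half = t * s / math.sqrt(n)
print(xbar - half, xbar + half)     # ~(25.06, 26.94)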
b. Construct a normal probability plot of the battery life data. What conclusions can you draw?

MTB > Graph > Probability Plot > Single


[Figure: Normal probability plot of service life (normal, 95% CI) — Mean 26, StDev 1.625, N 10, AD 0.114, P-Value 0.986]

The normality assumption is reasonable. Also, since the entire confidence interval from part (a) lies above 25 hours, the manufacturer can be confident that the mean battery life exceeds 25 hours in accelerated testing.

NOTE: Conclusions on the confidence interval are highly dependent on the number of significant digits used
throughout the calculations.

4.28. A local neighborhood has just installed speed bumps to slow traffic. Two weeks after the installation the
city recorded the following speeds 500 feet after the last speed bump: 29, 29, 31, 42, 30, 24, 30, 27, 33, 44,
32, 30, 24, 35, 34, 30, 23, 35, 27.

x̄ = 30.85, s = 5.3830, n = 20


a. Find a 99% one-sided confidence interval on mean speed and assess whether or not the average speed is 25
or less. Assume that the data is normally distributed.

μ ≤ x̄ + t_{α,n−1} s/√n

μ ≤ 30.85 + t_{0.01,19}∗5.3830/√20

μ ≤ 30.85 + 2.539∗5.3830/√20

μ ≤ 33.91

The hypothesized average speed of 25 mph is contained in the 99% CI; therefore, there is no evidence that the mean speed is 25 mph or less.

b. Does the normality assumption seem reasonable for these data?

MTB > Graph > Probability Plot > Single


[Figure: Normal probability plot of speed (normal, 95% CI) — Mean 30.85, StDev 5.383, N 20, AD 0.570, P-Value 0.121]

There is one value outside the 95% confidence interval; however, it is reasonable to assume the data comes
from a normal population.

4.29. A company has just purchased a new billboard near the freeway. Sales for the past 10 days have been 483,
532, 444, 510, 467, 461, 450, 444, 540, and 499. Build a 95% two-sided confidence interval on sales. Is
there any evidence that the billboard has increased sales from its previous average of 475 per day?

x̄ = 483, s = 35.7553, n = 10

x̄ − t_{α/2,n−1} s/√n ≤ μ ≤ x̄ + t_{α/2,n−1} s/√n

483 − 2.2622∗35.7553/√10 ≤ μ ≤ 483 + 2.2622∗35.7553/√10

457.4217 ≤ μ ≤ 508.5783
Since 475 is within the confidence interval, there is no evidence that the billboard has increased sales from
its previous average of 475 per day.

NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.

4.30. A machine is used to fill containers with a liquid product. Fill volume can be assumed to be normally
distributed. A random sample of 10 containers is selected, and the net contents (oz.) are as follows: 12.03,
12.01, 12.04, 12.02, 12.05, 11.98, 11.96, 12.02, 12.05, and 11.99. Suppose that the manufacturer wants to
be sure that the mean net contents exceed 12 oz. What conclusions can be drawn from the data using a 95%
two-sided confidence interval on the mean fill volume?

x̄ = 12.015, s = 0.0303, n = 10

x̄ − t_{α/2,n−1} s/√n ≤ μ ≤ x̄ + t_{α/2,n−1} s/√n

12.015 − 2.262∗0.0303/√10 ≤ μ ≤ 12.015 + 2.262∗0.0303/√10

11.9933 ≤ μ ≤ 12.0367
12 is contained in the 95% CI, so we do not have evidence at the 5% significance level that the mean fill
volume exceeds 12 oz.

4.31. A company is evaluating the quality of aluminum rods received in a recent shipment. Diameters of
aluminum alloy rods produced on an extrusion machine are known to have a standard deviation of 0.0001
in. A random sample of 25 rods has an average diameter of 0.5046 in. Test whether or not the mean rod
diameter is 0.5025 using a two-sided confidence interval.

x̄ = 0.5046, σ = 0.0001, n = 25

Using a 95% CI:

x̄ − z_{0.025} σ/√n ≤ μ ≤ x̄ + z_{0.025} σ/√n

0.5046 − 1.96∗0.0001/√25 ≤ μ ≤ 0.5046 + 1.96∗0.0001/√25

0.504561 ≤ μ ≤ 0.504639
Since 0.5025 is not within the confidence interval, we can conclude that the mean diameter is not 0.5025 at
the 5% significance level.

4.32. The output voltage of a power supply is assumed to be normally distributed. Sixteen observations taken at
random on voltage are as follows: 10.35, 9.30, 10.00, 9.96, 11.65, 12.00, 11.25, 9.58, 11.54, 9.95, 10.28,
8.37, 10.44, 9.25, 9.38, and 10.85.

x̄ = 10.259, s = 0.9990, n = 16

a. Test the hypothesis that the mean voltage equals 12 V using a 95% two-sided confidence interval.

x̄ − t_{α/2,n−1} s/√n ≤ μ ≤ x̄ + t_{α/2,n−1} s/√n

10.259 − 2.131∗0.9990/√16 ≤ μ ≤ 10.259 + 2.131∗0.9990/√16

9.7268 ≤ μ ≤ 10.7912
Since 12 is not in the 95% CI, we conclude that the mean voltage is not equal to 12 V.

b. Test the hypothesis that the variance equals 1 V using a 95% two-sided confidence interval.

(n − 1)s²/χ²_{α/2,n−1} ≤ σ² ≤ (n − 1)s²/χ²_{1−α/2,n−1}

15∗(0.999)²/χ²_{0.025,15} ≤ σ² ≤ 15∗(0.999)²/χ²_{0.975,15}

15∗(0.999)²/27.49 ≤ σ² ≤ 15∗(0.999)²/6.27

0.5446 ≤ σ² ≤ 2.3876
Since 1 is contained in the CI, we cannot conclude that the variance is not equal to 1.
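The chi-square interval for σ² can be checked directly; a minimal sketch in Python (assuming SciPy is available):

from scipy import stats

n, s2 = 16, 0.999 ** 2
lower = (n - 1) * s2 / stats.chi2.ppf(0.975, n - 1)   # ~0.545
upper = (n - 1) * s2 / stats.chi2.ppf(0.025, n - 1)   # ~2.39
print(lower, upper)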

4.33. Last month, a large national bank’s average payment time was 33 days with a standard deviation of 4 days.
This month, the average payment time was 33.5 days with a standard deviation of 4 days. They had 1,000
customers both months.

x̄_current = 33.5, σ_current = 4, n_current = 1000; x̄_previous = 33, σ_previous = 4, n_previous = 1000

a. Build a 95% confidence interval on the difference between this month’s and last month’s payment times.
Is there evidence that payment times are increasing?

Using a one-sided confidence interval:

μ_current − μ_previous ≥ (x̄_current − x̄_previous) − z_{0.05} √(σ²_current/n_current + σ²_previous/n_previous)

μ_current − μ_previous ≥ (33.5 − 33) − 1.6449 √(4²/1000 + 4²/1000) = 0.2058

Since the lower limit for the difference is positive, there is evidence to state that the payment times are increasing.

Using a two-sided confidence interval:

(x̄_current − x̄_previous) − z_{0.025} √(σ²_current/n_current + σ²_previous/n_previous) ≤ μ_current − μ_previous ≤ (x̄_current − x̄_previous) + z_{0.025} √(σ²_current/n_current + σ²_previous/n_previous)

(33.5 − 33) − 1.96 √(4²/1000 + 4²/1000) ≤ μ_current − μ_previous ≤ (33.5 − 33) + 1.96 √(4²/1000 + 4²/1000)

0.149 ≤ μ_current − μ_previous ≤ 0.851

We still arrive at the same conclusion that payment times have increased, given that the entire confidence interval is greater than zero.

4.34. Two machines are used for filling glass bottles with a soft-drink beverage. The filling processes have
known standard deviations σ 1=0.010 liter and σ 2=0.015 liter, respectively. A random sample of
n1=25 bottles from machine 1 and n2 =20 bottles from machine 2 results in average net contents of 2.04
liters and 2.07 liters from machine 1 and 2, respectively. Test the hypothesis that both machines fill to the
same net contents using a 95% two-sided confidence interval on the difference of fill volume. What are
your conclusions?

x̄₁ = 2.04, σ₁ = 0.010, n₁ = 25; x̄₂ = 2.07, σ₂ = 0.015, n₂ = 20

(x̄₁ − x̄₂) − z_{α/2} √(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ (x̄₁ − x̄₂) + z_{α/2} √(σ₁²/n₁ + σ₂²/n₂)

(2.04 − 2.07) − 1.96 √(0.010²/25 + 0.015²/20) ≤ μ₁ − μ₂ ≤ (2.04 − 2.07) + 1.96 √(0.010²/25 + 0.015²/20)

−0.0377 ≤ μ₁ − μ₂ ≤ −0.0223

Since 0 is not included in the 95% confidence interval, we can conclude that there is a difference of fill
volume between the two filling processes.
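The two-sample z interval with known variances is quick to verify; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

x1, sigma1, n1 = 2.04, 0.010, 25
x2, sigma2, n2 = 2.07, 0.015, 20
se = math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)
z = stats.norm.ppf(0.975)
d = x1 - x2
print(d - z * se, d + z * se)   # ~(-0.0377, -0.0223)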

4.35. A supplier received results on the hardness of metal from two different hardening process (1) salt-water
quenching and (2) oil quenching. The results are shown in Table 4E.4.

Salt Quench   Oil Quench
145           152
150           150
153           147
148           155
141           140
152           146
146           158
154           152
139           151
148           143

x̄_SQ = 147.6, s²_SQ = 24.7111, n_SQ = 10; x̄_OQ = 149.4, s²_OQ = 29.8222, n_OQ = 10

a. Construct a 95% confidence interval on the difference in mean hardness.

Using a two-sided confidence interval assuming unequal variances, the degrees of freedom are:

ν = (s²_SQ/n_SQ + s²_OQ/n_OQ)² / [ (s²_SQ/n_SQ)²/(n_SQ − 1) + (s²_OQ/n_OQ)²/(n_OQ − 1) ]

= (24.7111/10 + 29.8222/10)² / [ (24.7111/10)²/(10 − 1) + (29.8222/10)²/(10 − 1) ] = 17.3992 ⇒ 17

(x̄_SQ − x̄_OQ) − t_{0.025,17} √(s²_SQ/n_SQ + s²_OQ/n_OQ) ≤ μ_SQ − μ_OQ ≤ (x̄_SQ − x̄_OQ) + t_{0.025,17} √(s²_SQ/n_SQ + s²_OQ/n_OQ)

(147.6 − 149.4) − 2.1098 √(24.7111/10 + 29.8222/10) ≤ μ_SQ − μ_OQ ≤ (147.6 − 149.4) + 2.1098 √(24.7111/10 + 29.8222/10)

−6.7269 ≤ μ_SQ − μ_OQ ≤ 3.1269
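The Welch degrees of freedom and interval can be reproduced from the raw data; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

salt = [145, 150, 153, 148, 141, 152, 146, 154, 139, 148]
oil = [152, 150, 147, 155, 140, 146, 158, 152, 151, 143]
m1, m2 = sum(salt) / 10, sum(oil) / 10
v1 = sum((x - m1) ** 2 for x in salt) / 9     # ~24.71
v2 = sum((x - m2) ** 2 for x in oil) / 9      # ~29.82
se = math.sqrt(v1 / 10 + v2 / 10)
df = (v1 / 10 + v2 / 10) ** 2 / ((v1 / 10) ** 2 / 9 + (v2 / 10) ** 2 / 9)
t = stats.t.ppf(0.975, df)     # using the fractional df (~17.4) rather than 17
d = m1 - m2
print(d - t * se, d + t * se)  # ~(-6.72, 3.12)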

b. Construct a 95% confidence interval on the ratio of variances.


(s²_SQ/s²_OQ) F_{1−α/2, n_OQ−1, n_SQ−1} ≤ σ²_SQ/σ²_OQ ≤ (s²_SQ/s²_OQ) F_{α/2, n_OQ−1, n_SQ−1}

(24.7111/29.8222)∗0.2484 ≤ σ²_SQ/σ²_OQ ≤ (24.7111/29.8222)∗4.0260

0.2058 ≤ σ²_SQ/σ²_OQ ≤ 3.3360
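The variance-ratio interval uses the F quantiles; a minimal sketch in Python (assuming SciPy is available; note that scipy's ppf takes lower-tail probabilities, while the F_{α/2} notation above denotes an upper-tail point):

from scipy import stats

s2_sq, s2_oq, n = 24.7111, 29.8222, 10
ratio = s2_sq / s2_oq
lower = ratio * stats.f.ppf(0.025, n - 1, n - 1)   # ~0.206
upper = ratio * stats.f.ppf(0.975, n - 1, n - 1)   # ~3.34
print(lower, upper)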

c. Does this assumption of normality seem appropriate for this data?

MTB > Graph > Probability Plot > Single

[Figure: Normal probability plots of salt quench and oil quench (normal, 95% CI) — Salt Quench: Mean 147.6, StDev 4.971, N 10, AD 0.218, P-Value 0.779; Oil Quench: Mean 149.4, StDev 5.461, N 10, AD 0.169, P-Value 0.906]
The normality assumption for both samples looks reasonable.

d. What can you conclude?

For the confidence interval based on the mean difference, zero is included within the interval. For the
confidence interval based on the ratio of the variances, one is included within the interval. Hence, there is
no significant difference in the population means or variances.

4.36. A random sample of 200 printed circuit boards contains 18 defective or nonconforming units. Estimate the
process fraction nonconforming. Using a 90% two-sided confidence interval, evaluate whether or not it is
possible that the true fraction nonconforming in this process is 10%.

Process fraction nonconforming estimate: p̂ = 18/200 = 0.09

90% two-sided confidence interval:

p̂ − z_{0.05} √(p̂(1 − p̂)/n) ≤ p ≤ p̂ + z_{0.05} √(p̂(1 − p̂)/n)

0.09 − 1.645 √(0.09∗0.91/200) ≤ p ≤ 0.09 + 1.645 √(0.09∗0.91/200)

0.0567 ≤ p ≤ 0.1233
For 90% of confidence intervals constructed, the true proportion of defective units is between 5.7% and
12.3%. There is not enough evidence to suggest that the true fraction nonconforming is not 10%. So it is
possible that the true fraction nonconforming in the process is 10%.
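The normal-approximation interval for a proportion is quick to verify; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

phat, n = 18 / 200, 200
z = stats.norm.ppf(0.95)                      # 90% two-sided
half = z * math.sqrt(phat * (1 - phat) / n)
print(phat - half, phat + half)               # ~(0.0567, 0.1233)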

4.37. A random sample of 500 connecting rod pins contains 65 nonconforming units. Estimate the process
fraction nonconforming. Construct a 90% upper confidence interval on the true process fraction
nonconforming. Is it possible that the true fraction nonconforming is 10%?

Process fraction nonconforming estimate: p̂ = 65/500 = 0.13

90% upper confidence interval:

p ≤ p̂ + z_{0.1} √(p̂(1 − p̂)/n)

p ≤ 0.13 + 1.2816 √(0.13∗(1 − 0.13)/500)

p ≤ 0.1493

For 90% of the CIs constructed, the true defective fraction is less than 14.93%. Since 0.10 lies below this upper bound, it is possible that the true fraction nonconforming is 10%.

4.38. During shipment testing, product was flown from Indianapolis to Seattle and back again to simulate 4 takeoffs and landings, which can cause cans to open due to pressure changes. One hundred prototype units were shipped and 15 opened. Using a 90% two-sided confidence interval, determine if it is possible that the average failure rate is 11%.

p̂ = 15/100 = 0.15, α = 0.10, z_{0.05} = 1.645

p̂ − z_{α/2} √(p̂(1 − p̂)/n) ≤ p ≤ p̂ + z_{α/2} √(p̂(1 − p̂)/n)

0.15 − 1.645 √(0.15∗0.85/100) ≤ p ≤ 0.15 + 1.645 √(0.15∗0.85/100)

0.0913 ≤ p ≤ 0.2087
Since 0.11 (11%) is within the 90% CI, it is possible that the average failure rate is 11%; we cannot conclude that the average failure rate differs from 11%.

4.39. Continuation of Exercise 4.38. The company has made improvements and has repeated the experiment. In
this iteration, 12 opened. Using a 95% two-sided confidence interval on the difference in proportions, is it
possible to cite improvement?

p̂₁ = 15/100 = 0.15, n₁ = 100; p̂₂ = 12/100 = 0.12, n₂ = 100; α = 0.05, z_{0.025} = 1.96

(p̂₁ − p̂₂) − z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂) ≤ p₁ − p₂ ≤ (p̂₁ − p̂₂) + z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂)

(0.15 − 0.12) − 1.96 √(0.15∗0.85/100 + 0.12∗0.88/100) ≤ p₁ − p₂ ≤ (0.15 − 0.12) + 1.96 √(0.15∗0.85/100 + 0.12∗0.88/100)

−0.0646 ≤ p₁ − p₂ ≤ 0.1246
Since 0 is contained in the confidence interval, we cannot conclude that there was an improvement in the
failure rate.
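The interval on the difference of two proportions follows the same pattern; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

p1, n1 = 15 / 100, 100
p2, n2 = 12 / 100, 100
z = stats.norm.ppf(0.975)
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
d = p1 - p2
print(d - z * se, d + z * se)   # ~(-0.0646, 0.1246)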

4.40. Of 1,000 customers, 200 had payments greater than 30 days last month. This month, there are 1,100
customers, of which 230 had payments greater than 30 days.

a. Estimate the fraction late for last month and this month.

Let p1 represent the fraction late last month and p2 represent the fraction late this month.

p̂₁ = 200/1000 = 0.20, p̂₂ = 230/1100 = 0.2091
b. Construct a 90% confidence interval on the difference in the percentage of late payments.

(p̂₁ − p̂₂) − z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂) ≤ p₁ − p₂ ≤ (p̂₁ − p̂₂) + z_{α/2} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂)

(0.20 − 0.2091) − 1.645 √(0.20∗0.80/1000 + 0.2091∗0.7909/1100) ≤ p₁ − p₂ ≤ (0.20 − 0.2091) + 1.645 √(0.20∗0.80/1000 + 0.2091∗0.7909/1100)

−0.0381 ≤ p₁ − p₂ ≤ 0.0199

c. What can you conclude?

Since zero is within the 90% CI, there is no statistical evidence that there is a difference in the percentage
of late payments between the two months.

4.41. A new purification unit is installed in a chemical process. Before and after installation data was collected
regarding the percentage of impurity:

Before (1): Sample mean = 9.85, Sample variance = 6.79, Number of samples = 10
After (2): Sample mean = 8.08, Sample variance = 6.18, Number of samples = 8

a. Can you conclude that the two variances are equal using a two-sided 95% confidence interval?

(s₁²/s₂²) F_{1−α/2, n₂−1, n₁−1} ≤ σ₁²/σ₂² ≤ (s₁²/s₂²) F_{α/2, n₂−1, n₁−1}

(6.79/6.18) F_{0.975,7,9} ≤ σ₁²/σ₂² ≤ (6.79/6.18) F_{0.025,7,9}

(6.79/6.18)∗0.2073 ≤ σ₁²/σ₂² ≤ (6.79/6.18)∗4.1970

0.2278 ≤ σ₁²/σ₂² ≤ 4.6113

Since one is included in the interval, we cannot conclude that the variances differ.

b. Can you conclude that the new purification device has reduced the mean percentage of impurity using a
two-sided 95% confidence interval?

Using a two-sided confidence interval assuming unequal variances, the degrees of freedom are:

ν = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ]

= (6.79/10 + 6.18/8)² / [ (6.79/10)²/(10 − 1) + (6.18/8)²/(8 − 1) ] = 15.4373 ⇒ 15

(x̄₁ − x̄₂) − t_{0.025,15} √(s₁²/n₁ + s₂²/n₂) ≤ μ₁ − μ₂ ≤ (x̄₁ − x̄₂) + t_{0.025,15} √(s₁²/n₁ + s₂²/n₂)

(9.85 − 8.08) − 2.1315 √(6.79/10 + 6.18/8) ≤ μ₁ − μ₂ ≤ (9.85 − 8.08) + 2.1315 √(6.79/10 + 6.18/8)

−0.7980 ≤ μ₁ − μ₂ ≤ 4.3380

Since zero is within the interval, we cannot conclude that the new device has reduced the mean percentage
of impurity.

4.42. Two different types of glass bottles are suitable for use by a soft-drink beverage bottler. The internal
pressure strength (psi) of the bottle is an important quality characteristic. It is known that σ 1=σ 2=¿3.0
psi. From a random sample of n1=n2=16 bottles, the mean pressure strengths are observed to be
x 1=175.8psi and x 2=181.3 psi. The company will not use bottle design 2 unless its pressure strength
exceeds that of bottle design 1 by at least 5 psi. Based on the sample data, should they use bottle design 2 if
they want no larger than 5% chance of excluding bottle 2 if it meets this target?

x̄₁ = 175.8, σ₁ = 3.0, n₁ = 16; x̄₂ = 181.3, σ₂ = 3.0, n₂ = 16

μ₂ − μ₁ ≥ (x̄₂ − x̄₁) − z_α √(σ₁²/n₁ + σ₂²/n₂)

μ₂ − μ₁ ≥ (181.3 − 175.8) − 1.645 √(9/16 + 9/16)

μ₂ − μ₁ ≥ 3.7553

Since the 95% CI contains 5, there is no evidence that the pressure strength of bottle design 2 exceeds
design 1 by 5 psi.

4.43. The diameter of a metal rod is measured by 12 inspectors, each using both a micrometer caliper and a
vernier caliper. The results are shown in Table 4E.5. Is there a difference between the mean measurements
produced by the two types of caliper? Use alpha = 0.01.

Inspector   Micrometer Caliper   Vernier Caliper   Difference
1           0.150                0.151             −0.001
2           0.151                0.150             0.001
3           0.151                0.151             0.000
4           0.152                0.150             0.002
5           0.151                0.151             0.000
6           0.150                0.151             −0.001
7           0.151                0.153             −0.002
8           0.153                0.155             −0.002
9           0.152                0.154             −0.002
10          0.151                0.151             0.000
11          0.151                0.150             0.001
12          0.151                0.152             −0.001

Since the same inspector measures the rod with two calipers, a confidence interval for paired data should be
used.

d̄ = (1/n) Σ dᵢ = (−0.001 + … + (−0.001))/12 = −0.005/12 = −0.000416667

s_d = √[ (1/(n − 1)) Σ (dᵢ − d̄)² ] = √[ (1/11)((−0.001 − (−0.000416667))² + … + (−0.001 − (−0.000416667))²) ] = 0.00131

d̄ − t_{α/2,n−1} s_d/√n ≤ μ_d ≤ d̄ + t_{α/2,n−1} s_d/√n

−0.000416667 − 3.106∗0.00131/√12 ≤ μ_d ≤ −0.000416667 + 3.106∗0.00131/√12

−0.00159 ≤ μ_d ≤ 0.000758

Since zero is within the confidence interval, there is no evidence to suggest the mean measurements
produced by the two types of calipers are different.
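The paired interval can be reproduced from the two columns of measurements; a minimal sketch in Python (assuming SciPy is available):

import math
from scipy import stats

micrometer = [0.150, 0.151, 0.151, 0.152, 0.151, 0.150,
              0.151, 0.153, 0.152, 0.151, 0.151, 0.151]
vernier = [0.151, 0.150, 0.151, 0.150, 0.151, 0.151,
           0.153, 0.155, 0.154, 0.151, 0.150, 0.152]
d = [m - v for m, v in zip(micrometer, vernier)]
n = len(d)
dbar = sum(d) / n
sd = math.sqrt(sum((x - dbar) ** 2 for x in d) / (n - 1))
t = stats.t.ppf(0.995, n - 1)        # alpha = 0.01, two-sided
half = t * sd / math.sqrt(n)
print(dbar - half, dbar + half)      # ~(-0.00159, 0.00076)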

4.44. An experiment was conducted to investigate the filling capability of packaging equipment at a winery in
Newberg, Oregon. Twenty bottles of Pinot Gris were randomly selected and the fill volume (in mL)
measured. Assume that fill volume has a normal distribution. The data are as follows: 753, 751, 752, 753, 753, 753, 752, 753, 754, 754, 752, 751, 752, 750, 753, 755, 753, 756, 751, and 750.

x̄ = 752.55, s = 1.5381, n = 20


a. Do the data support the claim that the standard deviation of fill volume is less than 1 ml using a 95% two-
sided confidence interval?

(n − 1)s²/χ²_{α/2,n−1} ≤ σ² ≤ (n − 1)s²/χ²_{1−α/2,n−1}

19∗(1.5381)²/χ²_{0.025,19} ≤ σ² ≤ 19∗(1.5381)²/χ²_{0.975,19}

19∗(1.5381)²/32.85 ≤ σ² ≤ 19∗(1.5381)²/8.91

1.3683 ≤ σ² ≤ 5.0448

1.1697 ≤ σ ≤ 2.2461
The entire confidence interval for σ lies above 1 ml, so the data do not support the claim that the standard deviation of fill volume is less than 1 ml; the evidence indicates it exceeds 1 ml.

b. Does it seem reasonable to assume that fill volume has a normal distribution?

MTB > Graph > Probability Plot > Single

[Figure: Normal probability plot of volume (normal, 95% CI) — Mean 752.5, StDev 1.538, N 20, AD 0.511, P-Value 0.172]

The data appears to be normally distributed.


4.45. Rehab, Inc. is evaluating patient success results for its Northbrook and Southbrook locations. Each
successfully treated 10 patients within the last year following elbow surgery. Total recovery times and %
range of motion achieved are listed in Table 4E.6.

                Northbrook (1)                     Southbrook (2)
Recovery Time   % ROM Achieved     Recovery Time   % ROM Achieved
148.81          0.69               135.25          0.98
188.72          0.89               174.99          0.47
186.77          0.65               144.15          0.85
152.72          0.73               161.81          0.71
197.8           0.79               151.35          0.94
162.78          0.81               149.69          0.56
192.18          0.64               136.17          0.57
200.17          0.88               146.25          0.2
181.32          0.67               162.88          0.84
193.03          0.74               183.95          0.62

x̄₁ = 180.43, s₁² = 353.0122, n₁ = 10; x̄₂ = 154.649, s₂² = 258.3751, n₂ = 10

a. Build a 95% two-sided confidence interval for the differences of the average of the recovery times. Is there
evidence to support that the facilities are different?

ν = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ]

= (353.0122/10 + 258.3751/10)² / [ (353.0122/10)²/(10 − 1) + (258.3751/10)²/(10 − 1) ] = 17.5788 ⇒ 17

(x̄₁ − x̄₂) − t_{0.025,17} √(s₁²/n₁ + s₂²/n₂) ≤ μ₁ − μ₂ ≤ (x̄₁ − x̄₂) + t_{0.025,17} √(s₁²/n₁ + s₂²/n₂)

(180.43 − 154.649) − 2.1098 √(353.0122/10 + 258.3751/10) ≤ μ₁ − μ₂ ≤ (180.43 − 154.649) + 2.1098 √(353.0122/10 + 258.3751/10)

9.2841 ≤ μ₁ − μ₂ ≤ 42.2779

The entire confidence interval is positive; thus, there is evidence that recovery times are longer at Northbrook.

b. Build a 95% two-sided confidence interval for the ratio of the variance of recovery times. Is there evidence
to support that the facilities are different?

(s₁²/s₂²) F_{1−α/2, n₂−1, n₁−1} ≤ σ₁²/σ₂² ≤ (s₁²/s₂²) F_{α/2, n₂−1, n₁−1}

(353.0122/258.3751)∗0.2484 ≤ σ₁²/σ₂² ≤ (353.0122/258.3751)∗4.0260

0.3394 ≤ σ₁²/σ₂² ≤ 5.5006

Since one is included in the interval, we cannot conclude that variances are different.

c. Build a 95% two-sided confidence interval for the differences on the percentage of motion restored. Is
there evidence to support that the facilities are different?

Treating the average % ROM achieved at each facility as a proportion (p̂₁ = 0.7490, p̂₂ = 0.674, n₁ = n₂ = 10):

(p̂₁ − p̂₂) − z_{0.025} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂) ≤ p₁ − p₂ ≤ (p̂₁ − p̂₂) + z_{0.025} √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂)

(0.7490 − 0.674) − 1.96 √(0.7490∗(1 − 0.7490)/10 + 0.674∗(1 − 0.674)/10) ≤ p₁ − p₂ ≤ (0.7490 − 0.674) + 1.96 √(0.7490∗(1 − 0.7490)/10 + 0.674∗(1 − 0.674)/10)

−0.3208 ≤ p₁ − p₂ ≤ 0.4708

Since zero is within the interval, there is not enough evidence to conclude the facilities are different in
terms of the percentage of motion restored.

4.46. An article in Solid State Technology (May 1987) describes an experiment to determine the effect of flow
rate on etch uniformity on a silicon wafer used in integrated-circuit manufacturing. Three flow rates are
tested, and the resulting uniformity (in percent) is observed for six test units at each flow rate. The data are
shown in Table 4E.7.

C₂F₆ Flow (SCCM)   Observations
                   1     2     3     4     5     6
125                2.7   2.6   4.6   3.2   3.0   3.8
160                4.6   4.9   5.0   4.2   3.6   4.2
200                4.6   2.9   3.4   3.5   4.1   5.1

a. Does flow rate affect etch uniformity? Answer this question by using an analysis of variance.

MTB > Stat > ANOVA > One-way


One-way ANOVA: uniformity versus FlowRate

Source DF SS MS F P
FlowRat 2 3.648 1.824 3.59 0.053
Error 15 7.630 0.509
Total 17 11.278

S = 0.7132 R-Sq = 32.34% R-Sq(adj) = 23.32%

Individual 95% CIs For Mean Based on


Pooled StDev
Level N Mean StDev -----+---------+---------+---------+----
125 6 3.3167 0.7600 (---------*----------)
160 6 4.4167 0.5231 (----------*---------)
200 6 3.9333 0.8214 (----------*---------)
-----+---------+---------+---------+----
3.00 3.60 4.20 4.80

Pooled StDev = 0.7132

Since the p-value = 0.053 > α = 0.05, we cannot conclude that the flow rate affects the etch uniformity.
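The same F statistic and p-value can be obtained outside Minitab; a minimal sketch in Python (assuming SciPy is available):

from scipy import stats

r125 = [2.7, 2.6, 4.6, 3.2, 3.0, 3.8]
r160 = [4.6, 4.9, 5.0, 4.2, 3.6, 4.2]
r200 = [4.6, 2.9, 3.4, 3.5, 4.1, 5.1]
f, p = stats.f_oneway(r125, r160, r200)
print(f, p)   # F ~ 3.59, p ~ 0.053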

b. Construct a boxplot of the etch uniformity data. Use this plot, together with the analysis of variance results,
to determine which gas flow rate would be best in terms or etch uniformity (a small percentage is best).

[Figure: Boxplot of etch uniformity by flow rate (125, 160, 200 SCCM)]

The boxplots overlap, which supports the conclusion from the ANOVA output. There does not seem to be much difference in etch uniformity between flow rates 160 and 200, or between 125 and 200, although the etch uniformity for flow rate 160 has a smaller variance compared to the other two flow rates.

c. Plot the residuals versus predicted flow. Interpret this plot.


[Figure: Residuals versus fitted values (response is uniformity)]

The residuals for fitted value of 4.4 have a smaller variance compared to the residuals at the other fitted
values; this supports the fact that the variance of etch uniformity at flow rate 160 was smaller than at the
other flow rates. Otherwise, the residuals appear to be random.

d. Does the normality assumption seem reasonable in this problem?

Constructing a normal probability plot of the residuals:

[Figure: Normal probability plot of residuals (response is etch uniformity)]

The normality assumption appears to be reasonable.

4.47. An article in the ACI Materials Journal (Vol. 84, 1987, pp. 213-216) describes several experiments
investigating the rodding of concrete to remove entrapped air. A 3-in diameter cylinder was used, and the
number of times this rod was used is the design variable. The resulting compressive strength of the
concrete specimen is the response. The data are shown in Table 4E.8.

a. Is there any difference in compressive strength due to rodding level? Answer the question by using
analysis of variance with alpha = 0.05.

Rodding level Compressive strength


10 1530
10 1530
10 1440
15 1610
15 1650
15 1500
20 1560
20 1730
20 1530
25 1500
25 1490
25 1510

MTB > Stat > ANOVA > One-way

One-way ANOVA: Compressive strength versus Rodding level

Source DF SS MS F P
Rodding level_1 3 28633 9544 1.87 0.214
Error 8 40933 5117
Total 11 69567

S = 71.53 R-Sq = 41.16% R-Sq(adj) = 19.09%

Individual 95% CIs For Mean Based on


Pooled StDev
Level N Mean StDev ----+---------+---------+---------+-----
10 3 1500.0 52.0 (-----------*----------)
15 3 1586.7 77.7 (-----------*-----------)
20 3 1606.7 107.9 (-----------*-----------)
25 3 1500.0 10.0 (-----------*----------)
----+---------+---------+---------+-----
1440 1520 1600 1680

Pooled StDev = 71.5

Since p-value = 0.214 > 0.05, there is no evidence that suggests rodding level influences compressive
strength.

NOTE: Let us now assume only data on levels 10 and 15 is available, and that we will use a 95% two-sided
confidence interval to evaluate whether there are any differences in terms of compressive strength between
the two rodding levels.

ν = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ]

= (2700/3 + 6033.3333/3)² / [ (2700/3)²/(3 − 1) + (6033.3333/3)²/(3 − 1) ] = 3.4914 ⇒ 3

(x̄₁ − x̄₂) − t_{0.025,3} √(s₁²/n₁ + s₂²/n₂) ≤ μ₁ − μ₂ ≤ (x̄₁ − x̄₂) + t_{0.025,3} √(s₁²/n₁ + s₂²/n₂)

(1500 − 1586.6667) − 3.1825 √(2700/3 + 6033.3333/3) ≤ μ₁ − μ₂ ≤ (1500 − 1586.6667) + 3.1825 √(2700/3 + 6033.3333/3)

−258.3748 ≤ μ₁ − μ₂ ≤ 85.0415

Since zero is within the confidence interval, we cannot conclude there is a difference in the compressive
strength associated with the two different rodding levels.

b. Construct box plots of compressive strength by rodding level. Provide a practical interpretation of the
plots.

MTB > Graph > Boxplot > One Y > With Groups

[Figure: Boxplot of compressive strength by rodding level (10, 15, 20, 25)]

There does not seem to be a significant difference in mean compressive strength between rodding levels 15 and 20, or between levels 10 and 25. Rodding level 25 shows remarkably smaller variability compared to the other levels.

c. Construct a normal probability plot of the residuals from this experiment. Does the assumption of a
normal distribution for compressive strength seem reasonable?

MTB > Graph > Probability Plot > Single

[Figure: Normal probability plot of residuals (normal, 95% CI) — Mean ≈ 0, StDev 61.00, N 12, AD 0.234, P-Value 0.736]

Normality assumption appears reasonable.

4.48. An article in environment International (Vol. 18, No.4, 1992) describes an experiment in which the amount
of radon released in showers was investigated. Radon-enriched water was used in the experiment, and six
different orifice diameters were tested in shower heads. The data from the experiment are shown in Table
4E.9.

a. Does the size of orifice affect the mean percentage of radon released? Use the analysis of variance.
MTB > Stat > ANOVA > One-way

One-way ANOVA: radon versus Diameter

Source DF SS MS F P
Diameter 5 1133.38 226.68 30.85 0.000
Error 18 132.25 7.35
Total 23 1265.63

S = 2.711 R-Sq = 89.55% R-Sq(adj) = 86.65%

Individual 95% CIs For Mean Based on


Pooled StDev
Level N Mean StDev ----+---------+---------+---------+-----
0.37 4 82.750 2.062 (---*---)
0.51 4 77.000 2.309 (---*---)
0.71 4 75.000 1.826 (---*---)
1.02 4 71.750 3.304 (----*---)
1.40 4 65.000 3.559 (---*---)
1.99 4 62.750 2.754 (---*---)
----+---------+---------+---------+-----
63.0 70.0 77.0 84.0

Pooled StDev = 2.711

Since p-value < 0.05, there is evidence that suggests orifice diameter influences the percentage radon
released.

b. Analyze the results from this experiment.

The normal probability of the residuals indicates a good model fit.

[Figure: Normal probability plot of residuals (response is radon released)]

A plot of the residuals versus the fitted values appears random; the assumption of constant variance seems
reasonable.
[Figure: Residuals versus fitted values (response is radon released)]

Now consider pairwise comparisons of the 6 diameter levels from Minitab:

Tukey Pairwise Comparisons

Grouping Information Using the Tukey Method and 95% Confidence

Diameter   N   Mean    Grouping
0.37       4   82.75   A
0.51       4   77.00   A B
0.71       4   75.00   B
1.02       4   71.75   B
1.40       4   65.00   C
1.99       4   62.75   C

Means that do not share a letter are significantly different.

Orifice Diameters of 0.37 and 0.51 had similar effects on percent radon released; orifice diameters 1.40 and
1.99 had the smallest percent radon released.
CHAPTER 5

Control Charts for Variables


Learning Objectives

After completing this chapter you should be able to:


1. Understand chance and assignable causes of variability in a process
2. Explain the statistical basis of the Shewhart control chart, including choice of sample size, control limits,
and sampling interval
3. Explain the rational subgroup concept
4. Explain how sensitizing rules and pattern recognition are used in conjunction with control charts
5. Know how to design variables control charts
6. Know how to set up and use x̄ and R control charts
7. Know how to set up and use x̄ and S control charts
8. Know how to set up and use control charts for individual measurements
9. Understand the importance of the normality assumption for individuals control charts and know how to
check this assumption
10. Set up and use CUSUM control charts for monitoring the process mean
11. Set up and use EWMA control charts for monitoring the process mean
12. Understand the difference between process capability and process potential
13. Calculate and properly interpret process capability ratios
14. Understand the role of the normal distribution in interpreting most process capability ratios

Important Terms and Concepts


Assignable causes of variation
Control chart
Control limits
CUSUM control chart
EWMA control chart
In-control process
Individuals control chart
One-sided process-capability ratios
Out-of-control process
Out-of-control-action plan (OCAP)
Pareto chart
PCR Cp
PCR Cpk
PCR Cpm
Process capability
R control chart
Rational subgroups
s control chart
Sampling frequency for control charts
Shewhart control charts
Signal resistance of a control chart
Specification limits
Statistical control of a process
Statistical process control (SPC)
Three-sigma control limits
Variable sample size on control charts
Variables control charts
x̄ control chart
Exercises

5.1. What are chance and assignable causes of variability? What part do they play in the operation and
interpretation of a Shewhart control chart?

Chance cause variability is the natural variability or “background noise” present in a system. This type of
variability is essentially unavoidable. An assignable cause of variability is an identifiable source of
variation in a process and is not a part of the normal variability of the process. For example, an assignable
cause could be an operator error or defective materials. A process that only has chance causes is said to be
in statistical control. Assignable causes will be flagged in a control chart so that the process can be
investigated.

5.2. Discuss the relationship between a control chart and statistical hypothesis testing.

A control chart is a test of the hypothesis that the process is in a state of statistical control. A point plotting
outside the control limits is equivalent to rejecting the hypothesis that the process is in a state of statistical
control. A point plotting within the control limits is equivalent of failing to reject the hypothesis that the
process is in a state of statistical control.

5.3. Discuss type I and type II errors relative to the control chart. What practical implication in terms of process
operation do these two types of errors have?

A type I error occurs when you reject the null hypothesis when in fact the null hypothesis is true. A type I
error in control charts is a false-positive signal, a point that is outside the control limits that is actually in
control. When we use z α/ 2=3 to construct the control limits, the probability of a type I error is .0027
(assuming a normal distribution). A type II error is when you fail to reject the null hypothesis when in fact
the null hypothesis is false. A type II error in a control chart is a false-negative, a point that was out of
control, but was not detected by the control chart.

5.4. What is meant by the statement that a process is in a state of statistical control?

A process can have natural or common cause variation. This type of variation is usually unavoidable. A
system that only has common cause variability is considered to be in a state of statistical control. When
other types of variability outside of the natural variability of the system occur, the process is no longer in
statistical control.

5.5. If a process is in a state of statistical control, does it necessarily follow that all or nearly all of the units of
product produced will be within the specification limits?

No; your process can be in a state of statistical control, but the process is not operating at the desired
specification limits. In this case, a redesign or change to the process may be necessary in order for the
process to operate within the specification limits.

5.6. Discuss the logic underlying the use of three-sigma limits on Shewhart control charts. How will the chart
respond if narrower limits are chosen? How will it respond if wider limits are chosen?

The use of three-sigma limits is common because this range contains 99.7% of points within a normal
distribution. Using narrower limits may cause out of control points to be detected quicker, but will increase
the Type I error, the probability of false positives. Using wider limits will decrease Type I error, but will
lead to more false negatives, as out of control behavior is not detected by the wider limits.

5.7. Discuss the rational subgroup concept. What part does it play in control chart analysis?
A rational subgroup is a sample selected so that if assignable causes are present, the chance for differences
between subgroups is maximized, while the chance for differences due to the assignable cause within a
subgroup will be minimized. Within subgroup variability determines the control limits. If the subgroups are
poorly chosen so that assignable causes occur within the subgroup, the control limits will be wider than
necessary. This will reduce the control chart’s ability to detect shifts. Careful consideration must be used
when selecting rational subgroups to ensure the control charts are effective in detecting shifts. Other
relevant points: the size of the sample and the frequency of sampling (For further details refer to pages
168-169).

5.8. When taking samples of subgroups from a process, do you want assignable causes occurring within the
subgroups or between them? Fully explain your answer.

We want assignable causes occurring between subgroups. Within subgroup variability determines the
control limits, so if there are assignable causes within the subgroups, the control limits will be wider than
necessary. This would decrease the chart’s ability to detect shifts.

5.9. A molding process uses a five-cavity mold for a part used in an automotive assembly. The wall thickness of
the part is the critical quality characteristic. It has been suggested to use X and R charts to monitor this
process, and to use as the subgroup or sample all five parts that result from a single “shot” of the machine.
What do you think of this sampling strategy? What impact does it have on the ability of the charts to detect
assignable causes?

This sampling approach is appropriate when the purpose is to detect process shifts. It will minimize the
chance of variability due to assignable causes within the sample and maximize chance of variability
between samples because the samples consist of units that were produced at the same time. It will provide a
good estimate of the standard deviation of the process as well.

5.10. A manufacturing process produces 500 parts per hour. A sample part is selected about every half-hour, and
after five parts are obtained, the average of these five measurements is plotted on a control chart. (a) Is this
an appropriate sampling scheme if the assignable cause in the process results in an instantaneous upward
shift in the mean that is of very short duration? (b) If your answer is no, propose an alternative procedure.

a. No, this sampling scheme would not be appropriate for this situation. Because a part is selected only
once every half hour, the sample will likely miss the short shift in the mean. The shift would go
undetected.

b. Because there could be quick shifts in the mean, small, more frequent samples would be more
appropriate.

5.11. Consider the sampling scheme proposed in Exercise 5.10. Is this scheme appropriate if the assignable cause
in the process results in a slow, prolonged upward shift? If your answer is no, propose an alternative
procedure.

Yes, this sampling scheme is more appropriate if the assignable cause results in a slow upward shift.

5.12. If the time order of production has not been recorded in a set of data from a process, is it possible to detect
the presence of assignable causes?

Yes, provided that the data are uncorrelated and stationary. However, knowing the time order may help
identify the cause of out of control behavior. If the control chart is based on subgroups, you would also
need to know which observations belong in which subgroups in order to distinguish within subgroup
variability and between subgroup variability. Not knowing the time order will also not allow you to detect
unusual runs or trends in the process as well which could be an indication of the mean slowly rising or
falling.
5.13. How do the costs of sampling, the costs of producing an excessive number of defective units, and the costs
of searching for assignable causes impact the choice of parameters of a control chart?

The costs of sampling will impact the sampling scheme for constructing the control charts. If sampling is
expensive, then smaller samples will be required which impacts the value of n . If the cost of producing an
excessive number of defective units is high, then we would want to be able to detect an assignable cause as
soon as possible and therefore may decrease the width of the control limits. This will lead to an increase in
false positive, however. Therefore, if the cost of searching for assignable causes is high, we would not want
to make the control limits too narrow.

5.14. Consider the control chart shown here. Does the pattern appear random?

No. The last four runs appear to plot at a distance of one-sigma or beyond from the center line.

5.15. Consider the control chart shown here. Does the pattern appear random?

Yes, the pattern appears random.

5.16. Consider the control chart shown here. Does the pattern appear random?
Yes, the pattern appears random.

5.17. Apply the Western Electric rules to the control chart in Exercise 5.14. Are any of the criteria for declaring
the process out of control satisfied?

Check:

1. Any point outside the 3-sigma control limits? NO.


2. 2 of 3 beyond 2 sigma of centerline? NO.
3. 4 of 5 at 1 sigma or beyond of centerline? YES. Points 17, 18, 19, and 20 are outside the lower 1-
sigma area.
4. 8 consecutive points on one side of centerline? NO.

The process can be declared out-of-control because the last four runs appear to plot at a distance of one-
sigma or beyond from the center line.

5.18. Apply the Western Electric rules to the control chart presented in Exercise 5.16. Would these rules result in
any out-of-control signals?

1. Any point outside the 3-sigma control limits? NO.


2. 2 of 3 beyond 2 sigma of centerline? YES. Points 16 and 18 appear to be beyond 2 sigma of
centerline.
3. 4 of 5 at 1 sigma or beyond of centerline? NO.
4. 8 consecutive points on one side of centerline? NO.
5.19. Consider the time-varying process behavior shown below. Match each of these several patterns of process performance to the corresponding x̄ and R charts shown in Figures (a) to (e) below.

Behavior   Control Chart
(a)        (2)
(b)        (4)
(c)        (5)
(d)        (1)
(e)        (3)

5.20. The thickness of a printed circuit board is an important quality parameter. Data on board thickness (in
inches) are given in Table 5E.1 for 25 samples of three boards each.

Sample Number x1 x2 x3
1 0.0629 0.0636 0.0640
2 0.0630 0.0631 0.0622
3 0.0628 0.0631 0.0633
4 0.0634 0.0630 0.0631
5 0.0619 0.0628 0.0630
6 0.0613 0.0629 0.0634
7 0.0630 0.0639 0.0625
8 0.0628 0.0627 0.0622
9 0.0623 0.0626 0.0633
10 0.0631 0.0631 0.0633
11 0.0635 0.0630 0.0638
12 0.0623 0.0630 0.0630
13 0.0635 0.0631 0.0630
14 0.0645 0.0640 0.0631
15 0.0619 0.0644 0.0632
16 0.0631 0.0627 0.0630
17 0.0616 0.0623 0.0631
18 0.0630 0.0630 0.0626
19 0.0636 0.0631 0.0629
20 0.0640 0.0635 0.0629
21 0.0628 0.0625 0.0616
22 0.0615 0.0625 0.0619
23 0.0630 0.0632 0.0630
24 0.0635 0.0629 0.0635
25 0.0623 0.0629 0.0630

a. Set up X and R control charts. Is the process in statistical control?

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R

[Figure: Xbar-R chart of thickness]

x̄ chart: UCL = 0.0639, CL = 0.0630, LCL = 0.0620

R chart: UCL = 0.0024, CL = 0.00092, LCL = 0

The process is not in statistical control; there are out-of-control points in both control charts.

b. Estimate the process standard deviation.

σ̂ = R̄/d₂ = 0.00092/1.693 = 0.0005

c. What are the limits that you would expect to contain nearly all the process measurements?

We expect 99.7% of points within a normal distribution to fall within ±3 standard deviations of the mean. Therefore, we expect 99.7% of process measurements to fall within x̄ ± 3σ̂.

So we expect 99.7% of process measurements to be between

0.0630 ± 3∗(0.0005) = (0.0615, 0.0645)
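These chart limits can be reproduced programmatically; a minimal sketch in Python (xbar_r_limits is a hypothetical helper; A2 = 1.023, D3 = 0, D4 = 2.574 are the standard control chart constants for subgroups of size n = 3):

def xbar_r_limits(xbars, ranges, a2=1.023, d3=0.0, d4=2.574):
    """Return (LCL, CL, UCL) for the x-bar chart and for the R chart."""
    xbarbar = sum(xbars) / len(xbars)
    rbar = sum(ranges) / len(ranges)
    x_limits = (xbarbar - a2 * rbar, xbarbar, xbarbar + a2 * rbar)
    r_limits = (d3 * rbar, rbar, d4 * rbar)
    return x_limits, r_limits

# With the 25 subgroups above (grand mean 0.062952, mean range 0.00092),
# this reproduces UCL = 0.063893 / LCL = 0.062011 on the x-bar chart
# and UCL = 0.002368 on the R chart.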

5.21. The net weight of a soft drink is to be monitored by X and R control charts using a sample size of n=5.
Data for 20 preliminary samples are shown in Table 5E.2.

Sample Number   x1     x2     x3     x4     x5
1               15.8   16.3   16.2   16.1   16.6
2               16.3   15.9   15.9   16.2   16.4
3               16.1   16.2   16.5   16.4   16.3
4               16.3   16.2   15.9   16.4   16.2
5               16.1   16.1   16.4   16.5   16.0
6               16.1   15.8   16.7   16.6   16.4
7               16.1   16.3   16.5   16.1   16.5
8               16.2   16.1   16.2   16.1   16.3
9               16.3   16.2   16.4   16.3   16.5
10              16.6   16.3   16.4   16.1   16.5
11              16.2   16.4   15.9   16.3   16.4
12              15.9   16.6   16.7   16.2   16.5
13              16.4   16.1   16.6   16.4   16.1
14              16.5   16.3   16.2   16.3   16.4
15              16.4   16.1   16.3   16.2   16.2
16              16.0   16.2   16.3   16.3   16.2
17              16.4   16.2   16.4   16.3   16.2
18              16.0   16.2   16.4   16.5   16.1
19              16.4   16.0   16.3   16.4   16.4
20              16.4   16.4   16.5   16.0   15.8

a. Set up X and R control charts using these data. Does the process exhibit statistical control?

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R

[Figure: Xbar-R chart of net weight]

x̄ chart: UCL = 16.5420, CL = 16.268, LCL = 15.9940

R chart: UCL = 1.004, CL = 0.475, LCL = 0

The process appears to exhibit statistical control.

b. Estimate the process mean and standard deviation.

μ̂ = x̄ = 16.268, σ̂ = R̄/d₂ = 0.475/2.326 = 0.2042

c. Does fill weight seem to follow a normal distribution?

MTB > Graph > Probability Plot

[Figure: Normal probability plot of net weight (normal, 95% CI) — Mean 16.27, StDev 0.2014, N 100, AD 1.257, P-Value <0.005]

The plotted points fall roughly along a straight line, so the normality assumption is workable for control charting purposes, although the small Anderson-Darling p-value (<0.005) likely reflects the weights being recorded to only 0.1 oz.

5.22. Rework Exercise 5.20 using an X -S chart.


a. Set up X and S control charts. Is the process in statistical control?

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S

[Figure: Xbar-S chart of thickness]

x̄ chart: UCL = 0.0639, CL = 0.0630, LCL = 0.0620

S chart: UCL = 0.0012, CL = 0.000478, LCL = 0

The process is not in statistical control; there are out-of-control points in both control charts.

b. Estimate the process standard deviation.

σ̂ = s̄/c₄ = 0.000478/0.8862 = 0.000539

c. What are the limits that you would expect to contain nearly all the process measurements?

We expect 99.7% of points within a normal distribution to fall within ±3 standard deviations of the mean. Therefore, we expect 99.7% of process measurements to fall within x̄ ± 3σ̂:

(0.0614, 0.0646)

5.23. Rework Exercise 5.21 using an X −S chart.

a. Set up X −S control charts using these data. Does the process exhibit statistical control?

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S

[Figure: Xbar-S chart of net weight]

x̄ chart: UCL = 16.5484, CL = 16.268, LCL = 15.9876

S chart: UCL = 0.4104, CL = 0.1965, LCL = 0

The process appears to be in statistical control.

b. Estimate the process mean and standard deviation.

μ̂ = x̄ = 16.268, σ̂ = s̄/c₄ = 0.1965/0.94 = 0.2092

c. Does fill weight seem to follow a normal distribution?

MTB > Graph > Probability Plot

[Figure: Normal probability plot of net weight (normal, 95% CI) — Mean 16.27, StDev 0.2014, N 100, AD 1.257, P-Value <0.005]

As in Exercise 5.21, the points fall roughly along a straight line, so the normality assumption is workable despite the small Anderson-Darling p-value.

5.24. Samples of six items are taken from a service process at regular intervals. A quality characteristic is
measured and x and R values are calculated from each sample. After 50 groups of size 6 have been taken
we have x=40 and R=4 . The data is normally distributed.

a. Compute control limits for the X and R control charts. Do all points fall within the control limits?

n = 6, x̄ = 40, R̄ = 4

R chart:
UCL = D₄R̄ = 2.004∗4 = 8.016
CL = R̄ = 4
LCL = D₃R̄ = 0∗4 = 0

x̄ chart:
UCL = x̄ + A₂R̄ = 40 + 0.483∗4 = 41.93
CL = x̄ = 40
LCL = x̄ − A₂R̄ = 40 − 0.483∗4 = 38.07

b. Estimate the mean and standard deviation of the process. What are the ± 2 standard deviation limits for the
individual data?

μ̂ = x̄ = 40

σ̂ = R̄/d₂ = 4/2.534 = 1.579 (using d₂ = 2.534 for n = 6)

±2 standard deviation limits for the individual data:

x̄ ± 2σ̂ = (40 − 2∗1.579, 40 + 2∗1.579) = (36.84, 43.16)

c. If the specification limits are 41 ± 5, do you think the process is capable of producing within these
specifications?

Ĉp = (USL − LSL)/(6σ̂) = (46 − 36)/(6∗1.579) = 1.06

Although Ĉp slightly exceeds 1, the process mean (40) is not centered in the specification band (36, 46), so Ĉpk = min(46 − 40, 40 − 36)/(3∗1.579) = 0.84. Since Ĉpk < 1, some of the process measurements will fall outside the specification limits.
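The limits and capability ratios follow directly from the summary statistics; a minimal sketch in Python (constants for n = 6: A2 = 0.483, D3 = 0, D4 = 2.004, d2 = 2.534):

xbarbar, rbar = 40, 4
sigma_hat = rbar / 2.534                                       # ~1.579
x_limits = (xbarbar - 0.483 * rbar, xbarbar + 0.483 * rbar)    # (38.07, 41.93)
r_limits = (0 * rbar, 2.004 * rbar)                            # (0, 8.016)
usl, lsl = 46, 36
cp = (usl - lsl) / (6 * sigma_hat)                             # ~1.06
cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma_hat)      # ~0.84
print(sigma_hat, x_limits, r_limits, cp, cpk)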

5.25. Table 5E.3 presents 20 subgroups of five measurements on the time it takes to
service a customer.

a. Set up X and R control charts for this process and verify that it is in statistical
control.

Sample Number   x1      x2      x3      x4      x5      x̄       R
1               138.1   110.8   138.7   137.4   125.4   130.1   27.9
2               149.3   142.1   105.0   134.0   92.3    124.5   57.0
3               115.9   135.6   124.2   155.0   117.4   129.6   39.1
4               118.5   116.5   130.2   122.6   100.2   117.6   30.0
5               108.2   123.8   117.1   142.4   150.9   128.5   42.7
6               102.8   112.0   135.0   135.0   145.8   126.1   43.0
7               120.4   84.3    112.8   118.5   119.3   111.0   36.1
8               132.7   151.1   124.0   123.9   105.1   127.4   46.0
9               136.4   126.2   154.7   127.1   173.2   143.5   46.9
10              135.0   115.4   149.1   138.3   130.4   133.6   33.7
11              139.6   127.9   151.1   143.7   110.5   134.6   40.6
12              125.3   160.2   130.4   152.4   165.1   146.7   39.8
13              145.7   101.8   149.5   113.3   151.8   132.4   50.0
14              138.6   139.0   131.9   140.2   141.1   138.1   9.2
15              110.1   114.6   165.1   113.8   139.6   128.7   54.8
16              145.2   101.0   154.6   120.2   117.3   127.6   53.3
17              125.9   135.3   121.5   147.9   105.0   127.1   42.9
18              129.7   97.3    130.5   109.0   150.5   123.4   53.2
19              123.4   150.0   161.6   148.4   154.2   147.5   38.3
20              144.8   138.3   119.6   151.8   142.7   139.4   32.2

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R

[Figure: Xbar-R chart of service time]

x̄ chart: UCL = 154.45, CL = 130.88, LCL = 107.31

R chart: UCL = 86.40, CL = 40.86, LCL = 0

The process appears to exhibit statistical control; hence, let us use these control chart limits to monitor future samples.

b. Following establishing of the control charts in part (a), 10 new samples have been
provided in Table 5E.4. Plot the new X and R values using the control chart limits
you established in part (a) and draw conclusions.

Sample Number   x1      x2      x3      x4      x5      x̄       R
1               131.0   184.8   182.2   143.3   212.8   170.8   81.8
2               181.3   193.2   180.7   169.1   174.3   179.7   24.0
3               154.8   170.2   168.4   202.7   174.4   174.1   48.0
4               157.5   154.2   169.1   142.2   161.9   157.0   26.9
5               216.3   174.3   166.2   155.5   184.3   179.3   60.8
6               186.9   180.2   149.2   175.2   185.0   175.3   37.8
7               167.8   143.9   157.5   171.8   194.9   167.2   51.0
8               178.2   186.7   142.4   159.4   167.6   166.9   44.2
9               162.6   143.6   132.8   168.9   177.2   157.0   44.5
10              172.1   191.7   203.4   150.4   196.3   182.8   53.0

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Figure: Xbar-R chart of service time with the 10 new samples plotted against the part (a) limits — every new sample mean plots above the UCL]

All new samples exceed the UCL indicating a significant increase in the time to service a customer.

c. Suppose that the assignable cause responsible for the action signals generated in part (b) has been identified and adjustments made to the process to correct its performance. Plot the x̄ and R values from the new subgroups shown in Table 5E.5, which were taken following the adjustment, against the control chart limits established in part (a). What are your conclusions?

Sample Number   x1      x2      x3      x4      x5      x̄       R
1               131.5   143.1   118.5   103.2   121.6   123.6   39.8
2               111.0   127.3   110.4   91.0    143.9   116.7   52.8
3               129.8   98.3    134.0   105.1   133.1   120.1   35.7
4               145.2   132.8   106.1   131.0   99.2    122.8   46.0
5               114.6   111.0   108.8   177.5   121.6   126.7   68.7
6               125.2   86.4    64.4    137.1   117.5   106.1   72.6
7               145.9   109.5   84.9    129.8   110.6   116.1   61.0
8               114.0   123.6   135.4   83.2    107.6   112.8   52.2
9               85.8    156.3   119.7   96.2    153.0   122.2   70.6
10              107.4   148.7   127.4   125.0   127.5   127.2   41.3

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Figure: Xbar-R chart of service time after the adjustment — x̄ chart: UCL = 153.18, CL = 127.07, LCL = 100.95; R chart: UCL = 95.7, CL = 45.3, LCL = 0]
Sample

The process now appears to be under statistical control.

5.26. Parts manufactured by an injection molding process are subjected to a compressive strength test. Twenty samples of five parts each are collected, and the compressive strengths (in psi) are shown in Table 5E.6.

Sample
x1 x2 x3 x4 x5 x R
Number
1 83.0 81.2 78.7 75.7 77.0 79.1 7.3
2 88.6 78.3 78.8 71.0 84.2 80.2 17.6
3 85.7 75.8 84.3 75.2 81.0 80.4 10.4
4 80.8 74.4 82.5 74.1 75.7 77.5 8.4
5 83.4 78.4 82.6 78.2 78.9 80.3 5.2
6 75.3 79.9 87.3 89.7 81.8 82.8 14.5
7 74.5 78.0 80.8 73.4 79.7 77.3 7.4
8 79.2 84.4 81.5 86.0 74.5 81.1 11.4
9 80.5 86.2 76.2 83.9 80.2 81.4 9.9
10 75.7 75.2 71.1 82.1 74.3 75.7 10.9
11 80.0 81.5 78.4 73.8 78.1 78.4 7.7
12 80.6 81.8 79.3 73.8 81.7 79.4 8.0
13 82.7 81.3 79.1 82.0 79.5 80.9 3.6
14 79.2 74.9 78.6 77.7 75.3 77.1 4.3
15 85.5 82.1 82.8 73.4 71.7 79.1 13.8
16 78.8 79.6 80.2 79.1 80.8 79.7 2.0
17 82.1 78.2 75.5 78.2 82.1 79.2 6.6
18 84.5 76.9 83.5 81.2 79.2 81.1 7.6
19 79.0 77.8 81.2 84.4 81.6 80.8 6.6
20 84.5 73.1 78.6 78.7 80.6 79.1 11.4

NOTE: There is a typo in sample 9, value x 4 . The value in the book, 64.1, does not match the sample mean
and sample range for that sample. x 4 =83.9 is used in this solution.

a. Establish X and R control charts for compressive strength using these data. Is the process in statistical
control?

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R
[Figure: Xbar-R chart of compressive strength]

x̄ chart: UCL = 84.58, CL = 79.53, LCL = 74.49

R chart: UCL = 18.49, CL = 8.75, LCL = 0

The process appears to be in a state of statistical control.

b. After establishing the control charts in part (a), 15 new subgroups were collected; the compressive
strengths are shown in Table 5E.7. Plot the X and R values against the control limits from part (a) and draw
conclusions.

Sample
x1 x2 x3 x4 x5 x R
Number
1 68.9 81.5 78.2 80.8 81.5 78.2 12.6
2 69.8 68.6 80.4 84.3 83.9 77.4 15.7
3 78.5 85.2 78.4 80.3 81.7 80.8 6.8
4 76.9 86.1 86.9 94.4 83.9 85.6 17.5
5 93.6 81.6 87.8 79.6 71.0 82.7 22.5
6 65.5 86.8 72.4 82.6 71.4 75.9 21.3
7 78.1 65.7 83.7 93.7 93.4 82.9 27.9
8 74.9 72.6 81.6 87.2 72.7 77.8 14.6
9 78.1 77.1 67.0 75.7 76.8 74.9 11.0
10 78.7 85.4 77.7 90.7 76.7 81.9 14.0
11 85.0 60.2 68.5 71.1 82.4 73.4 24.9
12 86.4 79.2 79.8 86.0 75.4 81.3 10.9
13 78.5 99.0 78.3 71.4 81.8 81.7 27.6
14 68.8 62.0 82.0 77.5 76.1 73.3 19.9
15 83.0 83.7 73.1 82.2 95.3 83.5 22.2

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-R

In Xbar-R Options Estimate Tab, use the “omit following subgroups when estimating parameters” to plot
the X and R values against the control limits from part (a).
[Figure: Xbar-R chart of compressive strength with the 15 new subgroups plotted against the part (a) limits]

The control chart indicates that the process is in statistical control until the 25 th sample, when the process
variability goes out of control. While the 32nd sample is within the control limits in the R chart, it fails
Western Electric’s 4th Rule; it is the 9th consecutive sample above the center line. This indicates the process
variability has become unstable. There are also points where the sample mean is out control.

5.27. Reconsider the data presented in Exercise 5.26.

a. Establish X and S control charts for compressive strength using these data. Is the process in statistical
control? After establishing the control charts, 15 new subgroups were collected; the compressive strengths
are shown in Table 5E.7. Plot the X and S values against the previous control limits and draw conclusions.

MTB > Stat > Control Charts > Variables Charts for Subgroups > Xbar-S

[Figure: Xbar-S chart of compressive strength]

X chart :UCL=84.63 ,CL=79.53 , LCL=74.43


S chart :UCL=7.464 ,CL=3.573 , LCL=0
The process appears to be in a state of statistical control.

X̄-S charts with the additional subgroups:

[Xbar-S chart of compressive strength, samples 1–35, plotted against the part (a) limits (Minitab output); out-of-control points are flagged on both charts]

The new subgroups are not in control; the S chart first signals an out-of-control point at the 22nd sample.

b. Does the S chart detect the shift in process variability more quickly than the R chart did originally in part
(b) of Exercise 5.26?

Yes. The S chart detects the out-of-control behavior faster than the R chart: the R chart first signals the shift at sample 25, while the S chart signals at sample 22.

5.28. One-pound coffee cans are filled by a machine, sealed, and then weighed by a local coffee store. After
adjusting for the weight of the can, any package that weighs less than 16 oz is cut out of the conveyor. The
weights of 25 successive cans are shown in Table 5E.8. Set up a moving range control chart and a control
chart for individuals. Estimate the mean and standard deviation of the amount of coffee packed in each can.
Is it reasonable to assume that weight is normally distributed? If the process remains in statistical control at
this level, what percentage of cans will be underfilled?

Can Number   Weight   Can Number   Weight
1 16.11 14 16.12
2 16.08 15 16.10
3 16.12 16 16.08
4 16.10 17 16.13
5 16.10 18 16.15
6 16.11 19 16.12
7 16.12 20 16.10
8 16.09 21 16.08
9 16.12 22 16.07
10 16.10 23 16.11
11 16.09 24 16.13
12 16.07 25 16.10
13 16.13

MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[I-MR chart of can weight (Minitab output): individuals chart UCL = 16.1684, CL = 16.1052, LCL = 16.0420; MR chart UCL = 0.07760, CL = 0.02375, LCL = 0]

μ̂ = x̄ = 16.1052, M̄R = 0.02375, σ̂ = M̄R/d₂ = 0.02375/1.128 = 0.02105

MTB > Graph > Probability Plot

[Normal probability plot of weight (95% CI): mean 16.11, StDev 0.02044, N 25, AD 0.397, p-value 0.342]

The normality assumption is reasonable.

Percentage of cans that are underfilled:

P(X < 16) = P(Z < (16 − 16.1052)/0.02105) = P(Z < −4.998) ≈ 3×10⁻⁷

So about 0.00003% of cans will be underfilled.
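As a cross-check on the Minitab results, here is a minimal Python sketch of the same calculations; the 25 weights are taken from Table 5E.8, and phi is just the standard normal CDF.

from math import erf, sqrt

weights = [16.11, 16.08, 16.12, 16.10, 16.10, 16.11, 16.12, 16.09, 16.12,
           16.10, 16.09, 16.07, 16.13, 16.12, 16.10, 16.08, 16.13, 16.15,
           16.12, 16.10, 16.08, 16.07, 16.11, 16.13, 16.10]

phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))             # standard normal CDF

mr = [abs(b - a) for a, b in zip(weights, weights[1:])]  # moving ranges
mr_bar = sum(mr) / len(mr)                               # 0.02375
mu_hat = sum(weights) / len(weights)                     # 16.1052
sigma_hat = mr_bar / 1.128                               # MR-bar / d2 = 0.02105
print(mu_hat, sigma_hat, phi((16 - mu_hat) / sigma_hat)) # underfill probability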

5.29. Fifteen successive heats of a steel alloy are tested for hardness. The resulting data
are shown in Table 5E.9. Set up a control chart for the moving range and a control
chart for individual hardness measurements. Is it reasonable to assume that
hardness is normally distributed?

Heat Hardness Heat Hardness


1 52 9 58
2 51 10 51
3 54 11 54
4 55 12 59
5 50 13 53
6 52 14 54
7 50 15 55
8 51

MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR

[I-MR chart of hardness (Minitab output)]

Individuals chart: UCL = 61.82, CL = 53.27, LCL = 44.72
MR chart: UCL = 10.50, CL = 3.21, LCL = 0
MTB > Graph > Probability Plot

[Normal probability plot of hardness (95% CI): mean 53.27, StDev 2.712, N 15, AD 0.465, p-value 0.217]

Normality assumption is reasonable.

5.30. The viscosity of a polymer is measured hourly. Measurements for the last 20 hours are shown as follows:

Test Viscosity Test Viscosity


1 2838 11 3174
2 2785 12 3102
3 3058 13 2762
4 3064 14 2975
5 2996 15 2719
6 2882 16 2861
7 2878 17 2797
8 2920 18 3078
9 3050 19 2964
10 2870 20 2805

a. Does viscosity follow a normal distribution?

MTB > Graph > Probability Plot

[Normal probability plot of viscosity (95% CI): mean 2929, StDev 129.0, N 20, AD 0.319, p-value 0.511]

The normality assumption for viscosity measurements appears reasonable.

b. Set up a control chart on viscosity and a moving range chart. Does the process exhibit statistical control?

MTB>Stat>Control Charts>Variables Charts for Individuals>I-MR

[I-MR chart of viscosity (Minitab output)]

Individuals chart: UCL = 3322.9, CL = 2928.9, LCL = 2534.9
MR chart: UCL = 484.1, CL = 148.2, LCL = 0
The process appears to exhibit statistical control.

c. Estimate the process mean and standard deviation.

μ̂ = x̄ = 2928.9, M̄R = 148.158, σ̂ = M̄R/d₂ = 148.158/1.128 = 131.34

5.31. Continuation of Exercise 5.30. The next five measurements on viscosity are 3163, 3199, 3054, 3147, and 3156. Do these measurements indicate that the process is in statistical control?

MTB>Stat>Control Charts>Variables Charts for Individuals>I-MR


[I-MR chart of viscosity including the five new measurements, plotted against the limits from Exercise 5.30 (Minitab output)]

The next five measurements are in statistical control.

5.32. A machine is used to fill cans with an energy drink. A single sample can is selected every hour and the
weight of the can is obtained. Since the filling process is automated, it has very stable variability, and long-
term experience indicates that σ = 0.05 oz. The individual observations for 24 hours of operation are shown
in Table 5E.11.

a. Assuming that the process target is 8.02 oz, create a CUSUM chart for this process. Design the chart using
the standardized values h=4.77 and k =0.5 .

Sample Number   x   Sample Number   x
1 8.00 13 8.05
2 8.01 14 8.04
3 8.02 15 8.03
4 8.01 16 8.05
5 8.00 17 8.06
6 8.01 18 8.04
7 8.06 19 8.05
8 8.07 20 8.06
9 8.01 21 8.04
10 8.04 22 8.02
11 8.02 23 8.03
12 8.01 24 8.05

MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM
[CUSUM chart of can weight (Minitab output)]

CUSUM chart: UCL = 0.2385, CL = 0, LCL = −0.2385

There are no out-of-control points.

b. Does the value of σ =0.05 seem reasonable for this process?

M̄R = 0.01870, σ̂ = M̄R/d₂ = 0.01870/1.128 = 0.0166

The estimate σ̂ = 0.0166 is substantially lower than the historical value σ = 0.05, so σ = 0.05 does not seem reasonable for this process.
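A minimal Python sketch of the tabular CUSUM behind the Minitab chart, assuming the 24 can weights from Table 5E.11 are in a list x:

def tabular_cusum(x, mu0=8.02, sigma=0.05, k=0.5, h=4.77):
    # K and H are the reference and decision values in original units.
    K, H = k * sigma, h * sigma
    cp = cm = 0.0
    for i, xi in enumerate(x, start=1):
        cp = max(0.0, xi - (mu0 + K) + cp)   # upper CUSUM C+
        cm = max(0.0, (mu0 - K) - xi + cm)   # lower CUSUM C-
        if cp > H or cm > H:
            print(f"signal at sample {i}: C+={cp:.4f}, C-={cm:.4f}")
    return cp, cm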

5.33. Rework Exercise 5.32 using the standardized CUSUM parameters of h = 8.01 and k = 0.25. Compare the results with those obtained previously in Exercise 5.32. What can you say about the theoretical performance of those two CUSUM schemes?

μ₀ = 8.02, σ = 0.05, k = 0.25, h = 8.01, H = hσ = 8.01(0.05) = 0.4005

MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM

[CUSUM chart of can weight (Minitab output)]

CUSUM chart: UCL = 0.4005, CL = 0, LCL = −0.4005

There are no out-of-control signals.

The in-control ARL of each scheme can be found from Siegmund's approximation. For this scheme (k = 0.25, h = 8.01), with δ* = 0:

Δ⁺ = δ* − k = 0 − 0.25 = −0.25, Δ⁻ = −δ* − k = −0.25, b = h + 1.166 = 8.01 + 1.166 = 9.176

ARL₀⁺ = ARL₀⁻ = [exp(−2Δb) + 2Δb − 1]/(2Δ²) = [exp(4.588) − 4.588 − 1]/[2(−0.25)²] ≈ 741.677

1/ARL₀ = 1/ARL₀⁺ + 1/ARL₀⁻ = 2/741.677 = 0.0027, so ARL₀ = 1/0.0027 = 370.84

The standardized design h = 4.77, k = 0.5 used in Exercise 5.32 also gives ARL₀ ≈ 370, so the theoretical in-control performance of the two CUSUM schemes is essentially the same.
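A minimal Python sketch of Siegmund's approximation as used above:

from math import exp

def siegmund_arl(delta_star, k, h):
    # One-sided CUSUM ARL; delta_star = 0 gives the in-control ARL.
    d = delta_star - k
    b = h + 1.166
    return (exp(-2 * d * b) + 2 * d * b - 1) / (2 * d * d)

one_sided = siegmund_arl(0.0, k=0.25, h=8.01)   # ~741.7
print(1 / (2 / one_sided))                      # two-sided ARL0 ~ 370.8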

5.34. The data in Table 5E.12 are the times it takes for a local payroll company to process checks (in minutes).
The target value for the turnaround times is μ0=950 minutes (two working days).

953 985 949 937 959 948 958 952
945 973 941 946 939 937 955 931
972 955 966 954 948 955 947 928
945 950 966 935 958 927 941 937
975 948 934 941 963 940 938 950
970 957 937 933 973 962 945 970
959 940 946 960 949 963 963 933

a. Estimate the process standard deviation.

Construct a Moving Range chart to find MR and estimate standard deviation.

MTB > Stat > Control Charts > Variables Charts for Individuals > Moving Range

[Moving range chart of processing time (Minitab output): UCL = 44.83, CL = 13.72, LCL = 0; one point plots above the UCL]

M̄R = 13.72, σ̂ = M̄R/d₂ = 13.72/1.128 = 12.16

b. Create a CUSUM chart for this process, using standardized values h = 5 and k = 1/2. Interpret this chart.

μ₀ = 950, σ̂ = 12.16, k = 1/2, h = 5

MTB > Stat > Control Charts > Time-Weighted Charts > CUSUM

[CUSUM chart of processing time (Minitab output)]

Test Results for CUSUM Chart of Time

TEST. One point beyond control limits.
Test Failed at points: 12, 13

CUSUM chart: UCL = 60.8, CL = 0, LCL = −60.8

The process signals out of control at sample 12. Since the upper CUSUM had been accumulating for about 10 samples at that point, the assignable cause likely occurred around sample 12 − 10 = 2.

5.35. Calcium hardness is measured hourly for a public swimming pool. Data (in ppm) for the last 32 hours are
shown in Table 5E.13 (read down from left). The process target is μ0 = 175 ppm.

a. Estimate the process standard deviation.

160 186 190 206
157 195 189 210
150 179 185 216
151 184 182 212
153 175 181 211
154 192 180 202
158 186 183 205
162 197 186 197

Construct a Moving Range chart to find MR and estimate standard deviation.

MTB > Stat > Control Charts > Variables Charts for Individuals > Moving Range
[Moving range chart of calcium hardness (Minitab output): UCL = 20.76, CL = 6.35, LCL = 0; one point plots above the UCL]

M̄R = 6.35, σ̂ = M̄R/d₂ = 6.3548/1.128 = 5.634

b. Construct a CUSUM chart for this process using standardized values of h=5 and k =1/2.

μ₀ = 175, σ̂ = 5.634, k = 1/2, h = 5

[CUSUM chart of hardness (Minitab output): the upper CUSUM climbs far above the control limit]

CUSUM chart: UCL = 28.2, CL = 0, LCL = −28.2

5.36. Reconsider the data in Exercise 5.32. Set up an EWMA control chart with λ=0.2 and L=3 for this
process. Interpret the results.

From Exercise 5.32: σ = 0.05, μ₀ = 8.02, λ = 0.2, L = 3

UCL = μ₀ + Lσ√(λ/(2−λ)) = 8.02 + 3(0.05)√(0.2/1.8) = 8.07
CL = μ₀ = 8.02
LCL = μ₀ − Lσ√(λ/(2−λ)) = 8.02 − 3(0.05)√(0.2/1.8) = 7.97

MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of can weight (Minitab output)]

EWMA chart: UCL = 8.0700, CL = 8.02, LCL = 7.9700

The process is in control.
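A minimal Python sketch of the EWMA statistic with exact (time-varying) limits, assuming the can weights are in a list x; the limits converge to the steady-state values 8.07 and 7.97 computed above.

from math import sqrt

def ewma(x, mu0=8.02, sigma=0.05, lam=0.2, L=3.0):
    z, out = mu0, []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        w = L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        out.append((z, mu0 - w, mu0 + w))    # (EWMA, LCL_i, UCL_i)
    return out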

5.37. Reconstruct the control chart in Exercise 5.36 using λ=0.1 and L=3 for this process. Interpret the
results.
σ = 0.05, μ₀ = 8.02, λ = 0.1, L = 3

UCL = μ₀ + Lσ√(λ/(2−λ)) = 8.02 + 3(0.05)√(0.1/1.9) = 8.05
CL = μ₀ = 8.02
LCL = μ₀ − Lσ√(λ/(2−λ)) = 8.02 − 3(0.05)√(0.1/1.9) = 7.99

MTB > Stat > Control Charts > Time-Weighted Charts > EWMA

[EWMA chart of can weight (Minitab output)]

EWMA chart: UCL = 8.0543, CL = 8.02, LCL = 7.9857

The process is in control.

5.38. Reconsider the data in Exercise 5.34. Apply an EWMA control chart to these data using λ=0.1 and
L=2.7 .
λ = 0.1, L = 2.7, σ̂ = 12.16, CL = μ₀ = 950

UCL = μ₀ + Lσ̂√(λ/(2−λ)) = 950 + 2.7(12.16)√(0.1/1.9) = 957.53
LCL = μ₀ − Lσ̂√(λ/(2−λ)) = 950 − 2.7(12.16)√(0.1/1.9) = 942.47

MTB > Stat > Control Charts > Time-Weighted Charts > EWMA

[EWMA chart of processing time (Minitab output): UCL = 957.53, CL = 950, LCL = 942.47]

Test Results for EWMA Chart of time

TEST 1. One point more than 2.70 standard deviations from center line.
Test Failed at points: 8, 12, 13

Process is out of control at samples 8, 12, and 13.

5.39. Reconstruct the control chart in Exercise 5.34 using λ=0.4 and L=3 . Compare this chart to the one
constructed in Exercise 5.38.

λ = 0.4, L = 3, σ̂ = 12.16, CL = μ₀ = 950

UCL = μ₀ + Lσ̂√(λ/(2−λ)) = 950 + 3(12.16)√(0.4/1.6) = 968.25
LCL = μ₀ − Lσ̂√(λ/(2−λ)) = 950 − 3(12.16)√(0.4/1.6) = 931.75

MTB > Stat > Control Charts > Time-Weighted Charts > EWMA
[EWMA chart of processing time (Minitab output): UCL = 968.25, CL = 950, LCL = 931.75]

Test Results for EWMA Chart of time

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 70

With λ = 0.4, the EWMA chart does not detect an out-of-control point until observation 70, compared to observation 8 for the λ = 0.1 EWMA chart in Exercise 5.38.

5.40. Reconsider the data in Exercise 5.35. Set up and apply an EWMA control chart to these data using
λ=0.05 and L=2.6 .

λ = 0.05, L = 2.6, σ̂ = 5.634, CL = μ₀ = 175

UCL = μ₀ + Lσ̂√(λ/(2−λ)) = 175 + 2.6(5.634)√(0.05/1.95) = 177.30
LCL = μ₀ − Lσ̂√(λ/(2−λ)) = 175 − 2.6(5.634)√(0.05/1.95) = 172.70

MTB > Stat > Control Charts > Time-Weighted Charts > EWMA

[EWMA chart of hardness (Minitab output): UCL = 177.30, CL = 175, LCL = 172.70; the EWMA rises well above the UCL]

The process is out of control: the process mean (μ̂ = 183.594) is considerably larger than the process target of μ₀ = 175.
5.41. A process is in control with x̄ = 100, s̄ = 1.05, and n = 5. The process specifications are at 95 ± 10. The quality characteristic has a normal distribution.

μ̂ = x̄ = 100, s̄ = 1.05, σ̂ = s̄/c₄ = 1.05/0.9400 = 1.117

a. Estimate the potential capability.

Ĉp = (USL − LSL)/(6σ̂) = [(95 + 10) − (95 − 10)]/(6 × 1.117) = 2.98

b. Estimate the actual capability.

Ĉpl = (μ̂ − LSL)/(3σ̂) = (100 − 85)/(3 × 1.117) = 4.48
Ĉpu = (USL − μ̂)/(3σ̂) = (105 − 100)/(3 × 1.117) = 1.49
Ĉpk = min(Ĉpl, Ĉpu) = 1.49

c. How much could the fallout in the process be reduced if the process were corrected to operate at the nominal specification?

p̂_actual = P(X < LSL) + P(X > USL)
  = P(Z < (85 − 100)/1.117) + [1 − P(Z < (105 − 100)/1.117)]
  = P(Z < −13.429) + [1 − P(Z < 4.476)] = 0.0000 + (1 − 0.999996) = 0.000004

If the process were centered at the nominal value of 95:

p̂_potential = P(Z < (85 − 95)/1.117) + [1 − P(Z < (105 − 95)/1.117)]
  = P(Z < −8.953) + [1 − P(Z < 8.953)] ≈ 0.000000
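A minimal Python sketch of these capability and fallout calculations, using the normal CDF from scipy:

from scipy.stats import norm

def capability(mu, sigma, lsl, usl):
    cp  = (usl - lsl) / (6 * sigma)
    cpk = min(mu - lsl, usl - mu) / (3 * sigma)
    fallout = norm.cdf((lsl - mu) / sigma) + norm.sf((usl - mu) / sigma)
    return cp, cpk, fallout

print(capability(100, 1.117, 85, 105))  # Cp=2.98, Cpk=1.49, fallout ~ 4e-6
print(capability(95, 1.117, 85, 105))   # centered at nominal: fallout ~ 0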

5.42. A process is in statistical control with x́=199 and R=3.5. The control chart uses a sample size of n=4 .
Specifications are at 200 ± 8. The quality characteristic is normally distributed.

n = 4, μ̂ = x̄ = 199, R̄ = 3.5, σ̂ = R̄/d₂ = 3.5/2.059 = 1.6998

USL = 200 + 8 = 208, LSL = 200 − 8 = 192
a. Estimate the potential capability of the process.

Ĉp = (USL − LSL)/(6σ̂) = (208 − 192)/(6 × 1.6998) = 1.57

b. Estimate the actual process capability.

Ĉpu = (USL − μ̂)/(3σ̂) = (208 − 199)/(3 × 1.6998) = 1.765
Ĉpl = (μ̂ − LSL)/(3σ̂) = (199 − 192)/(3 × 1.6998) = 1.373
Ĉpk = min(Ĉpu, Ĉpl) = 1.373

c. How much improvement could be made in process performance if the mean could be centered at the nominal value?

The current fraction nonconforming is:

p̂_actual = P(X < LSL) + P(X > USL)
  = P(Z < (192 − 199)/1.6998) + [1 − P(Z < (208 − 199)/1.6998)]
  = P(Z < −4.118) + [1 − P(Z < 5.295)] = 0.000019 + [1 − 1] = 0.000019

If the process mean could be centered at the specification target, the fraction nonconforming would be:

p̂_potential = P(Z < (192 − 200)/1.6998) + [1 − P(Z < (208 − 200)/1.6998)]
  = P(Z < −4.706) + [1 − P(Z < 4.706)] = 0.0000013 + (1 − 0.99999987) = 0.0000026

5.43. A process is in statistical control with x=39.7 and R=2.5. The control chart uses a sample size of
n=2. Specifications are at 40 ± 5. The quality characteristic is normally distributed.

n = 2, μ̂ = x̄ = 39.7, R̄ = 2.5, σ̂ = R̄/d₂ = 2.5/1.128 = 2.216

USL = 40 + 5 = 45, LSL = 40 − 5 = 35


a. Estimate the potential capability of the process.

Ĉp = (USL − LSL)/(6σ̂) = (45 − 35)/(6 × 2.216) = 0.75

b. Estimate the actual process capability.

Ĉpl = (μ̂ − LSL)/(3σ̂) = (39.7 − 35)/(3 × 2.216) = 0.71
Ĉpu = (USL − μ̂)/(3σ̂) = (45 − 39.7)/(3 × 2.216) = 0.80
Ĉpk = min(Ĉpl, Ĉpu) = 0.71

c. How much improvement could be made in process performance if the mean could be centered at the nominal value?

The current fraction nonconforming is:

p̂_actual = P(X < LSL) + P(X > USL)
  = P(Z < (35 − 39.7)/2.216) + [1 − P(Z < (45 − 39.7)/2.216)]
  = P(Z < −2.12094) + [1 − P(Z < 2.39170)] = 0.0169634 + [1 − 0.991615] = 0.0253

If the process mean could be centered at the specification target, the fraction nonconforming would be:

p̂_potential = 2 × P(Z < (35 − 40)/2.216) = 2 × P(Z < −2.25632) = 2(0.0120253) = 0.0241

5.44. A process is in control with x́=75 and s=2. The process specifications are at 80 ± 8. The sample size is
n=5.
n = 5, μ̂ = x̄ = 75, s̄ = 2, σ̂ = s̄/c₄ = 2/0.9400 = 2.128

USL = 80 + 8 = 88, LSL = 80 − 8 = 72
a. Estimate the potential capability.

Ĉp = (USL − LSL)/(6σ̂) = (88 − 72)/(6 × 2.128) = 1.25

b. Estimate the actual capability.

Ĉpl = (μ̂ − LSL)/(3σ̂) = (75 − 72)/(3 × 2.128) = 0.47
Ĉpu = (USL − μ̂)/(3σ̂) = (88 − 75)/(3 × 2.128) = 2.04
Ĉpk = min(Ĉpl, Ĉpu) = 0.47

c. How much could process fallout be reduced by shifting the mean to the nominal dimension? Assume that the quality characteristic is normally distributed.

The current fraction nonconforming is:

p̂_actual = P(X < LSL) + P(X > USL)
  = P(Z < (72 − 75)/2.128) + [1 − P(Z < (88 − 75)/2.128)]
  = P(Z < −1.40977) + [1 − P(Z < 6.10902)] = 0.0793038 + [1 − 1.00000] = 0.0793

If the process mean could be centered at the specification target, the fraction nonconforming would be:

p̂_potential = 2 × P(Z < (72 − 80)/2.128) = 2 × P(Z < −3.75940) = 2(0.0000852) = 0.0001704

5.45. The weights of nominal 1-kg containers of a concentrated chemical ingredient are shown in Table 5E.14.
Prepare a normal probability plot of the data and estimate process capability.

0.9475 0.9775 0.9965 1.0075 1.0180
0.9705 0.9860 0.9975 1.0100 1.0200
0.9770 0.9960 1.0050 1.0175 1.0250

MTB > Graph > Probability Plot

Add percentile lines at Y values 50 and 84 to estimate p₅₀ and p₈₄.

[Normal probability plot of weight (95% CI) with percentile lines at 50% and 84%: mean 0.9968, StDev 0.02167, N 15, AD 0.323, p-value 0.492]

A normal probability plot of the 1-kg container weights shows that the normality assumption for the data is
reasonable.
x̄ ≈ p₅₀ = 0.9975, p₈₄ = 1.0200

σ̂ = p₈₄ − p₅₀ = 1.0200 − 0.9975 = 0.0225

6σ̂ = 6(0.0225) = 0.1350
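A minimal Python sketch of this percentile method, with the data from Table 5E.14; numpy's interpolated percentiles will differ slightly from values read off the graphical percentile lines.

import numpy as np

w = np.array([0.9475, 0.9775, 0.9965, 1.0075, 1.0180,
              0.9705, 0.9860, 0.9975, 1.0100, 1.0200,
              0.9770, 0.9960, 1.0050, 1.0175, 1.0250])
p50, p84 = np.percentile(w, [50, 84])
sigma_hat = p84 - p50     # one-sigma distance above the median for a normal
print(p50, p84, sigma_hat, 6 * sigma_hat)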
5.46. Consider the package weight data in Exercise 5.45. Suppose there is a lower specification at 0.985 kg.
Calculate an appropriate process capability ratio for this material. What percentage of the packages
produced by this process is estimated to be below the specification limit?

From Exercise 5.45: μ̂ = 0.9975, σ̂ = 0.0225

Ĉpl = (μ̂ − LSL)/(3σ̂) = (0.9975 − 0.985)/(3 × 0.0225) = 0.1852

p̂ = P(X < LSL) = P(Z < (LSL − μ̂)/σ̂) = P(Z < (0.985 − 0.9975)/0.0225) = P(Z < −0.5556) = 0.289242

So about 28.92% of the packages produced by this process are estimated to be below the specification limit.

5.47. The height of the disk used in a computer disk drive assembly is a critical quality
characteristic. Table 5E.15 gives the heights (in mm) of 25 disks randomly selected
from the manufacturing process. Prepare a normal probability plot of the disk height
data and estimate process capability.

20.0106 20.0090 20.0067 19.9772 20.0001


19.9940 19.9876 20.0042 19.9986 19.9958
20.0075 20.0018 20.0059 19.9975 20.0089
20.0045 19.9891 19.9956 19.9884 20.0154
20.0056 19.9831 20.0040 20.0006 20.0047

MTB > Graph > Probability Plot

Add percentile lines at Y values 50 and 84 to estimate p₅₀ and p₈₄.

[Normal probability plot of disk height (95% CI) with percentile lines at 50% and 84%: mean 20.00, StDev 0.009242, N 25, AD 0.515, p-value 0.174]

A normal probability plot of computer disk heights shows the distribution is close to normal.

x̄ ≈ p₅₀ = 19.99986, p₈₄ = 20.00905

σ̂ = p₈₄ − p₅₀ = 20.00905 − 19.99986 = 0.00919

6σ̂ = 6(0.00919) = 0.0551
5.48. The length of time required to reimburse employee expense claims is a characteristic that can be used to
describe the performance of the process. Table 5E.16 gives the cycle times (in days) of 30 randomly
selected employee expense claims. Estimate the capability of this process.

5 5 16 17 14 12
8 13 6 12 11 10
18 18 13 12 19 14
17 16 11 22 13 16
10 18 12 12 12 14

MTB > Graph > Probability Plot

Add percentile lines at Y values 50 and 84 to estimate p₅₀ and p₈₄.

[Normal probability plot of cycle time (95% CI) with percentile lines at 50% and 84%: mean 13.2, StDev 4.097, N 30, AD 0.401, p-value 0.340]

A normal probability plot of reimbursement time shows the distribution is approximately normal.

x̄ = p₅₀ = 13.2, p₈₄ = 17.27

σ̂ = p₈₄ − p₅₀ = 17.27 − 13.2 = 4.07

6σ̂ = 6(4.07) = 24.42
5.49. An electric utility tracks the response time to customer reported outages. The data in
Table 5E.17 are a random sample of 40 of the response times (in minutes) for one
operating division of this utility during a single month.

 80 102  86  94  86 106 110 127  97 105
110 104  97 128  98  84  97  87  99  94
105 104  84  77 125  85  80 104 103 109
115  89 100  96  96  87 100 102  93 106

MTB > Graph > Probability Plot

Add percentile lines at Y values 50 and 84 to estimate p₅₀ and p₈₄.


Probability Plot of Time
Normal - 95% CI
99
Mean 98.78
95 StDev 12.27
90
N 40
80
84 AD 0.463
P-Value 0.243
70

Percent
60
50 50
40
30
20

10

110.98
5

98.78
1
60 70 80 90 100 110 120 130 140
Time

Ignoring the three outliers in the upper right portion of the graph, the normality assumption is reasonable.

a. Estimate the capability of the utility’s process for responding to customer-reported outages.

x̄ ≈ p₅₀ = 98.78, p₈₄ = 110.98

σ̂ = p₈₄ − p₅₀ = 110.98 − 98.78 = 12.2

6σ̂ = 6(12.2) = 73.2
b. The utility wants to achieve a 90% response rate in under two hours, as response to emergency outages is
an important measure of customer satisfaction. What is the capability of the process with respect to this
objective?
USL = 2 hrs = 120 min

Ĉpu = (USL − μ̂)/(3σ̂) = (120 − 98.78)/(3 × 12.2) = 0.58

p̂ = P(X > USL) = 1 − P(Z < (USL − μ̂)/σ̂) = 1 − P(Z < (120 − 98.78)/12.2)
  = 1 − P(Z < 1.739) = 1 − 0.958983 = 0.0410

5.50. The failure time in hours of 10 memory devices follows: 1210, 1275, 1400, 1695, 1900, 2105, 2230, 2250,
2500, and 2625. Plot the data on normal probability paper and, if appropriate, estimate the process
capability. Is it safe to estimate the proportion of circuits that fail below 1200 h?

MTB > Graph > Probability Plot

Add percentile lines at Y values 50 and 84 to estimate p₅₀ and p₈₄.


[Normal probability plot of failure time (95% CI) with percentile lines at 50% and 84%: mean 1919, StDev 507.1, N 10, AD 0.272, p-value 0.587]

A normal probability plot of failure time shows the distribution is approximately normal.

x̄ ≈ p₅₀ = 1919, p₈₄ = 2423

σ̂ = p₈₄ − p₅₀ = 2423 − 1919 = 504

6σ̂ = 6(504) = 3024
To use the capability estimate to predict the proportion of circuits that fail below 1200 h, the process must be in statistical control. As the following I-MR control charts show, the process is not in statistical control, so it is not safe to make that estimate.

[I-MR chart of failure time (Minitab output): individuals chart UCL = 2337, CL = 1919, LCL = 1501, with several points beyond both limits; MR chart UCL = 513.7, CL = 157.2, LCL = 0]
CHAPTER 6

Control Charts for Attributes


Learning Objectives

After completing this chapter you should be able to:


1. Understand the statistical basis of attributes control charts
2. Know how to design attributes control charts
3. Know how to set up and use the p chart for fraction nonconforming
4. Know how to set up and use the np control chart for the number of nonconforming items
5. Know how to set up and use the c control chart for defects
6. Know how to set up and use the u control chart for defects per unit
7. Use attributes control charts with variable sample sizes
8. Understand the advantages and disadvantages of attributes versus variables control charts
9. Understand the rational subgroup concept for attributes control charts

Important Terms and Concepts


Cause-and-effect diagram
Choice between attributes and variables data
Control chart for defects or nonconformities per unit or u chart
Control chart for fraction nonconforming or p chart
Control chart for nonconformities or c chart
Control chart for number nonconforming or np chart
Defect
Design of attributes control charts
Fraction defective
Fraction nonconforming
Nonconformity
Variable sample size for attributes control chart

Exercises

6.1. The data in Table 6E.1 give the number of nonconforming bearing and seal assemblies in samples of size
100. Construct a fraction nonconforming control chart for these data. If any points plot out of control,
assume that assignable causes can be found and determine the revised control limits.

Sample Number   Number of Nonconforming Assemblies   Sample Number   Number of Nonconforming Assemblies
1 7 11 6
2 4 12 15
3 1 13 0
4 3 14 9
5 6 15 5
6 8 16 1
7 10 17 4
8 5 18 5
9 2 19 7
10 7 20 12

n = 100, m = 20, ΣDᵢ = 117, p̄ = ΣDᵢ/(mn) = 117/(20 × 100) = 0.0585

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.0585 + 3√(0.0585(1−0.0585)/100) = 0.1289
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.0585 − 0.0704 ⇒ 0

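A minimal Python sketch of the p-chart limit calculation, with the counts from Table 6E.1:

from math import sqrt

D = [7, 4, 1, 3, 6, 8, 10, 5, 2, 7, 6, 15, 0, 9, 5, 1, 4, 5, 7, 12]

def p_chart_limits(counts, n):
    pbar = sum(counts) / (len(counts) * n)
    w = 3 * sqrt(pbar * (1 - pbar) / n)
    return max(0.0, pbar - w), pbar, pbar + w

print(p_chart_limits(D, 100))                                        # (0, 0.0585, 0.1289)
print(p_chart_limits([d for i, d in enumerate(D) if i != 11], 100))  # revised, sample 12 removed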
MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming assemblies (Minitab output): UCL = 0.1289, CL = 0.0585, LCL = 0; sample 12 plots above the UCL]

Test Results for P Chart of Nonconforming Assemblies

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 12

Sample 12 is out-of-control, so remove from control limit calculation:

n = 100, m = 19, ΣDᵢ = 102, p̄ = ΣDᵢ/(mn) = 102/(19 × 100) = 0.0537

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.0537 + 3√(0.0537(1−0.0537)/100) = 0.1213
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.0537 − 0.0676 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > P
[p chart of nonconforming assemblies with revised limits (Minitab output); sample 12 is excluded from the calculations but still plots above the UCL]

Test Results for P Chart of Nonconforming Assemblies

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 12

p chart: UCL = 0.1213, CL = 0.0537, LCL = 0


6.2. The number of nonconforming switches in samples of size 150 is shown in Table 6E.2. Construct a fraction
nonconforming control chart for these data. Does the process appear to be in control? If not, assume that
assignable cause can be found for all points outside the control limits, and calculate the revised control
limits.

Sample Number   Number of Nonconforming Switches   Sample Number   Number of Nonconforming Switches
1 8 11 6
2 1 12 0
3 3 13 4
4 0 14 0
5 2 15 3
6 4 16 1
7 0 17 15
8 1 18 2
9 10 19 3
10 6 20 0

n = 150, m = 20, ΣDᵢ = 69, p̄ = ΣDᵢ/(mn) = 69/(20 × 150) = 0.023

UCL = p̄ + 3√(p̄(1−p̄)/n) = 0.023 + 3√(0.023(1−0.023)/150) = 0.0597
LCL = p̄ − 3√(p̄(1−p̄)/n) = 0.023 − 0.0367 ⇒ 0

[p chart of nonconforming switches (Minitab output): UCL = 0.0597, CL = 0.023, LCL = 0; samples 9 and 17 plot above the UCL]

Test Results for P Chart of Nonconforming Switches

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 9, 17

Samples 9 and 17 are out-of-control, so remove from control limit calculation:

n = 150, m = 18, ΣDᵢ = 44, p̄ = ΣDᵢ/(mn) = 44/(18 × 150) = 0.01630

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.01630 + 3√(0.01630(1−0.01630)/150) = 0.0473
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.01630 − 0.0310 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming switches with revised limits (Minitab output); sample 1 now plots above the UCL]
Test Results for P Chart of Nonconforming Switches

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 1, 9, 17

Sample 1 is now out of control, assuming an assignable cause is identified, remove it from control limit
calculations:

n = 150, m = 17, ΣDᵢ = 36, p̄ = ΣDᵢ/(mn) = 36/(17 × 150) = 0.01412

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.01412 + 3√(0.01412(1−0.01412)/150) = 0.0430
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.01412 − 0.0289 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming switches with twice-revised limits (Minitab output)]

p chart: UCL = 0.0430, CL = 0.0141, LCL = 0


6.3. The data in Table 6E.3 represent the results of inspecting all units of a personal computer produced for the
past 10 days. Does the process appear to be in control?

Day   Inspected Units   Nonconforming Units   Fraction Nonconforming
1 80 4 0.05
2 110 7 0.064
3 90 5 0.056
4 75 8 0.107
5 130 6 0.046
6 120 6 0.05
7 70 4 0.057
8 125 5 0.04
9 105 8 0.076
10 95 7 0.074

Σnᵢ = 1,000, m = 10, ΣDᵢ = 60, p̄ = ΣDᵢ/Σnᵢ = 60/1,000 = 0.06

UCLᵢ = p̄ + 3√(p̄(1−p̄)/nᵢ) = 0.06 + 3√(0.06(1−0.06)/nᵢ)
LCLᵢ = max{0, p̄ − 3√(p̄(1−p̄)/nᵢ)} = max{0, 0.06 − 3√(0.06(1−0.06)/nᵢ)}

As an example, for n = 80:

UCL₁ = 0.06 + 3√(0.06(1−0.06)/80) = 0.1397
LCL₁ = max{0, 0.06 − 3√(0.06(1−0.06)/80)} = 0
MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming PC units with variable-width limits (Minitab output); tests performed with unequal sample sizes]

p chart (sample m = 10): UCL = 0.1331, CL = 0.06, LCL = 0

The process appears to be in statistical control.

6.4. A process that produces titanium forgings for automobile turbocharger wheels is to be controlled through
the use of a fraction nonconforming chart. Initially, one sample of size 150 is taken each day for 20 days,
and the results shown in Table 6E.4 are observed.

Day   Nonconforming Units   Day   Nonconforming Units
1 3 11 2
2 2 12 4
3 4 13 1
4 2 14 3
5 5 15 6
6 2 16 0
7 1 17 1
8 2 18 2
9 0 19 3
10 5 20 2

a. Establish a control chart to monitor future production.


n = 150, m = 20, ΣDᵢ = 50, p̄ = ΣDᵢ/(mn) = 50/(20 × 150) = 0.01667

UCL = p̄ + 3√(p̄(1−p̄)/n) = 0.01667 + 3√(0.01667(1−0.01667)/150) = 0.04802
CL = p̄ = 0.01667
LCL = p̄ − 3√(p̄(1−p̄)/n) = 0.01667 − 0.0314 = −0.0147 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming titanium forgings (Minitab output): UCL = 0.04802, CL = 0.01667, LCL = 0]

The process appears to be in statistical control, so we can use these control limits to monitor future
production.

b. What is the smallest sample size that could be used for this process and still give a positive lower
control limit on the chart?

To get a positive lower control limit, we require:

LCL = p̄ − 3√(p̄(1−p̄)/n) > 0
⇒ n > 9(1 − p̄)/p̄ = 9(1 − 0.01667)/0.01667 = 530.9

n=531 is the smallest sample size that could be used to give a positive lower control limit.
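A minimal Python sketch of this bound:

from math import floor

def min_n_positive_lcl(pbar, L=3):
    # LCL > 0 requires n > L^2 (1 - pbar) / pbar.
    return floor(L**2 * (1 - pbar) / pbar) + 1

print(min_n_positive_lcl(0.01667))   # 531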
6.5. A process produces rubber belts in lots of size 2,500. Inspection records on the last 20 lots reveal the data
in Table 6E.5.

Lot Number   Number of Nonconforming Belts   Lot Number   Number of Nonconforming Belts
1 230 11 456
2 435 12 394
3 221 13 285
4 346 14 331
5 230 15 198
6 327 16 414
7 285 17 131
8 311 18 269
9 342 19 221
10 308 20 407

a. Compute trial control limits for a fraction nonconforming control chart.

n = 2500, m = 20, ΣDᵢ = 6141, p̄ = ΣDᵢ/(mn) = 6141/(20 × 2500) = 0.1228

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.1228 + 3√(0.1228(1−0.1228)/2500) = 0.1425
CL = p̄ = 0.1228
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.1228 − 3√(0.1228(1−0.1228)/2500) = 0.1031

MTB > Stat > Control Charts > Attributes Charts > P
[p chart of nonconforming belts (Minitab output): UCL = 0.1425, CL = 0.1228, LCL = 0.1031; many points plot outside both limits]

Test Results for P Chart of Belt Inspection

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 1, 2, 3, 5, 11, 12, 15, 16, 17, 19, 20

p chart: UCL = 0.1425, CL = 0.1228, LCL = 0.1031


b. If you wanted to set up a control chart for controlling future production, how would you use these data to
obtain the center line and control limits for the chart?

Since many subgroups are out of control (11 of 20), the data should not be used to establish control limits
for future production. Instead, the process should be investigated for causes of the wild swings in p.

6.6. Based on the data in Table 6E.6 if an np chart is to be established, what would you recommend as the
center line and control limits? Assume that n=500.

Day   Number of Nonconforming Units
1 3
2 4
3 3
4 2
5 6
6 12
7 5
8 1
9 2
10 2

n = 500, m = 10, ΣDᵢ = 40, p̄ = ΣDᵢ/(mn) = 40/(10 × 500) = 0.008

UCL = np̄ + 3√(np̄(1−p̄)) = 500(0.008) + 3√(500(0.008)(1−0.008)) = 9.98
CL = np̄ = 500(0.008) = 4
LCL = np̄ − 3√(np̄(1−p̄)) = 4 − 5.98 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > NP

NP Chart of Nonconforming Units


1
12

10 UCL=9.98
Sample Count
8

__
4 NP=4

0 LCL=0
1 2 3 4 5 6 7 8 9 10
Sample

Test Results for NP Chart of Nonconforming Units

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 6

Sample 6 is out of control. Assume an assignable cause can be found and remove it from control limit
calculations:

n = 500, m = 9, ΣDᵢ = 28, p̄ = ΣDᵢ/(mn) = 28/(9 × 500) = 0.006222

UCL = np̄ + 3√(np̄(1−p̄)) = 500(0.006222) + 3√(500(0.006222)(1−0.006222)) = 8.39
CL = np̄ = 500(0.006222) = 3.11
LCL = np̄ − 3√(np̄(1−p̄)) = 3.11 − 5.27 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > NP

[np chart of nonconforming units with revised limits (Minitab output): UCL = 8.39, CL = 3.11, LCL = 0]
All other sample points remain in control, so use the following control limits for future monitoring:

UCL = 8.39, CL = 3.11, LCL = 0


6.7. A company purchases a small metal bracket in containers of 5,000 each. Ten containers have arrived at the
unloading facility, and 250 brackets are selected at random from each container. The fraction
nonconforming in each sample are 0, 0, 0, 0.004, 0.008, 0.02, 0.004, 0, 0, and 0.008. Do the data from this
shipment indicate statistical control?

m = 10, n = 250, p̄ = Σp̂ᵢ/m = 0.044/10 = 0.0044

UCL = p̄ + 3√(p̄(1−p̄)/n) = 0.0044 + 3√(0.0044(1−0.0044)/250) = 0.01696
CL = p̄ = 0.0044
LCL = p̄ − 3√(p̄(1−p̄)/n) = 0.0044 − 0.01256 ⇒ 0

Since p̂₆ = 0.02 > UCL = 0.01696, sample 6 is out of control. The shipment is not in statistical control.

MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming metal brackets (Minitab output): UCL = 0.01696, CL = 0.0044, LCL = 0; sample 6 plots above the UCL]

6.8. A control chart for the fraction nonconforming is to be established using a center line of p=0.10. What
sample size is required if we wish to detect a shift in the process fraction nonconforming to 0.20 with
probability 0.50?

p = 0.10, p_new = 0.20

Desired: P(detect) = 0.50. Assume L = 3 sigma control limits.

δ = p_new − p = 0.20 − 0.10 = 0.10

n = (L/δ)² p(1−p) = (3/0.10)² (0.10)(1−0.10) = 81
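A minimal Python sketch of this sample-size formula; with 3-sigma limits, the shift is detected with probability 0.5 when the shifted mean falls exactly on the control limit.

def n_for_half_detection(p, p_new, L=3):
    delta = p_new - p
    return (L / delta) ** 2 * p * (1 - p)

print(n_for_half_detection(0.10, 0.20))   # 81.0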

6.9. A maintenance group improves the effectiveness of its repair work by monitoring the number of
maintenance requests that require a second call to complete the repair. Twenty weeks of data are shown in
Table 6E.7.

Sample Number   Sample Size   Number Nonconforming
1 100 10
2 100 15
3 100 31
4 100 18
5 100 24
6 100 12
7 100 23
8 100 15
9 100 8
10 100 8

a. Find trial control limits for this process.

n = 100, m = 10, ΣDᵢ = 164, p̄ = ΣDᵢ/(mn) = 164/(10 × 100) = 0.164

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.164 + 3√(0.164(1−0.164)/100) = 0.2751
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.164 − 3√(0.164(1−0.164)/100) = 0.0529

[p chart of nonconforming maintenance requests (Minitab output): UCL = 0.2751, CL = 0.164, LCL = 0.0529; sample 3 plots above the UCL]

Sample 3 is out of control. Let us assume there are assignable causes for sample 3, and it can be removed.
n = 100, m = 9, ΣDᵢ = 133, p̄ = ΣDᵢ/(mn) = 133/(9 × 100) = 0.1478

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.1478 + 3√(0.1478(1−0.1478)/100) = 0.2542
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.1478 − 3√(0.1478(1−0.1478)/100) = 0.0413

[p chart of nonconforming maintenance requests with revised limits (Minitab output): UCL = 0.2542, CL = 0.1478, LCL = 0.0413]

The revised control limits for the p chart are: UCL = 0.2542, CL = 0.1478, LCL = 0.0413

b. Design a control chart for controlling future production.

There are two approaches for controlling future production. The first approach is to plot p̂ᵢ against the revised limits in (a) unless a sample has a different size; in those cases, calculate the exact control limits from p̄ ± 3√(p̄(1−p̄)/nᵢ) = 0.1640 ± 3√(0.1640(1−0.1640)/nᵢ). The second approach, preferred in many cases, is to construct standardized control limits at ±3 and plot Zᵢ = (p̂ᵢ − 0.1640)/√(0.1640(1−0.1640)/nᵢ).

6.10. Why is the np chart not appropriate with variable sample sizes?

When sample sizes vary, an np chart can be misleading because the plotted count does not reflect the relative magnitude of nonconforming items within each sample. For example, suppose np̂ᵢ = 5 and np̂ᵢ₊₁ = 7. Looking only at the counts, there does not seem to be much difference between the two samples. But if the sample sizes are nᵢ = 100 and nᵢ₊₁ = 200, the fractions nonconforming are 0.05 and 0.035, and sample i actually has the worse quality.

6.11. A process that produces bearing housings is controlled with a fraction nonconforming control chart, using
sample size n=100 and a center line p=0.02.

a. Find the three-sigma limits for this chart.

n = 100, p̄ = 0.02, L = 3

UCL_p = p̄ + 3√(p̄(1−p̄)/n) = 0.02 + 3√(0.02(1−0.02)/100) = 0.0620
CL_p = p̄ = 0.02
LCL_p = p̄ − 3√(p̄(1−p̄)/n) = 0.02 − 0.042 = −0.0220 ⇒ 0

p chart: UCL = 0.0620, CL = 0.02, LCL = 0

b. Analyze the ten new samples ( n=100 ) shown in Table 6E.8 for statistical control. What conclusions can
you draw about the process now?

Sample Number   Number Nonconforming   Sample Number   Number Nonconforming
1 5 6 1
2 2 7 2
3 3 8 6
4 8 9 3
5 4 10 4

MTB > Stat > Control Charts > Attributes Charts > P

[p chart of nonconforming bearing housings (Minitab output): UCL = 0.062, CL = 0.02, LCL = 0; sample 4 plots above the UCL]

Test Results for P Chart of Nonconforming Bearing Housings

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 4

p chart: UCL = 0.0620, CL = 0.02, LCL = 0

Sample 4 exceeds the upper control limit, signaling a potentially unstable process (the ten new samples average p̄ = 0.038, with σ̂_p = 0.0191).

6.12. Consider the fraction nonconforming control chart in Exercise 6.4. Find the equivalent np chart.
n = 150, m = 20, ΣDᵢ = 50, p̄ = ΣDᵢ/(mn) = 50/(20 × 150) = 0.01667

UCL = np̄ + 3√(np̄(1−p̄)) = 2.5005 + 3√(150(0.01667)(1−0.01667)) = 7.204
CL_np = np̄ = 2.5005
LCL = np̄ − 3√(np̄(1−p̄)) = 2.5005 − 4.704 = −2.204 ⇒ 0

MTB > Stat > Control Charts > Attributes Charts > NP

[np chart of nonconforming titanium forgings (Minitab output)]

np chart: UCL = 7.204, CL = 2.5005, LCL = 0


The process is in control; results are the same as for the p chart.

6.13. Consider the fraction nonconforming control chart in Exercise 6.5. Find the equivalent np chart.

n = 2500, m = 20, ΣDᵢ = 6141, p̄ = ΣDᵢ/(mn) = 6141/(20 × 2500) = 0.12282

UCL = np̄ + 3√(np̄(1−p̄)) = 2500(0.12282) + 3√(2500(0.12282)(1−0.12282)) = 356.3
CL = np̄ = 2500(0.12282) = 307.05
LCL = np̄ − 3√(np̄(1−p̄)) = 307.05 − 49.2 = 257.8

MTB > Stat > Control Charts > Attributes Charts > NP
[np chart of nonconforming belts (Minitab output); many points plot outside both limits]

np chart: UCL = 356.3, CL = 307.1, LCL = 257.8

The process is not in control; results are the same as for the p chart.

6.14. Surface defects have been counted on 25 rectangular steel plates, and the data are shown in Table 6E.9. Set
up a control chart for nonconformities using these data. Does the process producing the plates appear to be
in statistical control?

Plate Number of Plate Number of


Number Nonconformities Number Nonconformities
1 1 14 0
2 0 15 2
3 4 16 1
4 3 17 3
5 1 18 5
6 2 19 4
7 5 20 6
8 0 21 3
9 2 22 1
10 1 23 0
11 1 24 2
12 0 25 4
13 8

n = 25, c̄ = Σcᵢ/n = 59/25 = 2.36

UCL = c̄ + 3√c̄ = 2.36 + 3√2.36 = 6.969
CL = c̄ = 2.36
LCL = c̄ − 3√c̄ = 2.36 − 4.609 = −2.249 ⇒ 0


MTB > Stat > Control Charts > Attributes Charts > C
[c chart of surface defects on steel plates (Minitab output): UCL = 6.969, CL = 2.36, LCL = 0; sample 13 plots above the UCL]

The process is not in statistical control; there is an out of control point at sample 13.

6.15. A paper mill uses a control chart to monitor the imperfection in finished rolls of paper. Production output
is inspected for 20 days, and the resulting data are shown in Table 6E.10. Use these data to set up a control
chart for nonconformities per roll of paper. Does the process appear to be in statistical control? What center
line and control limits would you recommend for controlling current production?

Day   Number of Rolls Produced   Total Number of Imperfections   Day   Number of Rolls Produced   Total Number of Imperfections
1 18 12 11 18 18
2 18 14 12 18 14
3 24 20 13 18 9
4 22 18 14 20 10
5 22 15 15 20 14
6 22 12 16 20 13
7 20 11 17 24 16
8 20 15 18 24 18
9 20 12 19 22 20
10 20 10 20 21 17

ū = Σcᵢ/Σnᵢ = 288/411 = 0.7007

UCLᵢ = ū + 3√(ū/nᵢ) = 0.7007 + 3√(0.7007/nᵢ)
CL = ū = 0.7007
LCLᵢ = ū − 3√(ū/nᵢ) = 0.7007 − 3√(0.7007/nᵢ)

For example, control limits for the following sample sizes are:
n UCL CL LCL
18 1.2926 0.7007 0.1088
20 1.2622 0.7007 0.1392
22 1.2361 0.7007 0.1653
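A minimal Python sketch of these variable-width limits, with the data from Table 6E.10:

from math import sqrt

n = [18, 18, 24, 22, 22, 22, 20, 20, 20, 20, 18, 18, 18, 20, 20, 20, 24, 24, 22, 21]
c = [12, 14, 20, 18, 15, 12, 11, 15, 12, 10, 18, 14, 9, 10, 14, 13, 16, 18, 20, 17]

ubar = sum(c) / sum(n)                       # 288/411 = 0.7007
for day, (ci, ni) in enumerate(zip(c, n), start=1):
    w = 3 * sqrt(ubar / ni)
    print(f"day {day:2d}: u={ci/ni:.3f}  LCL={max(0, ubar - w):.4f}  UCL={ubar + w:.4f}")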

MTB > Stat > Control Charts > Attributes Charts > U

[u chart of imperfections in paper rolls with variable-width limits (Minitab output); tests performed with unequal sample sizes]

The process appears to be in statistical control.

6.16. Consider the papermaking process in Exercise 6.15. Set up a u chart based on an average sample size to
control this process.

n̄ = Σnᵢ/m = 411/20 = 20.55

ū = Σcᵢ/Σnᵢ = 288/411 = 0.7007

UCL = ū + 3√(ū/n̄) = 0.7007 + 3√(0.7007/20.55) = 1.255
CL = ū = 0.7007
LCL = ū − 3√(ū/n̄) = 0.7007 − 3√(0.7007/20.55) = 0.147


MTB > Stat > Control Charts > Attributes Charts > U
[u chart of imperfections in paper rolls based on average sample size (Minitab output)]

u chart: UCL = 1.255, CL = 0.701, LCL = 0.147


The process appears to be in statistical control.

6.17. The number of nonconformities found on final inspection of a tape deck is shown in Table 6E.11. Can you
conclude that the process is in statistical control? What center line and control limits would you
recommend for controlling future production?

Deck Number of Deck Number of


Number Nonconformities Number Nonconformities
2412 0 2421 1
2413 1 2422 0
2414 1 2423 3
2415 0 2424 2
2416 2 2425 5
2417 1 2426 1
2418 1 2427 2
2419 3 2428 1
2420 2 2429 1

n = 18, c̄ = Σcᵢ/n = 27/18 = 1.5

UCL = c̄ + 3√c̄ = 1.5 + 3√1.5 = 5.174
CL = c̄ = 1.5
LCL = c̄ − 3√c̄ = 1.5 − 3.674 = −2.174 ⇒ 0
[c chart of nonconformities in tape decks (Minitab output): UCL = 5.174, CL = 1.5, LCL = 0]

The process appears to be in statistical control.

UCL = 5.174, CL = 1.5, LCL = 0


6.18. The data in Table 6E.12 represent the number of nonconformities per 1,000 meters of telephone cable.
From an analysis of these data, would you conclude that the process is in statistical control? What control
limits would you recommend for future production?

Sample Number of Sample Number of


Number Nonconformities Number Nonconformities
1 1 12 6
2 1 13 9
3 3 14 11
4 7 15 15
5 8 16 8
6 10 17 3
7 5 18 6
8 13 19 7
9 0 20 4
10 19 21 9
11 24 22 20

n = 22, c̄ = Σcᵢ/n = 189/22 = 8.591

UCL = c̄ + 3√c̄ = 8.591 + 3√8.591 = 17.38
CL = c̄ = 8.591
LCL = c̄ − 3√c̄ = 8.591 − 8.794 = −0.203 ⇒ 0


MTB > Stat > Control Charts > Attributes Charts > C
[c chart of nonconformities in telephone cables (Minitab output): UCL = 17.38, CL = 8.59, LCL = 0; samples 10, 11, and 22 plot above the UCL]

Test Results for C Chart of Nonconformities in Telephone Cables

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 10, 11, 22

Process is not in statistical control; three subgroups exceed the UCL. Exclude subgroups 10, 11 and 22,
then re-calculate the control limits.

[c chart with samples 10, 11, and 22 excluded (Minitab output): UCL = 14.36, CL = 6.63, LCL = 0; sample 15 now plots above the UCL]

Subgroup 15 is now out of control, so exclude it as well and update control limits.

n = 18, c̄ = Σcᵢ/n = 111/18 = 6.167

UCL = c̄ + 3√c̄ = 6.167 + 3√6.167 = 13.62
CL = c̄ = 6.167
LCL = c̄ − 3√c̄ = 6.167 − 7.45 = −1.28 ⇒ 0


[c chart with samples 10, 11, 15, and 22 excluded (Minitab output)]

Revised c chart: UCL = 13.62, CL = 6.17, LCL = 0

Test Results for C Chart of Nonconformities for Telephone Cables

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 10, 11, 15, 22

6.19. Consider the data in Exercise 6.17. Suppose we wish to define a new inspection unit of four tape decks.

a. What are the center line and control limits for a control chart for monitoring future production based on the
total number of defects in the new inspection unit?

The new inspection unit is n=4 of the old unit. A c chart of the total number of nonconformities per
inspection unit is appropriate.

From Exercise 6.17, c̄ = 1.5.

UCL = nc̄ + 3√(nc̄) = 6 + 3√6 = 13.348
CL = nc̄ = 4(1.5) = 6
LCL = nc̄ − 3√(nc̄) = 6 − 3√6 = −1.348 ⇒ 0

c chart: UCL = 13.348, CL = 6, LCL = 0

Note that the plot point, ĉ, is the total number of nonconformities found while inspecting a sample of four tape decks.

b. What are the center line and control limits for a control chart for nonconformities per unit used to monitor
future production?

The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.

ū = total nonconformities / number of inspection units = 27/(18 × 1/4) = 6

UCL = ū + 3√(ū/n) = 6 + 3√(6/1) = 13.348
CL = ū = 6
LCL = ū − 3√(ū/n) = 6 − 3√(6/1) = −1.348 ⇒ 0

The plot point, û, is the average number of nonconformities found in four tape decks, and since n = 1, this is the same as the total number of nonconformities.

u chart: UCL = 13.348, CL = 6, LCL = 0
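A minimal Python sketch of rescaling a c chart to a new inspection-unit size; the count per new unit is Poisson with mean n·c̄:

from math import sqrt

def c_chart_new_unit(cbar, n):
    mean = n * cbar
    return max(0.0, mean - 3 * sqrt(mean)), mean, mean + 3 * sqrt(mean)

print(c_chart_new_unit(1.5, 4))   # (0, 6, 13.348)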


6.20. Consider the data in Exercise 6.18. Suppose a new inspection unit is defined as 2,500 meters of wire.

a. What are the center line and control limits for a control chart for monitoring future production based on the
total number of nonconformities in the new inspection unit?

The new inspection unit is n=2500/1000=2.5 of the old unit. A c chart of the total number of
nonconformities per inspection unit is appropriate.

From Exercise 6.18, c̄ = 6.17.

UCL = nc̄ + 3√(nc̄) = 15.43 + 3√15.43 = 27.21
CL = nc̄ = 2.5(6.17) = 15.43
LCL = nc̄ − 3√(nc̄) = 15.43 − 3√15.43 = 3.65

c chart: UCL = 27.21, CL = 15.43, LCL = 3.65

Note that the plot point, ĉ, is the total number of nonconformities found while inspecting a sample 2,500 m in length.

b. What are the center line and control limits for a control chart for average nonconformities per unit used to
monitor future production?

The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.

ū = total nonconformities / total inspection units = 111/((18 × 1000)/2500) = 15.42

UCL = ū + 3√(ū/n) = 15.42 + 3√(15.42/1) = 27.20
CL = ū = 15.42
LCL = ū − 3√(ū/n) = 15.42 − 3√(15.42/1) = 3.64

The plot point, û, is the average number of nonconformities found in 2500 m of wire, and since n = 1, this is the same as the total number of nonconformities.

u chart: UCL = 27.20, CL = 15.42, LCL = 3.64


6.21. An automobile manufacturer wishes to control the number of nonconformities in a subassembly area
producing manual transmissions. The inspection unit is defined as four transmissions, and data from 16
samples (each of size 4) are shown in Table 6E.13.

Sample Number of Sample Number of


Number Nonconformities Number Nonconformities
1 1 9 2
2 3 10 1
3 2 11 0
4 1 12 2
5 0 13 1
6 2 14 1
7 1 15 2
8 5 16 3

a. Set up a control chart for nonconformities per unit.

n = 4, m = 16, ū = Σcᵢ/(nm) = 27/(4 × 16) = 0.422

UCL = ū + 3√(ū/n) = 0.422 + 3√(0.422/4) = 1.396
CL = ū = 0.422
LCL = ū − 3√(ū/n) = 0.422 − 3√(0.422/4) = −0.552 ⇒ 0


MTB > Stat > Control Charts > Attributes Charts > U

[u chart of nonconformities in manual transmissions (Minitab output)]

u chart: UCL = 1.396, CL = 0.422, LCL = 0


b. Do these data come from a controlled process? If not, assume that assignable causes can be found for all
out-of-control points and calculate the revised control chart parameters.

There are no out-of-control points; the process appears to be in statistical control.

c. Suppose the inspection unit is redefined as eight transmissions. Design an appropriate control chart for
monitoring future production.
The sample is n=1 new inspection units. A u chart of average nonconformities per inspection unit is
appropriate.

ū = total nonconformities / total inspection units = 27/((16 × 4)/8) = 3.375

UCL = ū + 3√(ū/n) = 3.375 + 3√(3.375/1) = 8.886
CL = ū = 3.375
LCL = ū − 3√(ū/n) = 3.375 − 3√(3.375/1) = −2.136 ⇒ 0

The plot point, û, is the average number of nonconformities found in eight transmissions, and since n = 1, this is the same as the total number of nonconformities.

u chart: UCL = 8.886, CL = 3.375, LCL = 0


6.22. The number of workmanship nonconformities observed in the final inspection of disk-drive assemblies has
been tabulated as shown in Table 6E.14. Does the process appear to be in control?

Day   Number of Assemblies Inspected   Total Number of Imperfections
 1                2                              10
 2                4                              30
 3                2                              18
 4                1                              10
 5                3                              20
 6                4                              24
 7                2                              15
 8                4                              26
 9                3                              21
10                1                               8

u chart with control limits based on each sample size:

ū = total nonconformities / total inspection units = 182/26 = 7

UCLᵢ = ū + 3√(ū/nᵢ) = 7 + 3√(7/nᵢ)
CL = ū = 7
LCLᵢ = ū − 3√(ū/nᵢ) = 7 − 3√(7/nᵢ)

For example, control limits for the following sample sizes are:

n UCL CL LCL
1 14.937 7 0
2 12.612 7 1.388
3 11.583 7 2.417
4 10.969 7 3.031

MTB > Stat > Control Charts > Attributes Charts > U

[u chart of imperfections in disk-drive assemblies with variable-width limits (Minitab output); tests performed with unequal sample sizes]

The process is in statistical control.

6.23. The manufacturer wishes to set up a control chart at the final inspection station for a gas water heater.
Defects in workmanship and visual quality features are checked in this inspection. For the past 22 working
days, 176 water heaters were inspected and a total of 924 nonconformities reported.

a. What type of control chart would you recommend here and how would you use it?

Given that one water heater is the inspection unit, we can use a c chart to monitor the number of nonconformities per water heater.

c̄ = 924/176 = 5.25

UCL = c̄ + 3√c̄ = 5.25 + 3√5.25 = 12.124
CL = c̄ = 5.25
LCL = c̄ − 3√c̄ = 5.25 − 3√5.25 = −1.624 ⇒ 0

c chart: UCL = 12.124, CL = 5.25, LCL = 0


b. Using two water heaters as the inspection unit, calculate the center line and control limits that are consistent
with the past 22 days of inspection data.

The new inspection unit is n = 2/1 = 2 of the old unit. A c chart of the total number of nonconformities per inspection unit is appropriate.

From part (a), c̄ = 5.25.

UCL = nc̄ + 3√(nc̄) = 2(5.25) + 3√(2 × 5.25) = 20.221
CL = nc̄ = 2(5.25) = 10.50
LCL = nc̄ − 3√(nc̄) = 2(5.25) − 3√(2 × 5.25) = 0.779

c chart: UCL = 20.221, CL = 10.50, LCL = 0.779


6.24. Assembled portable television sets are subjected to a final inspection for surface defects. A total procedure
is established based on the requirement that the average number of nonconformities per unit be 4.0 units.
What is the appropriate type of control chart?

Average number of nonconformities per unit: ū = 4.0, so a c chart with c̄ = 4.0 (inspection unit = one television set) is appropriate.

Desire 1 − α = 0.99. Use the cumulative Poisson distribution to determine the UCL.

In Excel, the cumulative probabilities P(X ≤ x) for X ~ Poisson(4) are given by =POISSON(x, 4, TRUE):

x     P(X ≤ x)
0     0.02
1     0.09
2     0.24
3     0.43
4     0.63
5     0.79
6     0.89
7     0.95
8     0.98
9     0.99
10    1.00
11    1.00
11 1.00

A UCL = 9 will give a probability of 0.99 of concluding the process is in control when it is, in fact, in control.
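The same search is quick in Python (a sketch using scipy's Poisson CDF rather than Excel):

from scipy.stats import poisson

ucl = 0
while poisson.cdf(ucl, mu=4) < 0.99:
    ucl += 1
print(ucl, poisson.cdf(ucl, mu=4))   # 9, ~0.992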

6.25. A control chart for nonconformities is to be established in conjunction with the final inspection of a radio.
The inspection unit is to be a group of ten radios. The average number of nonconformities per radio has, in
the past, been 0.5. Find three-sigma control limits for a c chart based on this size of inspection unit.

n = 10, c̄ = 0.5

UCL = nc̄ + 3√(nc̄) = 10(0.5) + 3√(10(0.5)) = 11.708
CL = nc̄ = 10(0.5) = 5
LCL = nc̄ − 3√(nc̄) = 5 − 3√5 = −1.708 ⇒ 0


6.26. A production line assembles electric clocks. The average number of nonconformities per clock is estimated
to be 0.75. The quality engineer wishes to establish a c chart for this operation using an inspection unit of
six clocks. Find the three-sigma limits for this chart.

n = 6, c̄ = 0.75

UCL = nc̄ + 3√(nc̄) = 6(0.75) + 3√(6(0.75)) = 10.864
CL = nc̄ = 6(0.75) = 4.5
LCL = nc̄ − 3√(nc̄) = 4.5 − 3√4.5 = −1.864 ⇒ 0


6.27. Kittlitz (1999) presents data on homicides in Waco, Texas, for the years 1980-1989 (data taken from the
Waco Tribune-Herald, December 29, 1989). There were 29 homicides in 1989. Table 6E.15 gives the dates
of the 1989 homicides and the number of days between each homicide. The asterisks refer to the fact that
two homicides occurred on June 16 and were determined to have occurred 12 hours apart.

Month   Date   Days Between   (Days Between)^0.2777   (Days Between)^0.25
Jan.     20       —                 —                       —
Feb.     23      34             2.662513                2.414736
Feb.     25       2             1.212261                1.189207
March     5       8             1.781509                1.681793
March    10       5             1.563522                1.495349
April     4      25             2.444601                2.236068
May       7      33             2.640531                2.396782
May      24      17             2.196320                2.030543
May      28       4             1.469576                1.414214
June      7      10             1.895396                1.778279
June     16*      9.25          1.854802                1.743956
June     16*      0.5           0.824905                0.840896
June     22       5.25          1.584850                1.513700
June     25       3             1.356740                1.316074
July      6      11             1.946233                1.821160
July      8       2             1.212261                1.189207
July      9       1             1.000000                1.000000
July     26      17             2.196320                2.030543
Sep.      9      45             2.878042                2.590020
Sep.     22      13             2.038647                1.898829
Sep.     24       2             1.212261                1.189207
Oct.      1       7             1.716658                1.626577
Oct.      4       3             1.356740                1.316074
Oct.      8       4             1.469576                1.414214
Oct.     19      11             1.946233                1.821160
Nov.      2      14             2.081037                1.934336
Nov.     25      23             2.388646                2.189939
Dec.     28      33             2.640531                2.396782
Dec.     29       1             1.000000                1.000000

b. Transform the data using the 0.2777 root of the data. Plot the transformed data on a normal probability plot.
Does this plot indicate that the transformation has been successful in making the new data more closely
resemble data from a normal distribution?

To transform the data:

MTB > Calc > Calculator

Expression : 'DaysBetween'^0.2777

MTB > Graph > Probability Plot


[Normal probability plot of (days between)^0.2777 (95% CI): mean 1.806, StDev 0.5635, N 28, AD 0.238, p-value 0.760]

The transformed data points now form a straight line in the normal probability plot; the transformation was
successful in making the new data more closely resemble data from a normal distribution.

c. Transform the data using the fourth root (0.25) of the data. Plot the transformed data on a normal
probability plot. Does this plot indicate that the transformation has been successful in making the new data
resemble more closely data from a normal distribution? Is the plot very different from the one in part (b)?

To transform the data:

MTB > Calc > Calculator

Expression : 'DaysBetween'^0.25

MTB > Graph > Probability Plot

[Normal probability plot of (days between)^0.25 (95% CI): mean 1.695, StDev 0.4789, N 28, AD 0.223, p-value 0.807]

The transformed data points again form a straight line on the normal probability plot; the transformation was
successful in making the new data more closely resemble data from a normal distribution. The resulting plot
is very similar to the one obtained in part (b).

d. Construct an individual control chart using the transformed data from part (b).

x̄ = 1.806, MR = 0.587, d₂ = 1.128

Individual control chart:

UCL = x̄ + 3(MR/d₂) = 1.806 + 3(0.587/1.128) = 3.367

CL = x̄ = 1.806

LCL = x̄ − 3(MR/d₂) = 1.806 − 3(0.587/1.128) = 0.245
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR

[I-MR chart of the transformed days between homicides]

I chart: UCL = 3.366, CL = 1.806, LCL = 0.246

MR chart: UCL = 1.917, CL = 0.587, LCL = 0
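A Python sketch (NumPy assumed) of the individuals-chart limit calculation in part (d); applied to the transformed data it reproduces the limits above:

import numpy as np

def individuals_limits(x, d2=1.128):
    # The average moving range of span two estimates sigma as MR-bar/d2.
    mr_bar = np.mean(np.abs(np.diff(x)))
    center = np.mean(x)
    width = 3 * mr_bar / d2
    return center - width, center, center + width

# With the part (b) data this returns roughly (0.245, 1.806, 3.367).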

e. Construct an individual control chart using the transformed data from part (c). How similar is it to the one
you constructed in part (d)?

x̄ = 1.695, MR = 0.500, d₂ = 1.128

Individual control chart:

UCL = x̄ + 3(MR/d₂) = 1.695 + 3(0.500/1.128) = 3.025

CL = x̄ = 1.695

LCL = x̄ − 3(MR/d₂) = 1.695 − 3(0.500/1.128) = 0.365
MTB > Stat > Control Charts > Variables Charts for Individuals > I-MR
[I-MR chart of the fourth-root transformed days between homicides]

I chart: UCL = 3.025, CL = 1.695, LCL = 0.365

MR chart: UCL = 1.634, CL = 0.500, LCL = 0

The control chart for this transformation leads to similar results as the transformation in part (b).

f. Is the process stable? Provide a practical interpretation of the control chart.

The process is in statistical control. In practical terms, the 1989 homicide rate in Waco was stable: the variation in the times between homicides is consistent with common causes only, and no point signals an unusual clustering of events.

6.28. Suggest at least two nonmanufacturing scenarios in which attributes control charts could be useful for
process monitoring.

Many possible solutions, for example:

Scenario 1: In health care, tracking the number of errors in entries to a patient’s health record.

Scenario 2: In the service industry, tracking the number of errors made in completing a transaction.

6.29. What practical difficulties could be encountered in monitoring the days-between-events data?

The events might occur rarely, so it may take a long time to collect enough data to monitor days-between-
events.

6.30. A paper by R. N. Rodriguez (“Health Care Applications of Statistical Process Control: Examples Using the
SAS® System,” SAS Users Group International: Proceedings of the 21st Annual Conference, 1996)
illustrated several informative applications of control charts to the health care environment. One of these
showed how a control chart was employed to analyze the rate of CAT scans performed each month at a
clinic. The data used in this example are shown in Table 6E.16. NSCANB is the number of CAT scans
performed each month and MMSB is the number of members enrolled in the health care plan each month,
in units of member months. “Days” is the number of days in each month. The variable NYRSB converts
MMSB to units of thousand members per year, and is computed as follows: NYRSB = MMSB(days/30)/12000.
NYRSB represents the “area of opportunity.”

Construct an appropriate control chart to monitor the rate at which CAT scans are performed at this clinic.

Month     NSCANB  MMSB   Days  NYRSB
Jan. 94 50 26838 31 2.31105
Feb. 94 44 26903 28 2.09246
Mar 94 71 26895 31 2.31596
Apr. 94 53 26289 30 2.19075
May 94 53 26149 31 2.25172
Jun. 94 40 26185 30 2.18208
July 94 41 26142 31 2.25112
Aug. 94 57 26092 31 2.24681
Sept. 94 49 25958 30 2.16317
Oct. 94 63 25957 31 2.23519
Nov. 94 64 25920 30 2.16
Dec. 94 62 25907 31 2.23088
Jan. 95 67 26754 31 2.30382
Feb. 95 58 26696 28 2.07636
Mar 95 89 26565 31 2.28754

The variable NYRSB can be thought of as an “inspection unit”, representing an identical “area of
opportunity” for each “sample”. The “process characteristic” to be controlled is the rate of CAT scans. A
u chart which monitors the average number of CAT scans per NYRSB is appropriate.

ū = (Σ uᵢ)/(Σ nᵢ) = 861/33.2989 = 25.857   (sums over i = 1, …, 15)

UCL = ū + 3√(ū/nᵢ) = 25.857 + 3√(25.857/nᵢ)

CL = ū = 25.857

LCL = ū − 3√(ū/nᵢ) = 25.857 − 3√(25.857/nᵢ)

MTB > Stat > Control Charts > Attributes Charts > U

[u chart of CAT scans per NYRSB (tests performed with unequal sample sizes): UCL = 35.94, CL = 25.86, LCL = 15.77, with limits varying by sample size; sample 15 plots above the UCL]

Test Results for U Chart of Scans

TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 15
u chart: UCL = 35.94, CL = 25.86, LCL = 15.77 (limits vary with nᵢ)

The rate of monthly CAT scans is out of control because sample 15 exceeds the UCL.
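A Python sketch (NumPy assumed) of the variable-sample-size u chart, taken straight from Table 6E.16:

import numpy as np

scans = np.array([50, 44, 71, 53, 53, 40, 41, 57, 49, 63, 64, 62, 67, 58, 89])
nyrsb = np.array([2.31105, 2.09246, 2.31596, 2.19075, 2.25172, 2.18208,
                  2.25112, 2.24681, 2.16317, 2.23519, 2.16000, 2.23088,
                  2.30382, 2.07636, 2.28754])

u_bar = scans.sum() / nyrsb.sum()              # 861/33.2989 = 25.857
ucl = u_bar + 3 * np.sqrt(u_bar / nyrsb)       # limits vary with each n_i
lcl = u_bar - 3 * np.sqrt(u_bar / nyrsb)
u = scans / nyrsb
print(np.where((u > ucl) | (u < lcl))[0] + 1)  # flags sample 15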

6.31. A paper by R. N. Rodriguez (“Health Care Applications of Statistical Process Control: Examples Using the
SAS® System,”, SAS Users Group International: Proceedings of the 21st Annual Conference, 1996)
illustrated several informative applications of control charts to the health care environment. One of these
showed how a control chart was employed to analyze the number of office visits by health care plan
members. The data for clinic E are shown in table 6E.17. The variable NVISITE is the number of visits to
clinic E each month, and MMSE is the number of members enrolled in the health care plan each month, in
units of member months. “Days” is the number of days in each month. The variable NYRSE converts
MMSE to units of thousand members per year and is computed as follows:
NYRSE = MMSE(days/30)/12000. NYRSE represents the “area of opportunity.” The variable Phase
separates the data into two time periods.

Month Phase NVISITE NYRSE Days MMSE


Jan. 94 1 1421 0.66099 31 7676
Feb. 94 1 1303 0.59718 28 7678
Mar. 94 1 1569 0.66219 31 7690
Apr. 94 1 1576 0.64608 30 7753
May 94 1 1567 0.66779 31 7755
Jun. 94 1 1450 0.65575 30 7869
July 94 1 1532 0.68105 31 7909
Aug. 94 1 1694 0.68820 31 7992
Sep. 94 2 1721 0.66717 30 8006
Oct. 94 2 1762 0.69612 31 8084
Nov. 94 2 1853 0.68233 30 8188
Dec. 94 2 1770 0.70809 31 8223
Jan. 95 2 2024 0.78215 31 9083
Feb. 95 2 1975 0.70684 28 9088
Mar. 95 2 2097 0.78947 31 9168

The variable NYRSE can be thought of as an “inspection unit”, representing an identical “area of
opportunity” for each “sample”. The “process characteristic” to be controlled is the rate of clinic visits. A
u chart which monitors the average number of clinic visits per NYRSE is appropriate.

a. Use the data from P1 to construct a control chart for monitoring the rate of office visits performed at clinic
E. Does this chart exhibit control?

ū = (Σ uᵢ)/(Σ nᵢ) = 12112/5.25923 = 2303.0   (sums over the 8 phase 1 samples)

UCL = ū + 3√(ū/nᵢ) = 2303.0 + 3√(2303.0/nᵢ)

CL = ū = 2303.0

LCL = ū − 3√(ū/nᵢ) = 2303.0 − 3√(2303.0/nᵢ)

MTB > Stat > Control Charts > Attributes Charts > U

[u chart of phase 1 clinic visits per NYRSE (tests performed with unequal sample sizes): UCL = 2476.5, CL = 2303.0, LCL = 2129.5; all eight points fall within the limits]

u chart: UCL(n = 0.68820) = 2476.5, CL = 2303.0, LCL(n = 0.68820) = 2129.5


The process is in control.

b. Plot the data from P2 on the chart constructed in part (a). Is there a difference in the two phases?

MTB > Stat > Control Charts > Attributes Charts > U
In “Estimate” tab, enter 9:15 to omit the phase 2 data when estimating parameters.
[u chart of all 15 months with limits estimated from the phase 1 data only (tests performed with unequal sample sizes): UCL = 2465.0, CL = 2303.0, LCL = 2141.0; all seven phase 2 points plot above the UCL]

There is a difference between the two phases; all the points in phase two are out of control.

c. Consider only the P2 data. Do these data exhibit control?

ū = (Σ uᵢ)/(Σ nᵢ) = 13202/5.03217 = 2623.52   (sums over the 7 phase 2 samples)

UCL = ū + 3√(ū/nᵢ) = 2623.52 + 3√(2623.52/nᵢ)

CL = ū = 2623.52

LCL = ū − 3√(ū/nᵢ) = 2623.52 − 3√(2623.52/nᵢ)

MTB > Stat > Control Charts > Attributes Charts > U

[u chart of the phase 2 clinic visits per NYRSE (tests performed with unequal sample sizes): UCL = 2796.5, CL = 2623.5, LCL = 2450.6; all points fall within the limits]

u chart: UCL(n = 0.78947) = 2796.5, CL = 2623.5, LCL(n = 0.78947) = 2450.6


The phase two data are in statistical control.

6.32. The data in Table 6E.18 are the number of information errors found in customer records in a marketing
company database. Five records were sampled each day. Set up a c chart for the total number of errors. Is
the process in control?

Day Record 1 Record 2 Record 3 Record 4 Record 5


1 8 7 1 11 17
2 11 1 11 2 9
3 1 1 8 2 5
4 3 2 5 1 4
5 3 2 13 6 5
6 6 3 3 3 1
7 8 8 2 1 5
8 4 10 2 6 4
9 1 6 1 3 2
10 15 1 3 2 8
11 1 7 13 5 1
12 6 7 9 3 1
13 7 6 3 3 1
14 2 9 3 8 7
15 6 14 7 1 8
16 2 9 4 2 1
17 11 1 1 3 2
18 5 5 19 1 3
19 6 15 5 6 6
20 2 7 9 2 8
21 7 5 6 14 10
22 4 3 8 1 2
23 4 1 4 20 5
24 15 2 7 10 17
25 2 15 3 11 2

Given 25 samples, each of size 5, we have:

n = 5, m = 25, c̄ = (Σᵢ Σⱼ cᵢⱼ)/m = 698/25 = 27.92

UCL = c̄ + 3√c̄ = 27.92 + 3√27.92 = 43.77

CL = c̄ = 27.92

LCL = c̄ − 3√c̄ = 27.92 − 3√27.92 = 12.07


MTB > Stat > Control Charts > Attributes Charts > C

[c chart of total record errors per day: UCL = 43.77, CL = 27.92, LCL = 12.07; the totals for days 1 and 24 plot above the UCL]

c chart: UCL = 43.77, CL = 27.92, LCL = 12.07


The process is not in control; the totals for days 1 and 24 exceed the UCL.
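A Python sketch (NumPy assumed) of the same c chart; the daily totals below are the row sums of Table 6E.18:

import numpy as np

totals = np.array([44, 34, 17, 15, 29, 16, 24, 26, 13, 29, 27, 26, 20,
                   29, 36, 18, 18, 33, 38, 28, 42, 18, 34, 51, 33])
c_bar = totals.mean()                          # 698/25 = 27.92
ucl = c_bar + 3 * np.sqrt(c_bar)               # 43.77
lcl = max(c_bar - 3 * np.sqrt(c_bar), 0)       # 12.07
print(np.where((totals > ucl) | (totals < lcl))[0] + 1)   # days 1 and 24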

6.33. Kaminski et al. (1992) present data on the number of orders per truck at a distribution center. Some of this
data is shown in Table 6E.19. Set up a c chart for the number of orders per truck. Is the process in control?

Truck  No. of Orders   Truck  No. of Orders   Truck  No. of Orders   Truck  No. of Orders
1      22              9      5               17     8               25     6
2      58              10     26              18     35              26     13
3      7               11     12              19     6               27     9
4      39              12     26              20     23              28     21
5      7               13     10              21     10              29     8
6      33              14     30              22     17              30     12
7      8               15     5               23     7               31     4
8      23              16     24              24     10              32     18
n = 32, c̄ = (Σᵢ₌₁³² cᵢ)/n = 542/32 = 16.9375

UCL = c̄ + 3√c̄ = 16.9375 + 3√16.9375 = 29.28

CL = c̄ = 16.9375

LCL = c̄ − 3√c̄ = 16.9375 − 3√16.9375 = 4.59


MTB > Stat > Control Charts > Attributes Charts > C

[c chart of the number of orders per truck: UCL = 29.28, CL = 16.94, LCL = 4.59; five trucks plot above the UCL and one below the LCL]

c chart: UCL = 29.28, CL = 16.94, LCL = 4.59


The process is not in control; 5 of the trucks have orders above the UCL and one truck has orders below the
LCL.
CHAPTER 7

Lot-by-Lot Acceptance Sampling Procedures


Learning Objectives

After completing this chapter you should be able to:


1. Understand the role of acceptance-sampling in modern quality control systems
2. Understand the advantages and disadvantages of sampling
3. Understand the difference between attributes and variables sampling plans and the major types of
acceptance-sampling procedures
4. Know how single-, double-, and sequential-sampling plans are used
5. Understand the importance of random-sampling
6. Know how to determine the OC curve for a single-sampling plan for attributes
7. Understand the effects of the sampling plan parameters on sampling plan performance
8. Know how to design single-, double-, and sequential-sampling plans for attributes
9. Know how rectifying inspection is used
10. Understand the structure and use of MIL STD 105E and its civilian counterpart plans
11. Understand the structure and use of the Dodge–Romig system of sampling plans
12. Understand the structure and use of MIL STD 414 and its civilian counterpart plans

Important Terms and Concepts


100% inspection                   Dodge–Romig sampling plans               MIL STD 105E
Acceptance-sampling plan          Double-sampling plan                     Multiple-sampling plan
ISO 2859                          Ideal OC curve                           Random-sampling
ANSI/ASQ Z1.9                     Lot disposition actions                  Rectifying inspection
AOQL plans                        Lot sentencing                           Sequential-sampling plan
Attributes data                   Lot tolerance percent defective (LTPD)   Single-sampling plan
Average outgoing quality          LTPD plans                               Switching rules
Average total inspection                                                   Type-A and type-B OC curves

Exercises

7.1. Draw the type-B OC curve for the single sampling plan n=50 , c=1.

Pa = P{d ≤ c} = Σ_{d=0}^{1} [50!/(d!(50 − d)!)] p^d (1 − p)^{50−d}
Excel Formulas:

Excel Results:
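The cumulative binomial behind the spreadsheet can equally be computed with SciPy. A sketch (an alternative to, not a copy of, the original Excel solution):

import numpy as np
from scipy.stats import binom

p = np.linspace(0.001, 0.15, 60)      # lot fraction defective
pa = binom.cdf(1, 50, p)              # Pa = P{d <= c} with n = 50, c = 1
# Plotting pa against p (e.g., with matplotlib) reproduces the OC curve.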

7.2. Draw the type-B OC curve for the single-sampling plan n=100 , c=2.
Pa = P{d ≤ c} = Σ_{d=0}^{2} [100!/(d!(100 − d)!)] p^d (1 − p)^{100−d}

Excel Formulas:

Excel Results:
7.3. Suppose that a product is shipped in lots of size N=5,000 . The receiving inspection
procedure used is single-sampling with n=50 and c=1 .

a. Draw the type-A OC curve for the plan.

Hypergeometric distribution:

Pa = Pr{d ≤ c} = Σ_{d=0}^{1} [C(k, d) · C(5000 − k, 50 − d)] / C(5000, 50)

where k = pN and C(a, b) denotes the binomial coefficient.

Excel formulas:
A E F G H
1 N= 5000
2 n= 50
3 c= 1
4 d= 0 1

5
6 p D f(d=0) f(d=1) Pr{d<=1}
7 0.001 =INT(D7) =HYPGEOMDIST(E$4,$E$2,$E7,$E$1) =HYPGEOMDIST(F$4,$E$2,$E7,$E$1) =F7+G7
8 0.002 =INT(D8) =HYPGEOMDIST(E$4,$E$2,$E8,$E$1) =HYPGEOMDIST(F$4,$E$2,$E8,$E$1) =F8+G8
9 0.003 =INT(D9) =HYPGEOMDIST(E$4,$E$2,$E9,$E$1) =HYPGEOMDIST(F$4,$E$2,$E9,$E$1) =F9+G9
10 0.004 =INT(D10) =HYPGEOMDIST(E$4,$E$2,$E10,$E$1) =HYPGEOMDIST(F$4,$E$2,$E10,$E$1) =F10+G10
11 0.005 =INT(D11) =HYPGEOMDIST(E$4,$E$2,$E11,$E$1) =HYPGEOMDIST(F$4,$E$2,$E11,$E$1) =F11+G11
12 0.006 =INT(D12) =HYPGEOMDIST(E$4,$E$2,$E12,$E$1) =HYPGEOMDIST(F$4,$E$2,$E12,$E$1) =F12+G12
13 0.007 =INT(D13) =HYPGEOMDIST(E$4,$E$2,$E13,$E$1) =HYPGEOMDIST(F$4,$E$2,$E13,$E$1) =F13+G13
14 0.0075 =INT(D14) =HYPGEOMDIST(E$4,$E$2,$E14,$E$1) =HYPGEOMDIST(F$4,$E$2,$E14,$E$1) =F14+G14
15 0.008 =INT(D15) =HYPGEOMDIST(E$4,$E$2,$E15,$E$1) =HYPGEOMDIST(F$4,$E$2,$E15,$E$1) =F15+G15
16 0.009 =INT(D16) =HYPGEOMDIST(E$4,$E$2,$E16,$E$1) =HYPGEOMDIST(F$4,$E$2,$E16,$E$1) =F16+G16
17 …

Excel results:
A D E F G H
5 hypergeometric - Type A
6 p N*p D f(d=0) f(d=1) Pr{d<=1}
7 0.001 5 5 0.95097 0.04807 0.99904
8 0.002 10 10 0.90430 0.09151 0.99581
9 0.003 15 15 0.85988 0.13065 0.99053
10 0.004 20 20 0.81759 0.16581 0.98340
11 0.005 25 25 0.77735 0.19726 0.97461
12 0.006 30 30 0.73905 0.22527 0.96432
13 0.007 35 35 0.70260 0.25011 0.95271
14 0.008 37.5 37 0.68852 0.25921 0.94773
15 0.008 40 40 0.66791 0.27201 0.93992
16 0.009 45 45 0.63491 0.29118 0.92609
17 0.010 50 50 0.60350 0.30785 0.91135
18 …

Pa(number of defectives in population = 35, i.e., p = 0.007) = 0.95271, or α ≈ 0.05,

and Pa(number of defectives in population = 375, i.e., p = 0.075) = 0.1013, or β ≈ 0.10.

Excel graph:

[Type-A OC curve for N = 5000, n = 50, c = 1: P(acceptance) versus lot fraction defective]
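A SciPy sketch of the type-A calculation; note that SciPy's hypergeom is parameterized as (lot size, defectives in lot, sample size):

from scipy.stats import hypergeom

def type_a_pa(p, lot_size=5000, n=50, c=1):
    D = int(lot_size * p)                      # defectives in the lot
    return hypergeom.cdf(c, lot_size, D, n)    # Pa = Pr{d <= c}

print(type_a_pa(0.007))                        # ~0.9527, as in the table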

b. Draw the type-B OC curve for this plan and compare it to the type-A OC curve found
in part (a).

Excel formulas:

A B C D E
1 N= 5000
2 n= 50
3 c= 1
4 d= 0 1

5 binomial - Type B hypergeometric - Type A


6 p Pr{d<=1} N*p D
7 0.001 =BINOMDIST($B$3,$B$2,A7,TRUE) =$B$1*A7 =INT(D7)
8 0.002 =BINOMDIST($B$3,$B$2,A8,TRUE) =$B$1*A8 =INT(D8)
9 0.003 =BINOMDIST($B$3,$B$2,A9,TRUE) =$B$1*A9 =INT(D9)
10 0.004 =BINOMDIST($B$3,$B$2,A10,TRUE) =$B$1*A10 =INT(D10)
11 0.005 =BINOMDIST($B$3,$B$2,A11,TRUE) =$B$1*A11 =INT(D11)
12 0.006 =BINOMDIST($B$3,$B$2,A12,TRUE) =$B$1*A12 =INT(D12)
13 0.007 =BINOMDIST($B$3,$B$2,A13,TRUE) =$B$1*A13 =INT(D13)
14 0.0075 =BINOMDIST($B$3,$B$2,A14,TRUE) =$B$1*A14 =INT(D14)
15 0.008 =BINOMDIST($B$3,$B$2,A15,TRUE) =$B$1*A15 =INT(D15)
16 0.009 =BINOMDIST($B$3,$B$2,A16,TRUE) =$B$1*A16 =INT(D16)

Excel results:
A B

5 binomial - Type B
6 p Pr{d<=1}
7 0.001 0.99881
8 0.002 0.99540
9 0.003 0.98998
10 0.004 0.98274
11 0.005 0.97387
12 0.006 0.96353
13 0.007 0.95190
14 0.008 0.94563
15 0.008 0.93910
16 0.009 0.92528
17 0.010 0.91056
18 …

Pa(p = 0.007) = 0.9519, or α ≈ 0.05, and Pa(p = 0.075) = 0.1025, or β ≈ 0.10

Excel graph:

[Type-B OC curve for N = 5000, n = 50, c = 1: P(acceptance) versus lot fraction defective]

c. Which curve is appropriate for this situation?

Based on the values for α and β, the difference between the two curves is small; either is appropriate.

7.4. Find a single-sampling plan for which p1 = 0.01, α = 0.05, p2 = 0.10, and β = 0.10.

From the binomial nomograph (Figure 7.10), the sampling plan is n=35 and c=1 .

7.5. Find a single-sampling plan for which p1 = 0.05, α = 0.05, p2 = 0.15, and β = 0.10.

From the binomial nomograph (Figure 7.10), the sampling plan is n=80 and c=7 .

7.6. Find a single-sampling plan for which p1 = 0.02, α = 0.01, p2 = 0.06, and β = 0.10.

From the binomial nomograph (Figure 7.10), the sampling plan is n = 300 and c = 11.

7.7. A company uses the following acceptance-sampling procedure. A sample equal to


10% of the lot is taken. If 2% or less of the items in the sample are defective, the lot
is accepted; otherwise, it is rejected. If submitted lots vary in size from 5,000 to
10,000 units, what can you say about the protection provided by this plan? If 0.05 is the
desired LTPD, does this scheme offer reasonable protection to the consumer?

Excel formulas:
A B C D E F
1 LTPD = 0.05
2
3 N1 = 5000 N2 = 10000
4 n1 = =0.1*C3 n1 = =0.1*F3
5 pmax = 0.02 pmax = 0.02
6 cmax = =C5*C4 cmax = =F5*F4
7 binomial binomial
8 p Pr{d<=10} Pr{reject} Pr{d<=20} Pr{reject}
9 0.001 =BINOMDIST($C$6,$C$4,A9,TRUE) =1-B9 =BINOMDIST($F$6,$F$4,A9,TRUE) =1-E9
10 0.002 =BINOMDIST($C$6,$C$4,A10,TRUE) =1-B10 =BINOMDIST($F$6,$F$4,A10,TRUE) =1-E10
11 0.003 =BINOMDIST($C$6,$C$4,A11,TRUE) =1-B11 =BINOMDIST($F$6,$F$4,A11,TRUE) =1-E11
12 0.004 =BINOMDIST($C$6,$C$4,A12,TRUE) =1-B12 =BINOMDIST($F$6,$F$4,A12,TRUE) =1-E12
13 0.005 =BINOMDIST($C$6,$C$4,A13,TRUE) =1-B13 =BINOMDIST($F$6,$F$4,A13,TRUE) =1-E13
14 0.006 =BINOMDIST($C$6,$C$4,A14,TRUE) =1-B14 =BINOMDIST($F$6,$F$4,A14,TRUE) =1-E14
15 0.007 =BINOMDIST($C$6,$C$4,A15,TRUE) =1-B15 =BINOMDIST($F$6,$F$4,A15,TRUE) =1-E15
16 0.0075 =BINOMDIST($C$6,$C$4,A16,TRUE) =1-B16 =BINOMDIST($F$6,$F$4,A16,TRUE) =1-E16
17 0.008 =BINOMDIST($C$6,$C$4,A17,TRUE) =1-B17 =BINOMDIST($F$6,$F$4,A17,TRUE) =1-E17
18 0.009 =BINOMDIST($C$6,$C$4,A18,TRUE) =1-B18 =BINOMDIST($F$6,$F$4,A18,TRUE) =1-E18
19 0.01 =BINOMDIST($C$6,$C$4,A19,TRUE) =1-B19 =BINOMDIST($F$6,$F$4,A19,TRUE) =1-E19
20 …

Excel results:

A B C D E F G H
8 p Pr{d<=10} Pr{reject} Pr{d<=20} Pr{reject} difference
9 0.0010 1.00000 0.0000 1.00000 0.0000 0.00000
10 0.0020 1.00000 0.0000 1.00000 0.0000 0.00000
11 0.0030 1.00000 0.0000 1.00000 0.0000 0.00000
12 0.0040 0.99999 0.0000 1.00000 0.0000 -0.00001
13 0.0050 0.99994 0.0001 1.00000 0.0000 -0.00006
14 0.0060 0.99972 0.0003 1.00000 0.0000 -0.00027
15 0.0070 0.99903 0.0010 0.99999 0.0000 -0.00095
16 0.0075 0.99834 0.0017 0.99996 0.0000 -0.00163
17 0.0080 0.99729 0.0027 0.99991 0.0001 -0.00263
18 0.0090 0.99359 0.0064 0.99959 0.0004 -0.00600
19 0.0100 0.98676 0.0132 0.99850 0.0015 -0.01175
20 0.0110 0.97545 0.0245 0.99556 0.0044 -0.02010
21 0.0120 0.95837 0.0416 0.98886 0.0111 -0.03049
22 0.0130 0.93444 0.0656 0.97579 0.0242 -0.04135
23 0.0140 0.90298 0.0970 0.95330 0.0467 -0.05031
24 0.0150 0.86386 0.1361 0.91861 0.0814 -0.05474
25 0.0200 0.58304 0.4170 0.55910 0.4409 0.02395
26 0.0250 0.29404 0.7060 0.18221 0.8178 0.11183
27 0.0300 0.11479 0.8852 0.03328 0.9667 0.08151
28 0.0350 0.03631 0.9637 0.00380 0.9962 0.03251
29 0.0400 0.00967 0.9903 0.00030 0.9997 0.00938
30 0.0450 0.00224 0.9978 0.00002 1.0000 0.00222
31 0.0500 0.00046 0.9995 0.00000 1.0000 0.00046
32 …

Different sample sizes offer different levels of protection.

Pa(d ≤ 10; N = 5,000; p = 0.025) = 0.2940, Pa(d ≤ 20; N = 10,000; p = 0.025) = 0.1822

Lots of 5,000 (10,000) units that are 2.5% defective will be accepted only 29.40% (18.22%) of the time.
If the LTPD = 0.05, this sampling scheme will often lead to rejecting acceptable lots.

7.8. A company uses a sample size equal to the square root of the lot size. If 1% or less of the items in the
sample are defective, the lot is accepted; otherwise, it is rejected. Submitted lots vary in size from 1,000 to
5,000 units. Comment on the effectiveness of this procedure.

Excel formulas:
Excel Results:

Different sample sizes offer different levels of protection. The company will reject the lot if one or more
items in the sample are defective.

Pa(d = 0; N = 1,000; p = 0.025) = 0.45619, Pa(d = 0; N = 5,000; p = 0.025) = 0.16995

Lots of 1,000 (5,000) units that are 2.5% defective will be accepted 45.62% (17.00%) of the time, so the
protection obtained depends strongly on the lot size.

7.9. Consider the single-sampling plan found in Exercise 7.4. Suppose that lots of
N=2,000 are submitted. Draw the ATI curve for this plan. Draw the AOQ curve and
find the AOQL.
n=35 , c=1 , N =2,000

ATI = n + (1 − Pa)(N − n) = 35 + (1 − Pa)(2000 − 35) = 2000 − 1965 Pa

AOQ = Pa p (N − n)/N = (1965/2000) Pa p

AOQL = 0.0234
Excel formulas:

Excel results:
Excel graphs:

[ATI curve for n = 35, c = 1 (ATI versus p) and AOQ curve for n = 35, c = 1 (AOQ versus p, with its maximum at the AOQL)]
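A Python sketch (NumPy/SciPy assumed) of the ATI and AOQ computations; the AOQL is read off as the maximum of the AOQ curve:

import numpy as np
from scipy.stats import binom

N, n, c = 2000, 35, 1
p = np.linspace(0.001, 0.25, 500)
pa = binom.cdf(c, n, p)
ati = n + (1 - pa) * (N - n)          # average total inspection
aoq = pa * p * (N - n) / N            # average outgoing quality
print(aoq.max())                      # ~0.0234, the AOQL quoted above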

7.10. Suppose that a single-sampling plan with n=150 and c=2 is being used for receiving inspection where
the supplier ships the product in lots of size N=3,000.

a. Draw the OC curve for this plan.

Note that n/N = 150/3000 = 0.05 ≤ 0.10. Therefore, the Type-A and Type-B OC curves are virtually
indistinguishable.

Excel formulas for parts a, b, and c:

Excel Results for a, b, and c:


Excel Graph for OC Curve:

b. Draw the AOQ curve and find the AOQL.

AOQ = Pa p (N − n)/N = Pa p (3000 − 150)/3000 = 0.95 Pa p

AOQL = 0.008676
c. Draw the ATI curve for this plan.

ATI = n + (1 − Pa)(N − n) = 150 + (1 − Pa)(3000 − 150) = 3000 − 2850 Pa

7.11. Suppose that a supplier ships components in lots of size 5,000. A single-sampling
plan with n=50 and c=2 is being used for receiving inspection. Rejected lots are
screened, and all defective items are reworked and returned to the lot.

a. Draw the OC curve for this plan.

Excel formulas:

A B C
1 N = 5000 n = 50
2 c= 2
3 binomial
4 p Pa=Pr{d<=2} Pr{reject}
5 0.001 =BINOMDIST($C$2,$C$1,A5,TRUE) =1-B5
6 0.002 =BINOMDIST($C$2,$C$1,A6,TRUE) =1-B6
7 0.003 =BINOMDIST($C$2,$C$1,A7,TRUE) =1-B7
8 0.004 =BINOMDIST($C$2,$C$1,A8,TRUE) =1-B8
9 0.005 =BINOMDIST($C$2,$C$1,A9,TRUE) =1-B9
10 0.006 =BINOMDIST($C$2,$C$1,A10,TRUE) =1-B10
11 0.007 =BINOMDIST($C$2,$C$1,A11,TRUE) =1-B11
12 0.0075 =BINOMDIST($C$2,$C$1,A12,TRUE) =1-B12
13 0.008 =BINOMDIST($C$2,$C$1,A13,TRUE) =1-B13
14 0.009 =BINOMDIST($C$2,$C$1,A14,TRUE) =1-B14
15 0.01 =BINOMDIST($C$2,$C$1,A15,TRUE) =1-B15
16 …

Excel results:
A B C
1 N = 5000 n = 50
2 c= 2
3 binomial
4 p Pa=Pr{d<=2} Pr{reject}
5 0.0010 0.99998 0.00002
6 0.0020 0.99985 0.00015
7 0.0030 0.99952 0.00048
8 0.0040 0.99891 0.00109
9 0.0050 0.99794 0.00206
10 0.0060 0.99657 0.00343
11 0.0070 0.99474 0.00526
12 0.0075 0.99364 0.00636
13 0.0080 0.99242 0.00758
14 0.0090 0.98957 0.01043
15 0.0100 0.98618 0.01382
16 …

Excel graph:

[OC curve for n = 50, c = 2: probability of acceptance Pa versus fraction defective p]

b. Find the level of lot quality that will be rejected 90% of the time.

If P ( Rejection )=0.90 , then, P ( acceptance )=0.10. From the graph in part (a) with Pa=0.10,
p=0.057 will be rejected about 90% of the time.

c. Management has objected to the use of the above sampling procedure and wants to
use a plan with an acceptance number c=0, arguing that this is more consistent with
their Zero Defects program. What do you think of this?

A zero-defects sampling plan, with acceptance number c = 0, will be extremely hard on the vendor because
the Pa is low even if the lot fraction defective is low. Generally, quality improvement begins with the
manufacturing process control, not the sampling plan.

d. Design a single-sampling plan with c=0 that will give a 0.90 probability of rejection
of lots having the quality level found in part (b). Note that the two plans are now
matched at the LTPD point. Draw the OC curve for this plan and compare it to the
one forn=50, c=2 in part (a).

From the nomograph (Figure 7.10), select n = 20, yielding Pa = 1 − 0.11372 = 0.88628 ≈ 0.90. The OC
curve for this zero-defects plan is much steeper.
Excel formulas:

E F G
1 N = 5000 n = 20
2 c= 0
3 binomial
4 p Pa=Pr{d<=0} Pr{reject}
5 0.001 =BINOMDIST($G$2,$G$1,A5,TRUE) =1-F5
6 0.002 =BINOMDIST($G$2,$G$1,A6,TRUE) =1-F6
7 0.003 =BINOMDIST($G$2,$G$1,A7,TRUE) =1-F7
8 0.004 =BINOMDIST($G$2,$G$1,A8,TRUE) =1-F8
9 0.005 =BINOMDIST($G$2,$G$1,A9,TRUE) =1-F9
10 0.006 =BINOMDIST($G$2,$G$1,A10,TRUE) =1-F10
11 0.007 =BINOMDIST($G$2,$G$1,A11,TRUE) =1-F11
12 0.0075 =BINOMDIST($G$2,$G$1,A12,TRUE) =1-F12
13 0.008 =BINOMDIST($G$2,$G$1,A13,TRUE) =1-F13
14 0.009 =BINOMDIST($G$2,$G$1,A14,TRUE) =1-F14
15 0.01 =BINOMDIST($G$2,$G$1,A15,TRUE) =1-F15
16 …

Excel results:

E F G
1 N = 5000 n = 20
2 c= 0
3 binomial
4 p Pa=Pr{d<=0} Pr{reject}
5 0.0010 0.98019 0.01981
6 0.0020 0.96075 0.03925
7 0.0030 0.94168 0.05832
8 0.0040 0.92297 0.07703
9 0.0050 0.90461 0.09539
10 0.0060 0.88660 0.11340
11 0.0070 0.86893 0.13107
12 0.0075 0.86022 0.13978
13 0.0080 0.85160 0.14840
14 0.0090 0.83459 0.16541
15 0.0100 0.81791 0.18209
16 …

Excel graph:

[OC curve for n = 20, c = 0: probability of acceptance Pa versus fraction defective p; the curve is much steeper than for n = 50, c = 2]

e. Suppose that incoming lots are 0.5% nonconforming. What is the probability of
rejecting these
lots under both plans? Calculate the ATI at this point for both plans. Which plan do
you prefer? Why?

P(reject | p = 0.005, c = 0, n = 20) = 0.09539

P(reject | p = 0.005, c = 2, n = 50) = 0.00206

ATI(c = 0) = n + (1 − Pa)(N − n) = 20 + 0.09539(5000 − 20) = 495

ATI(c = 2) = n + (1 − Pa)(N − n) = 50 + 0.00206(5000 − 50) = 60

The c = 2 plan is preferred: it requires less inspection and rejects good lots far less often — lots with
p = 0.001 are rejected 1.98% of the time under c = 0 versus 0.002% of the time under c = 2.

7.12. Draw the primary and supplementary OC curves for a double-sampling plan with
n1=50 ,c 1=2 , n2=100 ,c 2=6. If the incoming lots have fraction nonconforming p=0.05, what is the
probability of acceptance on the first sample? What is the probability of final acceptance? Calculate the
probability of rejection on the first sample.

Excel Formulas:

Excel Results:
Excel graph:

n1 = 50, c1 = 2, n2 = 100, c2 = 6, p = 0.05

Pa^I = Σ_{d1=0}^{2} [50!/(d1!(50 − d1)!)] (0.05)^{d1} (0.95)^{50−d1} = 0.54053

Pa = Pa^I + Pa^II

Pa^II = P{d1 = 3, d2 ≤ 3} + P{d1 = 4, d2 ≤ 2} + P{d1 = 5, d2 ≤ 1} + P{d1 = 6, d2 = 0}
      = 0.05669 + 0.01608 + 0.00244 + 0.00015 = 0.07536

Pa = 0.54053 + 0.07536 = 0.61589

Pr{reject on first sample} = P{d1 > c2} = P{d1 > 6} = 1 − P{d1 ≤ 6}
                           = 1 − Σ_{d1=0}^{6} [50!/(d1!(50 − d1)!)] (0.05)^{d1} (0.95)^{50−d1}
                           = 1 − 0.98821 = 0.01179
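A SciPy sketch of the same double-sampling computation, mirroring the expression above term by term:

from scipy.stats import binom

p, n1, c1, n2, c2 = 0.05, 50, 2, 100, 6
pa1 = binom.cdf(c1, n1, p)                       # accept on the first sample
pa2 = sum(binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
          for d1 in range(c1 + 1, c2 + 1))       # accept on the second sample
p_rej1 = 1 - binom.cdf(c2, n1, p)                # reject on the first sample
print(pa1, pa1 + pa2, p_rej1)                    # 0.5405, 0.6159, 0.0118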
7.13.

a. Derive an item-by-item sequential-sampling plan for which p1 = 0.01, α = 0.05, p2 = 0.10, and β = 0.10.

p1 = 0.01, 1 − α = 1 − 0.05 = 0.95, p2 = 0.10, β = 0.10

k = log[p2(1 − p1) / (p1(1 − p2))] = log[0.10(0.99) / (0.01(0.90))] = 1.0414

h1 = [log((1 − α)/β)] / k = [log(0.95/0.10)] / 1.0414 = 0.9389

h2 = [log((1 − β)/α)] / k = [log(0.90/0.05)] / 1.0414 = 1.2054

s = [log((1 − p1)/(1 − p2))] / k = [log(0.99/0.90)] / 1.0414 = 0.0397

X_A = −h1 + sn = −0.9389 + 0.0397n,   X_R = h2 + sn = 1.2054 + 0.0397n

This problem can be solved in Excel.

Excel formulas:

A B C D E
1 p1 = 0.01 k= =LOG((B2*(1-B1))/(B1*(1-B2)))
2 p2 = 0.1 h1 = =(LOG((1-B3)/B4))/E1
3 alpha = 0.05 h2 = =(LOG((1-B4)/B3))/E1
4 beta = 0.1 s= =LOG((1-B1)/(1-B2))/E1
5
6 n XA XR Acc Rej
7 1 =-$E$2+$E$4*A7 =$E$3+$E$4*A7 n/a 2
8 =A7+1 =-$E$2+$E$4*A8 =$E$3+$E$4*A8 n/a 2
9 =A8+1 =-$E$2+$E$4*A9 =$E$3+$E$4*A9 n/a 2
10 =A9+1 =-$E$2+$E$4*A10 =$E$3+$E$4*A10 n/a 2
11 =A10+1 =-$E$2+$E$4*A11 =$E$3+$E$4*A11 n/a 2
12 =A11+1 =-$E$2+$E$4*A12 =$E$3+$E$4*A12 n/a 2
13 =A12+1 =-$E$2+$E$4*A13 =$E$3+$E$4*A13 n/a 2
14 =A13+1 =-$E$2+$E$4*A14 =$E$3+$E$4*A14 n/a 2
15 =A14+1 =-$E$2+$E$4*A15 =$E$3+$E$4*A15 n/a 2
16 =A15+1 =-$E$2+$E$4*A16 =$E$3+$E$4*A16 n/a 2
17 …

Excel results:
A B C D E
6 n XA XR Acc Rej
7 1 -0.899 1.245 n/a 2
8 2 -0.859 1.285 n/a 2
9 3 -0.820 1.325 n/a 2
10 4 -0.780 1.364 n/a 2
11 5 -0.740 1.404 n/a 2
17 …
27 20 -0.144 2.000 n/a 2
28 21 -0.104 2.040 n/a 3
29 22 -0.064 2.080 n/a 3
30 23 -0.025 2.120 n/a 3
31 24 0.015 2.159 0 3
32 25 0.055 2.199 0 3
33 …
53 45 0.850 2.994 0 3
54 46 0.890 3.034 0 4
55 47 0.929 3.074 0 4
56 48 0.969 3.113 0 4
57 49 1.009 3.153 1 4
58 50 1.049 3.193 1 4

The sampling plan is n=49 ; Acc=1; Rej=4 .

Item-by-Item Sampling Plans:

n    Accept  Reject
24   0       3
49   1       4
71   1       5
74   2       5
96   2       6
105  3       6

Note from exercise 7.4, the single sampling plan is n=35 , c=1. Therefore, the item-by-item sequential-
sampling plan would be truncated after the inspection of the 105th unit.
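The acceptance and rejection numbers come from rounding the two boundary lines; a Python sketch (standard library, base-10 logs as in the text) reproduces the plan-change points:

import math

k  = math.log10((0.10 * 0.99) / (0.01 * 0.90))   # 1.0414
h1 = math.log10(0.95 / 0.10) / k                 # 0.9389
h2 = math.log10(0.90 / 0.05) / k                 # 1.2054
s  = math.log10(0.99 / 0.90) / k                 # 0.0397

for n in (24, 49, 71, 74, 96, 105):
    xa, xr = -h1 + s * n, h2 + s * n
    print(n, math.floor(xa), math.ceil(xr))      # Acc = floor(XA), Rej = ceil(XR)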

b. Draw the OC curve for this plan.

Three points on the OC curve are:

p1 = 0.01;  Pa = 1 − α = 0.95

p = s = 0.0397;  Pa = h2/(h1 + h2) = 1.2054/(0.9389 + 1.2054) = 0.5621

p2 = 0.10;  Pa = β = 0.10

7.14.
a. Derive an item-by-item sequential-sampling plan for which p1 = 0.02, α = 0.05, p2 = 0.15, and β = 0.10.

p1 = 0.02, 1 − α = 1 − 0.05 = 0.95, p2 = 0.15, β = 0.10

k = log[p2(1 − p1) / (p1(1 − p2))] = log[0.15(0.98) / (0.02(0.85))] = 0.936868

h1 = [log((1 − α)/β)] / k = [log(0.95/0.10)] / 0.936868 = 1.0436

h2 = [log((1 − β)/α)] / k = [log(0.90/0.05)] / 0.936868 = 1.33986

s = [log((1 − p1)/(1 − p2))] / k = [log(0.98/0.85)] / 0.936868 = 0.06597

X_A = −h1 + sn = −1.0436 + 0.06597n,   X_R = h2 + sn = 1.33986 + 0.06597n

This problem can be solved in Excel.

Excel formulas:

Excel Results:
Item-by-Item Sampling Plans:

n Accept Reject
22 0 3
31 1 4
41 1 5
62 3 6
90 4 8

Note that the single sampling plan is n=30 , c=2. Therefore, the item-by-item sequential- sampling plan
would be truncated after the inspection of the 90th unit.

b. Draw the OC curve for this plan.

Three points on the OC curve are:

p1 = 0.02;  Pa = 1 − α = 0.95

p = s = 0.06597;  Pa = h2/(h1 + h2) = 1.33986/(1.0436 + 1.33986) = 0.5621

p2 = 0.15;  Pa = β = 0.10

7.15. Consider rectifying inspection for single sampling. Develop an AOQ equation
assuming that all defective items are removed but not replaced with good ones.

Starting from AOQ = Pa p (N − n)/N and adjusting the denominator for the defective items that are removed
but not replaced:

AOQ = Pa p (N − n) / [N − Pa np − (1 − Pa)Np]


7.16. A supplier ships a component in lots of size N=3000. The AQL has been established for this product at
1%. Find the normal, tightened, and reduced single-sampling plans for this situation from MIL STD 105E,
assuming that general inspection level II is appropriate.

N=3000 , AQL=1 %
General Level II

From Table 7.4:

The Sample Size Code Letter for Normal Sampling is “K”

From Tables 7.5, 7.6, 7.7:

Normal sampling plan: Sample size code letter ¿ K , n=125 , Acc=3 , Rej=4
Tightened sampling plan: Sample size code letter ¿ K , n=125 , Acc=2 , Rej=3
Reduced sampling plan: Sample size code letter ¿ K , n=50 , Acc=1, Rej=4

7.17. Repeat Exercise 7.16, using general inspection level I. Discuss the differences in the
various sampling plans.

N=3000, AQL=1 %

General level I

From Table 7.4:

The Sample Size Code Letter for Normal Sampling is “H”

From Tables 7.5, 7.6, 7.7:

Normal sampling plan: Sample size code letter ¿ H , n=50, Acc=1, Rej=2
Tightened sampling plan: Sample size code letter ¿ J , n=80, Acc=1, Rej=2
Reduced sampling plan: Sample size code letter ¿ H , n=20, Acc=0, Rej=2

The impact of changing from general inspection level II to level I is to reduce the sample size by about
50%, while also reducing the acceptance and rejection numbers. Recall that level I may be used when less
discrimination is needed.

7.18. A product is supplied in lots of size N=10,000 . The AQL has been specified at 0.10%. Find the normal,
tightened, and reduced single-sampling plans from MIL STD 105E, assuming general inspection level II.

N=10,000 , AQL=0.10 %
General level II

From Table 7.4:

The Sample Size Code Letter for Normal Sampling is “L”

From Tables 7.5, 7.6, 7.7:

Normal sampling plan: Sample size code letter ¿ K , n=125 , Acc=0 , Rej=1
Tightened sampling plan: Sample size code letter ¿ L , n=200 , Acc=0 , Rej=1
Reduced sampling plan: Sample size code letter ¿ K , n=50 , Acc=0 , Rej=1

7.19. MIL STD 105E is being used to inspect incoming lots of size N=5,000 . Single-
sampling, general inspection level II, and an AQL of 0.65% are being used.

a. Find the normal, tightened, and reduced inspection plans.

N=5000 , AQL=0.65 %
General level II
From Table 7.4:

The Sample Size Code Letter is “L”

From Tables 7.5, 7.6, 7.7:


Normal sampling plan: Sample size code letter ¿ L , n=200 , Acc=3, Rej=4
Tightened sampling plan: Sample size code letter ¿ L , n=200 , Acc=2, Rej=3
Reduced sampling plan: Sample size code letter ¿ L , n=80 , Acc=1, Rej=4

b. Draw the OC curves of the normal, tightened, and reduced inspection plans on the
same graph.

This exercise can be solved in Excel.

Excel formulas:

A B C D
1 N = 5000 normal tightened reduced
2 n= 200 200 80
3 c= 3 2 1
4
5 p Pa=Pr{d<=3} Pa=Pr{d<=2} Pa=Pr{d<=1}
6 0.001 =BINOMDIST($B$3,$B$2,A6,TRUE) =BINOMDIST($C$3,$C$2,A6,TRUE) =BINOMDIST($D$3,$D$2,A6,TRUE)
7 0.002 =BINOMDIST($B$3,$B$2,A7,TRUE) =BINOMDIST($C$3,$C$2,A7,TRUE) =BINOMDIST($D$3,$D$2,A7,TRUE)
8 0.003 =BINOMDIST($B$3,$B$2,A8,TRUE) =BINOMDIST($C$3,$C$2,A8,TRUE) =BINOMDIST($D$3,$D$2,A8,TRUE)
9 0.004 =BINOMDIST($B$3,$B$2,A9,TRUE) =BINOMDIST($C$3,$C$2,A9,TRUE) =BINOMDIST($D$3,$D$2,A9,TRUE)
10 0.005 =BINOMDIST($B$3,$B$2,A10,TRUE) =BINOMDIST($C$3,$C$2,A10,TRUE) =BINOMDIST($D$3,$D$2,A10,TRUE)
11 0.006 =BINOMDIST($B$3,$B$2,A11,TRUE) =BINOMDIST($C$3,$C$2,A11,TRUE) =BINOMDIST($D$3,$D$2,A11,TRUE)
12 0.007 =BINOMDIST($B$3,$B$2,A12,TRUE) =BINOMDIST($C$3,$C$2,A12,TRUE) =BINOMDIST($D$3,$D$2,A12,TRUE)
13 0.0075 =BINOMDIST($B$3,$B$2,A13,TRUE) =BINOMDIST($C$3,$C$2,A13,TRUE) =BINOMDIST($D$3,$D$2,A13,TRUE)
14 0.008 =BINOMDIST($B$3,$B$2,A14,TRUE) =BINOMDIST($C$3,$C$2,A14,TRUE) =BINOMDIST($D$3,$D$2,A14,TRUE)
15 0.009 =BINOMDIST($B$3,$B$2,A15,TRUE) =BINOMDIST($C$3,$C$2,A15,TRUE) =BINOMDIST($D$3,$D$2,A15,TRUE)
16 0.01 =BINOMDIST($B$3,$B$2,A16,TRUE) =BINOMDIST($C$3,$C$2,A16,TRUE) =BINOMDIST($D$3,$D$2,A16,TRUE)
17 …

Excel results:
A B C D
1 N = 5000 normal tightened reduced
2 n= 200 200 80
3 c= 3 2 1
4
5 p Pa=Pr{d<=3} Pa=Pr{d<=2} Pa=Pr{d<=1}
6 0.0010 0.9999 0.9989 0.9970
7 0.0020 0.9992 0.9922 0.9886
8 0.0030 0.9967 0.9771 0.9756
9 0.0040 0.9911 0.9529 0.9588
10 0.0050 0.9813 0.9202 0.9389
11 0.0060 0.9667 0.8800 0.9163
12 0.0070 0.9469 0.8340 0.8916
13 0.0075 0.9351 0.8093 0.8786
14 0.0080 0.9220 0.7838 0.8653
15 0.0090 0.8922 0.7309 0.8377
16 0.0100 0.8580 0.6767 0.8092
17 …

Excel graph:

OC Curve for N=2000, II, AQL=0.65%


1.2
Probability of Acceptance, Pa

0.8

0.6

0.4

0.2

0
0 0.02 0.04 0.06 0.08 0.1

Fraction Defective, p

Normal Tightened Reduced

7.20. A product is shipped in lots of size N=2,000. Find a Dodge-Romig single-sampling plan for which the
LTPD = 1%, assuming that the process average is 0.25% defective. Draw the OC curve and the ATI curve
for this plan. What is the AOQL for this sampling plan?

N = 2,000, LTPD = 1%, average process fallout p̄ = 0.25% defective

From Table 7.9: n = 490, c = 2

AOQL = 0.21%
Excel formulas:
Excel Results:

Excel Graphs:

7.21. We wish to find a single-sampling plan for a situation where lots are shipped from a
supplier. The supplier’s process operates at a fallout level of 0.50% defective. We
want the AOQL from the inspection
activity to be 3%.
a. Find the appropriate Dodge–Romig plan.

Dodge-Romig single sampling, AOQL = 3%, average process fallout p̄ = 0.50% defective

From Table 7.8, the minimum sampling plan that meets the quality requirements is for 50,001 ≤ N ≤ 100,000:
n = 65, c = 3 (last row, second process average column).

b. Draw the OC curve and the ATI curve for this plan. How much inspection will be
necessary, on the average, if the supplier’s process operates close to the average
fallout level?

This exercise can be solved in Excel.

Excel formulas:

A B C
1 N= 50001
2 n= 65
3 c= 3
4
5 p Pa ATI
6 0.001 =BINOMDIST($B$3,$B$2,A6,TRUE) =$B$2+(1-B6)*($B$1-$B$2)
7 0.002 =BINOMDIST($B$3,$B$2,A7,TRUE) =$B$2+(1-B7)*($B$1-$B$2)
8 0.003 =BINOMDIST($B$3,$B$2,A8,TRUE) =$B$2+(1-B8)*($B$1-$B$2)
9 0.004 =BINOMDIST($B$3,$B$2,A9,TRUE) =$B$2+(1-B9)*($B$1-$B$2)
10 0.005 =BINOMDIST($B$3,$B$2,A10,TRUE) =$B$2+(1-B10)*($B$1-$B$2)
11 0.006 =BINOMDIST($B$3,$B$2,A11,TRUE) =$B$2+(1-B11)*($B$1-$B$2)
12 0.007 =BINOMDIST($B$3,$B$2,A12,TRUE) =$B$2+(1-B12)*($B$1-$B$2)
13 0.0075 =BINOMDIST($B$3,$B$2,A13,TRUE) =$B$2+(1-B13)*($B$1-$B$2)
14 0.008 =BINOMDIST($B$3,$B$2,A14,TRUE) =$B$2+(1-B14)*($B$1-$B$2)
15 0.009 =BINOMDIST($B$3,$B$2,A15,TRUE) =$B$2+(1-B15)*($B$1-$B$2)
16 0.01 =BINOMDIST($B$3,$B$2,A16,TRUE) =$B$2+(1-B16)*($B$1-$B$2)
17 …

Excel results:

A B C
1 N= 50001
2 n= 65
3 c= 3
4
5 p Pa ATI
6 0.001 1.00000 65
7 0.002 0.99999 65
8 0.003 0.99995 67
9 0.004 0.99986 72
10 0.005 0.99967 82
11 0.006 0.99934 98
12 0.007 0.99884 123
13 0.008 0.99851 139
14 0.008 0.99812 159
15 0.009 0.99713 208
16 0.010 0.99583 273
17 …
Excel graph:

[OC curve for the Dodge–Romig plan n = 65, c = 3 (Pa versus p) and ATI curve for N = 50,001, n = 65, c = 3 (ATI versus p)]

Let N = 50,001:
Pa = Pr{d ≤ 3 | n = 65, p = 0.005} = 0.99967
ATI = n + (1 − Pa)(N − n) = 65 + (1 − 0.99967)(50,001 − 65) = 82

On average, if the vendor’s process operates close to process average, the average inspection required will
be 82 units.

c. What is the LTPD protection for this plan?

From Table 7.8, Lot = 50,001–100,000 (last row); p̄ = 0.50% defective (2nd process average column):
LTPD = 10.3%

7.22. A supplier ships a product in lots of size N=8,000. We wish to have an AOQL of 3%, and we are going
to use single-sampling. We do not know the supplier’s process fallout but suspect that it is at most 1%
defective.

a. Find the appropriate Dodge-Romig plan.

N=8000 , AOQL=3 %

From Table 7.8, Lot = 7,001–10,000; p̄ = 1% defective (3rd process average column)

Dodge-Romig single sampling plan: n=65 , c=3

b. Find the ATI for this plan, assuming that incoming lots are 1% defective.

Pa = Σ_{d=0}^{3} C(65, d) (0.01)^d (0.99)^{65−d} = 0.99583

ATI = n + (1 − Pa)(N − n) = 65 + (1 − 0.99583)(8000 − 65) = 98

c. Suppose that our estimate of the supplier’s process average is incorrect and that it is really 0.25% defective.
What sampling plan should we have used? What reduction in ATI would have been realized if we had used
the correct plan?

From Table 7.8, Lot = 7,001–10,000; p̄ = 0.25% defective (2nd process average column)

Dodge-Romig single sampling plan: n = 46, c = 2

Pa = Σ_{d=0}^{2} C(46, d) (0.0025)^d (0.9975)^{46−d} = 0.999781

ATI = n + (1 − Pa)(N − n) = 46 + (1 − 0.999781)(8000 − 46) ≈ 48, a reduction of about 50 units of
inspection per lot compared with the plan in part (b).

7.23. An inspector wants to use a variables sampling plan with an AQL of 1.5%. Lots are of size 7,000 and the
standard deviation is unknown. Find a sampling plan using Procedure 1 from MIL STD 414.

N=7,000 , AQL=1.5 %
Under MIL STD 414, Inspection level IV, sample size code letter = M (Table 7.10)

From Table 7.11:


Normal inspection: n=50 , k=1.80
Tightened inspection: n=50 , k=1.93

7.24. How does the sample size found in Exercise 7.23 compare with what would have
been used under MIL STD 105 E ?

Under MIL STD 105E, Inspection level II, Sample size code letter ¿ L (Tables 7.4, 7.5, 7.6, 7.7):

Normal Tightened Reduced


n 200 200 80
Acc 7 5 3
Rej 8 6 6

The MIL STD 414 sample sizes are considerably smaller than those for MIL STD 105 E .

7.25. A lot of 500 items is submitted for inspection. Suppose that we wish to find a plan from MIL STD 414,
using inspection level II. If the AQL is 4%, find the Procedure 1 sampling plan from the standard.

N=500 , AQL=4 % , Inspection level II


Under MIL STD 414, inspection level II, sample size code letter = E

From Table 7.11:


Normal Inspection: n=7 , k=1.15
Tightened inspection: n=7 , k=1.33

7.26. A soft-drink bottler purchases nonreturnable glass bottles from a supplier. The lower
specification on the bursting strength of the bottles is 225 psi. The bottler wishes to
use variables sampling to sentence the lots and has decided to use an AQL of 1%.
Find an appropriate set of normal and tightened sampling plans from the standard.
Suppose that a lot is submitted, and the sample results yield

x=255, s=10
Determine the disposition of the lot using Procedure 1. The lot size is N=100,000 .

LSL=225 psi, AQL=1 % , N=100,000

Assume inspection level IV, sample size code letter ¿ O (Table 7.10)
Normal sampling: n=100, k =2.00 (Table 7.11)
Tightened sampling: n=100, k =2.14 (Table 7.12)

Assume normal sampling is in effect:

x̄ = 255, S = 10

Z_LSL = (x̄ − LSL)/S = (255 − 225)/10 = 3.000 > k = 2.00, so accept the lot.
7.27. A chemical ingredient is packed in metal containers. A large shipment of these containers has been
delivered to a manufacturing facility. The mean bulk density of this ingredient should not be less than
0.15g/cm3. Suppose that lots of this quality are to have a 0.95 probability of acceptance. If the mean bulk
density is as low as 0.1450, the probability of acceptance of the lot should be 0.10. Suppose we know that
the standard deviation of bulk density is approximately 0.005/cm3. Obtain a variables sampling plan that
could be used to sentence the lots.

σ = 0.005 g/cm³. Accept the lot if x̄ ≥ x̄_A.

For x̄1 = 0.15 the probability of acceptance must be 0.95 (α = 0.05):

(x̄_A − x̄1)/(σ/√n) = −1.645 :  x̄_A = 0.15 − 1.645(0.005)/√n

For x̄2 = 0.145 the probability of acceptance must be β = 0.10:

(x̄_A − x̄2)/(σ/√n) = +1.282 :  x̄_A = 0.145 + 1.282(0.005)/√n

Solving simultaneously: √n = (1.645 + 1.282)(0.005)/(0.150 − 0.145) = 2.927, so n ≈ 9 and the
acceptance limit is x̄_A ≈ 0.147.
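A Python sketch (standard library) of solving the two equations simultaneously, under the accept-if-x̄ ≥ x̄_A convention used above:

z_alpha, z_beta = 1.645, 1.282
sigma, mu1, mu2 = 0.005, 0.150, 0.145

root_n = (z_alpha + z_beta) * sigma / (mu1 - mu2)   # 2.927
n = round(root_n ** 2)                              # n = 9
x_a = mu1 - z_alpha * sigma / root_n                # acceptance limit ~0.147
print(n, round(x_a, 4))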

7.28. A standard of 0.3 ppm has been established for formaldehyde emission levels in
wood products. Suppose that the standard deviation of emissions in an individual
board is σ =0.10 ppm. Any lot that contains 1% of its items above 0.3 ppm is
considered acceptable. Any lot that has 8% or more of its items above 0.3 ppm is
considered unacceptable. Good lots are to be accepted with probability 0.95, and bad
lots are to be rejected with probability 0.90.

a. Using the 1% nonconformance level as an AQL, and assuming that lots consist of
5,000 panels, find an appropriate set of sampling plans from MIL STD 414 , assuming
σ is unknown.
AQL = 1%, N = 5000, σ unknown
Double specification limit, assume inspection level IV

From Table 7.10:

Sample size code letter ¿ M

From Table 7.11 (Standard deviation method, Single-Specification Limit—Form 1)

Normal: n = 50, M =1.93


Tightened: n = 50, M =2.08

b. Find an attributes-sampling plan that has the same OC curve as the variables
sampling plan derived in part (a). Compare the sample sizes required for equivalent
protection. Under what circumstances would variables sampling be more
economically efficient?

1 − α = 0.95, β = 0.10, p1 = 0.01, p2 = 0.08

From the binomial nomograph (Figure 7.10): n = 60, c = 2

The sample size is slightly larger than required for the variables plan in (a). Variables sampling would be
more efficient if σ were known.

c. Using the 1% nonconforming as an AQL, find an attributes sampling plan from


MIL STD 105 E . Compare the sample sizes and the protection obtained from this
plan with the plans derived in parts (a) and (b).

AQL=1 % , N=5,000

Assume General Inspection Level II: sample size code letter = L (Table 7.4)

Normal: n=200, Acc=5, Rej=6 (Table 7.5)


Tightened: n=200, Acc=3, Rej=4 (Table 7.6)
Reduced: n=80, Acc=2, Rej=5(Table 7.7)

The sample sizes required are much larger than for the other plans.

7.29. Consider a single-sampling plan with n=25 , c=0.Draw the OC curve for this plan. Now consider chain-
sampling plans with n=25 , c=0 ,∧i=1, 2 , 5 ,7.Sketch the OC curves for these chain-sampling plans
on the same axis. Discuss the behavior of chain sampling in this situation compared to the conventional
single-sampling plan with c=0.

Excel Formulas:
Excel Results:

Excel Graph:
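The Excel output is not reproduced here, but the ChSP-1 operating characteristic, Pa = P(0, n) + P(1, n)·P(0, n)^i, is easy to sketch in Python (NumPy/SciPy assumed):

import numpy as np
from scipy.stats import binom

n = 25
p = np.linspace(0.001, 0.15, 100)
p0, p1 = binom.pmf(0, n, p), binom.pmf(1, n, p)
pa_single = p0                                   # single sampling with c = 0
pa_chain = {i: p0 + p1 * p0 ** i for i in (1, 2, 5, 7)}
# Plotted together, the chain-sampling curves lie above the c = 0 curve for
# small p and approach it as i increases.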

7.30. An electronics manufacturer buys memory devices in lots of 30,000 from a supplier. The supplier has a
long record of good quality performance, with an average fraction defective of approximately 0.10%. The
quality engineering department has suggested using a conventional acceptance-sampling plan with
n = 32, c = 0.

a. Draw the OC curve of this sampling plan.

Excel Formulas:

Excel Results:

Excel graph:
b. If lots are of a quality that is near the supplier’s long-term process average, what is the average total
inspection at that level of quality?
p = 0.001, Pa = P{d = 0} = (1 − 0.001)³² = 0.96849
ATI = n + (1 − Pa)(N − n) = 32 + (1 − 0.96849)(30,000 − 32) = 976

c. Consider a chain sampling plan with n=32 , c=0 ,∧i=3 . Contrast the performance of this plan with the
conventional sampling plan n=32 , c=0.

The chain-sampling plan with i=3 is similar to the single sampling plan. It provides a higher probability of
acceptance when 0< p< 0.04.

d. How would the performance of this chain sampling plan change if we substituted i=4 in part (c)?
A chain-sampling with i=3 and i=4 give similar results. i=3 provides a higher probability of
acceptance when 0< p< 0.03.

7.31. Suppose that a manufacturing process operates in continuous production, such that continuous sampling
plans could be applied. Determine three different CSP-1 sampling plans that could be used for an AOQL of
0.198%.

From Table 7.12:

1. f =1/2 and i=140


2. f =1/10 and i=550
3. f =1/100 and i=1302
7.32. For the sampling plans developed in Exercise 7.31, compare the plans’ performance
in terms of average fraction inspected, given that the process is in control at an
average fallout level of 0.15%. Compare the plans in terms of their operating-
characteristic curves.

This exercise may be solved in Excel.

Average process fallout, p = 0.15% = 0.0015 and q = 1 – p = 0.9985

1. f = 1/2 and i = 140: u = 155.915, v = 1333.3, AFI = 0.5523, Pa = 0.8953

2. f = 1/10 and i = 550: u = 855.530, v = 6666.7, AFI = 0.2024, Pa = 0.8863

3. f = 1/100 and i = 1302: u = 4040.000, v = 66,666.7, AFI = 0.0666, Pa = 0.9429

Excel formulas:
A B C D
1 AOQL=0.198%
2 p= 0.0015 q= =1-B2
3 f= 0.5 0.1 =1/100
4 i= 140 550 1302
5 u= =(1-$D$2^B4)/($B$2*$D$2^B4) =(1-$D$2^C4)/($B$2*$D$2^C4) =(1-$D$2^D4)/($B$2*$D$2^D4)
6 v= =1/(B3*$B$2) =1/(C3*$B$2) =1/(D3*$B$2)
7 AFI = =(B5+B3*B6)/(B5+B6) =(C5+C3*C6)/(C5+C6) =(D5+D3*D6)/(D5+D6)
8 Pa{p=.0015} =B6/(B5+B6) =C6/(C5+C6) =D6/(D5+D6)
9
10 f = 1/2 and i = 140
11 p u v Pa
12 0.001 =(1-(1-A12)^$B$4)/(A12*(1-A12)^$B$4) =1/($B$3*A12) =C12/(B12+C12)
13 0.0015 =(1-(1-A13)^$B$4)/(A13*(1-A13)^$B$4) =1/($B$3*A13) =C13/(B13+C13)
14 0.002 =(1-(1-A14)^$B$4)/(A14*(1-A14)^$B$4) =1/($B$3*A14) =C14/(B14+C14)
15 0.0025 =(1-(1-A15)^$B$4)/(A15*(1-A15)^$B$4) =1/($B$3*A15) =C15/(B15+C15)
16 0.003 =(1-(1-A16)^$B$4)/(A16*(1-A16)^$B$4) =1/($B$3*A16) =C16/(B16+C16)
17 …
35 0.06 =(1-(1-A35)^$B$4)/(A35*(1-A35)^$B$4) =1/($B$3*A35) =C35/(B35+C35)
36 0.07 =(1-(1-A36)^$B$4)/(A36*(1-A36)^$B$4) =1/($B$3*A36) =C36/(B36+C36)
37 0.08 =(1-(1-A37)^$B$4)/(A37*(1-A37)^$B$4) =1/($B$3*A37) =C37/(B37+C37)
38 0.09 =(1-(1-A38)^$B$4)/(A38*(1-A38)^$B$4) =1/($B$3*A38) =C38/(B38+C38)
39 0.1 =(1-(1-A39)^$B$4)/(A39*(1-A39)^$B$4) =1/($B$3*A39) =C39/(B39+C39)

Excel results:

A B C D E F G H I J K L
1 AOQL=0.198%
2 p= 0.0015 q= 0.9985
3 f= 0.5000 0.1000 0.0100
4 i= 140.0000 550.0000 1302.0000
5 u= 155.9150 855.5297 4040.0996
6 v= 1333.3333 6666.6667 66666.6667
7 AFI = 0.5523 0.2024 0.0666
8 Pa{p=.0015} 0.8953 0.8863 0.9429
9
10 f = 1/2 and i = 140 f = 1/10 and i = 550 f = 1/100 and i = 1302
11 p u v Pa u v Pa u v Pa
12 0.0010 1.5035E+02 2000.0000 0.9301 7.3373E+02 10000.0000 0.9316 2.6790E+03 100000.0000 0.9739
13 0.0015 1.5592E+02 1333.3333 0.8953 8.5553E+02 6666.6667 0.8863 4.0401E+03 66666.6667 0.9429
14 0.0020 1.6175E+02 1000.0000 0.8608 1.0037E+03 5000.0000 0.8328 6.2765E+03 50000.0000 0.8885
15 0.0025 1.6788E+02 800.0000 0.8266 1.1848E+03 4000.0000 0.7715 1.0010E+04 40000.0000 0.7998
16 0.0030 1.7431E+02 666.6667 0.7927 1.4066E+03 3333.3333 0.7032 1.6331E+04 33333.3333 0.6712
17 …
35 0.0600 9.6355E+04 33.3333 0.0003 1.0035E+16 166.6667 0.0000 1.6195E+36 1666.6667 0.0000
36 0.0700 3.6921E+05 28.5714 0.0001 3.0852E+18 142.8571 0.0000 1.5492E+42 1428.5714 0.0000
37 0.0800 1.4676E+06 25.0000 0.0000 1.0318E+21 125.0000 0.0000 1.7586E+48 1250.0000 0.0000
38 0.0900 6.0251E+06 22.2222 0.0000 3.7410E+23 111.1111 0.0000 2.3652E+54 1111.1111 0.0000
39 0.1000 2.5471E+07 20.0000 0.0000 1.4676E+26 100.0000 0.0000 3.7692E+60 1000.0000 0.0000
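A Python sketch (standard library) of the CSP-1 quantities used above, with u = (1 − q^i)/(pq^i), v = 1/(fp), AFI = (u + fv)/(u + v), and Pa = v/(u + v):

def csp1(f, i, p):
    q = 1 - p
    u = (1 - q**i) / (p * q**i)        # expected pieces under 100% inspection
    v = 1 / (f * p)                    # expected pieces under sampling
    return (u + f * v) / (u + v), v / (u + v)   # (AFI, Pa)

for f, i in ((1/2, 140), (1/10, 550), (1/100, 1302)):
    print(csp1(f, i, 0.0015))          # matches the AFI and Pa values above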

7.33. Suppose that CSP-1 is used for a manufacturing process where it is desired to maintain an AOQL of 1.90%.
Specify two CSP-1 plans that would meet this AOQL target.

From Table 7.12:


Plan A: f = 1/5 and i = 38

Plan B: f = 1/25 and i = 86

7.34. Compare the plans developed in Exercise 7.33 in terms of average fraction inspected
and their operating-characteristic curves. Which plan would you prefer if p=0.0375 ?

From Exercise 7.33:

CSP-1 with AOQL=1.90 %


Plan A: f = 1/5 and i = 38
Plan B: f = 1/25 and i = 86

Plan A: AFI = 0.5165 and Pa(p = 0.0375) = 0.6043
Plan B: AFI = 0.5272 and Pa(p = 0.0375) = 0.4925

Prefer Plan B over Plan A since it has a lower Pa at the unacceptable level of p.
CHAPTER 8

Process Design and Improvement with Designed Experiments
Learning Objectives

After completing this chapter you should be able to:


1. Explain how designed experiments can be used to improve product design and improve process
performance
2. Explain how designed experiments can be used to reduce the cycle time required to develop new products
and processes
3. Understand how main effects and interactions of factors can be estimated
4. Understand the factorial design concept
5. Know how to use the analysis of variance (ANOVA) to analyze data from factorial designs
6. Know how residuals are used for model adequacy checking for factorial designs
7. Know how to use the 2k system of factorial designs
8. Know how to construct and interpret contour plots and response surface plots
9. Know how to add center points to a 2k factorial design to test for curvature and provide an estimate of pure
experimental error
10. Understand how the blocking principal can be used in a factorial design to eliminate the effects of a
nuisance factor
11. Know how to use the 2k−p system of fractional factorial designs
12. Understand the basics of response surface methods
13. Understand the robust product and process design tools

Important Terms and Concepts


2k factorial designs                       Crossed array design                    Pre-experimental planning
2k−p fractional factorial designs          Curvature in the response function      Projection of fractional factorial designs
Aliasing                                   Defining relation for a fractional      Regression model representation of
Analysis of variance (ANOVA)                 factorial design                        experimental results
Analysis procedure for factorial designs   Factorial design                        Residual analysis
Blocking                                   Fractional factorial design             Residuals
Center points in 2k and 2k−p               Generators for a fractional             Resolution of a fractional factorial design
  factorial designs                          factorial design                      Response surface
Central composite design                   Guidelines for planning experiments     Robust design
Combined array design                      Interaction                             Screening experiments
Completely randomized design               Main effect of a factor                 Sequential experimentation
Confounding                                Noise variable                          Sparsity of effects principle
Contour plot                               Normal probability plot of effects      Two-factor interaction
Controllable process variables             Orthogonal design
Exercises

8.1. An article in Industrial Quality Control (1956, pp. 5–8) describes an experiment to investigate the effect of
glass type and phosphor type on the brightness of a television tube. The response measured is the current
necessary (in microamps) to obtain a specified brightness level. The data are shown in Table 8E.1.
Analyze the data and draw conclusions.

Glass Phosphor Type


Type 1 2 3
280 300 290
1 290 310 285
285 295 290
230 260 220
2 235 240 225
240 235 230

This experiment is three replicates of a factorial design in two factors—two levels of glass type and three
levels of phosphor type—to investigate brightness. Enter the data into the MINITAB worksheet using the
first three columns: one column for glass type, one column for phosphor type, and one column for
brightness. This is how the Excel file is structured (Chap13.xls). Since the experiment layout was not
created in MINITAB, the design must be defined before the results can be analyzed.

After entering the data in MINITAB, select Stat > DOE > Factorial > Define Custom Factorial
Design. Select the two factors (Glass Type and Phosphor Type), then for this exercise, check “General
full factorial”. The dialog box should look:

Next, select “Designs”. For this exercise, no information is provided on standard order, run order, point
type, or blocks, so leave the selections as below, and click “OK” twice.
Note that MINITAB added four new columns (4 through 7) to the worksheet. DO NOT insert or delete
columns between columns 1 through 7. MINITAB recognizes these contiguous seven columns as a
designed experiment; inserting or deleting columns will cause the design layout to become corrupt.

Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response (Brightness), then
click on “Terms”, verify that the selected terms are Glass Type, Phosphor Type, and their interaction, click
“OK”. Click on “Graphs”, select “Residuals Plots : Four in one”. The option to plot residuals versus
variables is for continuous factor levels; since the factor levels in this experiment are categorical, do not
select this option. Now click the box “Residuals versus variables:”, then select the two factors. Click
“OK”. Click on “Storage”, select “Fits” and “Residuals”, and click “OK” twice.

General Linear Model: Ex13-1Bright versus Ex13-1Glass, Ex13-1Phosphor

Factor Type Levels Values


Ex13-1Glass fixed 2 1, 2
Ex13-1Phosphor fixed 3 1, 2, 3

Analysis of Variance for Ex13-1Bright, using Adjusted SS for Tests


Source DF Seq SS Adj SS Adj MS F P
Ex13-1Glass 1 14450.0 14450.0 14450.0 273.79 0.000
Ex13-1Phosphor 2 933.3 933.3 466.7 8.84 0.004
Ex13-1Glass*Ex13-1Phosphor 2 133.3 133.3 66.7 1.26 0.318
Error 12 633.3 633.3 52.8
Total 17 16150.0

S = 7.26483 R-Sq = 96.08% R-Sq(adj) = 94.44%

The effects of glass ( p−value=0.000) and phosphor ( p−value=0.004) are significant, while the
effect of the interaction is not significant ( p−value=0.318).
Visual examination of residuals on the normal probability plot and histogram does not reveal any problems.
The plot of residuals versus observation order is not meaningful since no order was provided with the data.
If the model were re-fit with only Glass Type and Phosphor Type, the residuals should be re-examined.

There are some issues with the constant variance assumption across all levels of phosphor since the
variability at level 2 appears larger than the variability observed at levels 1 and 3.

Select Stat > DOE > Factorial > Factorial Plots. Select “Interaction Plot” and click on “Setup”,
select the response (Brightness) and both factors (Glass Type and Phosphor Type), and click “OK” twice.

The absence of a significant interaction is evident in the parallelism of the two lines. Final selected
combination of glass type and phosphor type depends on the desired brightness level.
Alternate Solution: This exercise may also be solved using MINITAB’s ANOVA functionality instead of
its DOE functionality. The DOE functionality was selected to illustrate the approach that will be used for
most of the remaining exercises. Select Stat > ANOVA > Two-Way, and complete the dialog box as
below.

Two-way ANOVA: Ex13-1Bright versus Ex13-1Glass, Ex13-1Phosphor

Source DF SS MS F P
Ex13-1Glass 1 14450.0 14450.0 273.79 0.000
Ex13-1Phosphor 2 933.3 466.7 8.84 0.004
Interaction 2 133.3 66.7 1.26 0.318
Error 12 633.3 52.8
Total 17 16150.0
S = 7.265 R-Sq = 96.08% R-Sq(adj) = 94.44%

Individual 95% CIs For Mean Based on


Pooled StDev
Ex13-1Glass Mean -----+---------+---------+---------+----
1 291.667 (--*-)
2 235.000 (--*-)
-----+---------+---------+---------+----
240 260 280 300

Individual 95% CIs For Mean Based on


Pooled StDev
Ex13-1Phosphor Mean -------+---------+---------+---------+--
1 260.000 (-------*-------)
2 273.333 (-------*-------)
3 256.667 (-------*-------)
-------+---------+---------+---------+--
256.0 264.0 272.0 280.0
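For readers without MINITAB, a sketch of the same two-factor ANOVA in Python (pandas and statsmodels assumed; the F- and p-values should match the output above):

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Table 8E.1 laid out one observation per row, as in the MINITAB worksheet.
df = pd.DataFrame({
    'glass':    [1]*9 + [2]*9,
    'phosphor': [1, 1, 1, 2, 2, 2, 3, 3, 3] * 2,
    'bright':   [280, 290, 285, 300, 310, 295, 290, 285, 290,
                 230, 235, 240, 260, 240, 235, 220, 225, 230],
})
model = smf.ols('bright ~ C(glass) * C(phosphor)', data=df).fit()
print(anova_lm(model))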

8.2. A process engineer is trying to improve the life of a cutting tool. He has run a 23 experiment using cutting
speed (A), metal hardness (B), and cutting angle (C) as the factors. The data from two replicates are shown
in Table 8E.2.
Replicate
Run
I II
(1) 221 311
a 325 435
b 354 348
ab 552 472
c 440 453
ac 406 377
bc 605 500
abc 392 419

a. Do any of the three factors affect tool life?

This experiment is a 23 factorial design (cutting speed, metal hardness, and cutting angle) to investigate
tool life. Generate a 23 factorial design: Select Stat > DOE > Factorial > Create Factorial Design.
Select 2-level factorial with 3 factors. Select Designs and choose number of replicates for corner points as
two. No information is given in the example about the run order of the experiment. Click OK.

Note that MINITAB created 7 columns in a new worksheet. DO NOT insert or delete columns between
columns 1 through 7. MINITAB recognizes these contiguous seven columns as a designed experiment;
inserting or deleting columns will cause the design layout to become corrupt. Now create a new column for
the response variable LIFE and enter in the values for tool life in the appropriate cells of the worksheet.

Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response (Life), then click on
“Terms”, verify that the selected terms are Speed, Hardness, Angle, and their interactions, and click “OK”.
Click on “Graphs”, select “Residuals Plots : Four in one”. Now click the box “Residuals versus
variables:”, then select the three factors. Click “OK”. Click on “Storage”, select “Fits” and “Residuals”,
and click “OK” twice.

Factorial Regression: Life versus Speed, Hardness, Angle

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 7 114888 16412.5 6.66 0.008
Linear 3 50317 16772.3 6.81 0.014
Speed 1 1332 1332.3 0.54 0.483
Hardness 1 28392 28392.3 11.53 0.009
Angle 1 20592 20592.3 8.36 0.020
2-Way Interactions 3 59741 19913.6 8.09 0.008
Speed*Hardness 1 506 506.3 0.21 0.662
Speed*Angle 1 56882 56882.2 23.10 0.001
Hardness*Angle 1 2352 2352.3 0.96 0.357
3-Way Interactions 1 4830 4830.2 1.96 0.199
Speed*Hardness*Angle 1 4830 4830.2 1.96 0.199
Error 8 19700 2462.5
Total 15 134588

Model Summary

S R-sq R-sq(adj) R-sq(pred)


49.6236 85.36% 72.56% 41.45%

The effects of hardness (p-value = 0.009), angle (p-value = 0.020), and the interaction between speed and angle (p-value = 0.001) are significant, while the main effect of speed and the other interaction terms are not significant (p-values > 0.05).
[Residual plots for Life: normal probability plot, histogram of the residuals, residuals versus fitted values, and residuals versus observation order.]

Visual examination of residuals on the normal probability plot and histogram does not reveal any problems.
The plot of the residuals versus fitted values indicates that the assumption of constant variance may not be
met. The plot of residuals versus observation order is not meaningful since no order was provided with the
data.

Now consider the reduced model which includes factors Speed, Hardness, Angle, and Speed*Angle.
Although Speed was not a significant factor on its own, because its interaction with angle is significant, we
leave it in the model to maintain hierarchy in the model.

Factorial Regression: Life versus Speed, Hardness, Angle

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 4 107199 26800 10.76 0.001
Linear 3 50317 16772 6.74 0.008
Speed 1 1332 1332 0.54 0.480
Hardness 1 28392 28392 11.40 0.006
Angle 1 20592 20592 8.27 0.015
2-Way Interactions 1 56882 56882 22.85 0.001
Speed*Angle 1 56882 56882 22.85 0.001
Error 11 27389 2490
Lack-of-Fit 3 7689 2563 1.04 0.425
Pure Error 8 19700 2463
Total 15 134588

Model Summary

S R-sq R-sq(adj) R-sq(pred)


49.8988 79.65% 72.25% 56.95%

The diagnostic plots of the residuals do not reveal any unusual behavior or violations of assumptions.
[Residual plots for Life (reduced model): normal probability plot, histogram of the residuals, residuals versus fitted values, and residuals versus observation order.]

Select Stat > DOE > Factorial > Factorial Plots. Select “Interaction Plot” and click on “Setup”, select the response (Life) and the factors Speed and Angle, and click “OK” twice.

[Interaction plot of fitted means for Life: Speed on the horizontal axis, with separate lines for Angle = −1 and Angle = +1.]

The interaction between speed and angle is evident in the interaction plot.

The regression equation (in coded units) is:

ŷ = 413.1 + 9.1·speed + 42.1·hardness + 35.9·angle − 59.6·speed·angle
b. What combination of factor levels produces the longest tool life?

Consider the cube plot. MTB > Stat > DOE > Factorial > Cube Plot

[Cube plot (fitted means) for Life, Speed × Hardness × Angle:
Angle = −1:  Speed −1/Hardness −1: 266.375;  Speed +1/Hardness −1: 403.875;  Speed −1/Hardness +1: 350.625;  Speed +1/Hardness +1: 488.125
Angle = +1:  Speed −1/Hardness −1: 457.375;  Speed +1/Hardness −1: 356.375;  Speed −1/Hardness +1: 541.625;  Speed +1/Hardness +1: 440.625]

The low level of speed and high levels of hardness and angle produce the longest tool life.

ŷ = 413.1 + 9.1·(−1) + 42.1·(+1) + 35.9·(+1) − 59.6·(−1)·(+1) = 541.6
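As a quick numeric cross-check (ours, in Python rather than MINITAB), the reduced coded-unit model can be evaluated at any corner of the cube; the function name is illustrative.

# Evaluate the reduced coded-unit model for tool life (coefficients from above).
def predicted_life(speed, hardness, angle):
    return (413.1 + 9.1 * speed + 42.1 * hardness
            + 35.9 * angle - 59.6 * speed * angle)

# Corner suggested by the cube plot: low speed, high hardness, high angle.
print(predicted_life(-1, +1, +1))  # 541.6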
c. Is there a combination of cutting speed and cutting angle that always gives good results regardless of metal
hardness?

Consider contour plots of speed and angle for hardness fixed at -1 and +1.

MTB > Stat > DOE > Factorial > Contour Plot

Click settings and set Hardness to -1 or +1

[Contour plots of Life versus Angle and Speed, with Hardness held at −1 (contour bands from <300 to >450) and at +1 (contour bands from <360 to >520).]

The low level of speed and high level of angle tend to give good results regardless of the metal hardness.
The high level of speed and low level of angle also tends to give good results; however, this region is much
smaller compared to the area with the low level of speed and high level of angle.

8.3. Find the residuals from the tool life experiment in Exercise 8.2. Construct a normal probability plot of the
residuals. Plot the residuals versus the predicted values. Comment on the plots.

Run    Replicate I    Replicate II
(1)    221    311
a 325 435
b 354 348
ab 552 472
c 440 453
ac 406 377
bc 605 500
abc 392 419

To find the residuals, select Stat > DOE > Factorial > Analyze Factorial Design. Select “Terms”
and verify that all terms for the reduced model (A, B, C, AC) are included. Select “Graphs”, and for
residuals plots choose “Normal plot” and “Residuals versus fits”. To save residuals to the worksheet,
select “Storage” and choose “Residuals”.

[Normal probability plot of the residuals and residuals versus fitted values (response is Life).]

Normal probability plot of residuals indicates that the normality assumption is reasonable. Residuals
versus fitted values plot shows that the equal variance assumption across the prediction range is reasonable.

8.4. Four factors are thought to possibly influence the taste of a soft-drink beverage: type of sweetener (A), ratio of syrup to water (B), carbonation level (C), and temperature (D). Each factor can be run at two levels, producing a 2⁴ design. At each run in the design, samples of the beverage are given to a test panel consisting of 20 people. Each tester assigns a point score from 1 to 10 to the beverage. Total score is the response variable, and the objective is to find a formulation that maximizes total score. Two replicates of this design are run, and the results are shown in Table 8E.3. Analyze the data and draw conclusions.

Treatment Combination    Replicate I    Replicate II
(1) 188 195
a 172 180
b 179 187
ab 185 178
c 175 180
ac 183 178
bc 190 180
abc 175 168
d 200 193
ad 170 178
bd 189 181
abd 183 188
cd 201 188
acd 181 173
bcd 189 182
abcd 178 182

Generate a 2⁴ factorial design: Select Stat > DOE > Factorial > Create Factorial Design. Select 2-level factorial with 4 factors. Select Designs and choose number of replicates for corner points as two. No information is given in the example about the run order of the experiment. Click OK.

Select Stat > DOE > Factorial > Analyze Factorial Design. Select the response variable, then click
on “Terms”, verify that the selected factors and their interactions are selected, click “OK”. Click on
“Graphs”, select “Residuals Plots : Four in one”.

Factorial Regression: Total Score versus A, B, C, D


Full Factorial Design

Factors: 4 Base Design: 4, 16


Runs: 32 Replicates: 2
Blocks: 1 Center pts (total): 0

All terms are free from aliasing.

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 182.781 0.950 192.31 0.000
A -9.063 -4.531 0.950 -4.77 0.000 1.00
B -1.313 -0.656 0.950 -0.69 0.500 1.00
C -2.688 -1.344 0.950 -1.41 0.177 1.00
D 3.938 1.969 0.950 2.07 0.055 1.00
A*B 4.063 2.031 0.950 2.14 0.048 1.00
A*C 0.688 0.344 0.950 0.36 0.722 1.00
A*D -2.188 -1.094 0.950 -1.15 0.267 1.00
B*C -0.562 -0.281 0.950 -0.30 0.771 1.00
B*D -0.188 -0.094 0.950 -0.10 0.923 1.00
C*D 1.687 0.844 0.950 0.89 0.388 1.00
A*B*C -5.187 -2.594 0.950 -2.73 0.015 1.00
A*B*D 4.687 2.344 0.950 2.47 0.025 1.00
A*C*D -0.937 -0.469 0.950 -0.49 0.629 1.00
B*C*D -0.937 -0.469 0.950 -0.49 0.629 1.00
A*B*C*D 2.437 1.219 0.950 1.28 0.218 1.00

Regression Equation in Uncoded Units

Total Score = 182.781 - 4.531 A - 0.656 B - 1.344 C + 1.969 D


+ 2.031 A*B + 0.344 A*C - 1.094 A*D - 0.281 B*C - 0.094 B*D +
0.844 C*D - 2.594 A*B*C + 2.344 A*B*D - 0.469 A*C*D -
0.469 B*C*D + 1.219 A*B*C*D

Factors A (sweetener) and D (temperature) both have relatively small p-values (<0.10). The two-way interaction between A (sweetener) and B (syrup-to-water ratio) is significant (p-value = 0.048). Two three-way interactions are also significant in the model: the interaction between sweetener, syrup-to-water ratio, and carbonation level, and the interaction between sweetener, syrup-to-water ratio, and temperature.

We can see these effects in the normal probability plot of the effect estimates as well.
[Normal plot of the standardized effects for Total Score (α = 0.05), flagging A, AB, ABC, and ABD as significant, together with four-in-one residual plots for Total Score.]

The normal probability plot of the residuals indicates that the normality assumption may not be reasonable, since the points do not fall along a straight line. The assumption of constant variance may also not be reasonable.

We can remove a few insignificant terms from the model: CD, ACD, BCD, and ABCD. Note that although
the other main effects and two-way interactions (B, C, D, AC, AD, BC, and BD) were not statistically
significant, we leave these terms in the model to maintain hierarchy in the model.

Factorial Regression: Total Score versus A, B, C, D


Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 182.781 0.924 197.73 0.000
A -9.063 -4.531 0.924 -4.90 0.000 1.00
B -1.313 -0.656 0.924 -0.71 0.486 1.00
C -2.688 -1.344 0.924 -1.45 0.162 1.00
D 3.938 1.969 0.924 2.13 0.046 1.00
A*B 4.063 2.031 0.924 2.20 0.040 1.00
A*C 0.688 0.344 0.924 0.37 0.714 1.00
A*D -2.188 -1.094 0.924 -1.18 0.251 1.00
B*C -0.562 -0.281 0.924 -0.30 0.764 1.00
B*D -0.188 -0.094 0.924 -0.10 0.920 1.00
A*B*C -5.188 -2.594 0.924 -2.81 0.011 1.00
A*B*D 4.687 2.344 0.924 2.54 0.020 1.00

Regression Equation in Uncoded Units

Total Score = 182.781 - 4.531 A - 0.656 B - 1.344 C + 1.969 D + 2.031 A*B


+ 0.344 A*C - 1.094 A*D - 0.281 B*C - 0.094 B*D - 2.594 A*B*C + 2.344 A*B*D

[Residual plots for Total Score (reduced model): normal probability plot, residuals versus fits, histogram, and residuals versus observation order.]
The normal probability plot of the residuals of the reduced model is an improvement over the full model,
although the points still do not lie in a straight line. The constant variance assumption seems reasonable.
There is one unusual observation as noted by the large residual with this model.

The sweetener has the largest effect on the taste of the soft drink. The low level of sweetener gives higher taste scores.

8.5. Consider the experiment in Exercise 8.4. Plot the residuals against the levels of factors A, B, C, and D. Also
construct a normal probability plot of the residuals. Comment on these plots.

To find the residuals, select Stat > DOE > Factorial > Analyze Factorial Design. Select “Terms”
and verify that all terms for the reduced model are included. Select “Graphs”, choose “Normal plot” of
residuals and “Residuals versus variables”, and then select the variables.

[Normal probability plot of the residuals, and plots of residuals versus Sweetener, Syrup to Water Ratio, Carbonation Level, and Temperature (response is Total Score).]

There appears to be a slight indication of inequality of variance for sweetener and syrup ratio, as well as a
slight indication of an outlier. This is not serious enough to warrant concern.

8.6. Find the standard error of the effects for the experiment in Exercise 8.4. Using the standard errors as a
guide, what factors appear significant?

From the ANOVA table from Exercise 8.4 (using the reduced model), we can use the MSE to estimate σ².
Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 11 1420.59 129.145 4.72 0.001
Linear 4 852.62 213.156 7.80 0.001
A 1 657.03 657.031 24.03 0.000
B 1 13.78 13.781 0.50 0.486
C 1 57.78 57.781 2.11 0.162
D 1 124.03 124.031 4.54 0.046
2-Way Interactions 5 176.91 35.381 1.29 0.306
A*B 1 132.03 132.031 4.83 0.040
A*C 1 3.78 3.781 0.14 0.714
A*D 1 38.28 38.281 1.40 0.251
B*C 1 2.53 2.531 0.09 0.764
B*D 1 0.28 0.281 0.01 0.920
3-Way Interactions 2 391.06 195.531 7.15 0.005
A*B*C 1 215.28 215.281 7.87 0.011
A*B*D 1 175.78 175.781 6.43 0.020
Error 20 546.87 27.344
Lack-of-Fit 4 84.37 21.094 0.73 0.585
Pure Error 16 462.50 28.906
Total 31 1967.47

s.e.(β̂) = √(σ̂²/(n·2^k)) = √(MSE/(n·2^k)) = √(27.344/(2·2⁴)) = 0.924

Using the coefficient estimates from Exercise 8.4 and the standard error of the coefficient estimates, the
following terms appear significant: A, AB, ABC, and ABD.
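As a cross-check of the standard-error calculation, a short Python sketch (ours, not part of the original solution):

import math

MSE, n, k = 27.344, 2, 4            # values from the ANOVA table above
se_coef = math.sqrt(MSE / (n * 2 ** k))
print(round(se_coef, 3))            # 0.924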

8.7. Suppose that only the data from replicate I in Exercise 8.4 were available. Analyze the data and draw
appropriate conclusions.

Create a 2⁴ factorial design in MINITAB, and then enter the data. Select Stat > DOE > Factorial > Analyze Factorial Design. Since there is only one replicate of the experiment, select “Terms” and verify that all terms are selected. Then select “Graphs”, choose the normal effects plot, and set alpha to 0.10.

Factorial Fit: Total Score versus Sweetener, Syrup to Water, ...


Estimated Effects and Coefficients for Total Score (coded units)
Term Effect Coef
Constant 183.625
Sweetener -10.500 -5.250
Syrup to Water -0.250 -0.125
Carbonation 0.750 0.375
Temperature 5.500 2.750
Sweetener*Syrup to Water 4.000 2.000
Sweetener*Carbonation 1.000 0.500
Sweetener*Temperature -6.250 -3.125
Syrup to Water*Carbonation -1.750 -0.875
Syrup to Water*Temperature -3.000 -1.500
Carbonation*Temperature 1.000 0.500
Sweetener*Syrup to Water*Carbonation -7.500 -3.750
Sweetener*Syrup to Water*Temperature 4.250 2.125
Sweetener*Carbonation*Temperature 0.250 0.125
Syrup to Water*Carbonation* -2.500 -1.250
Temperature
Sweetener*Syrup to Water* 3.750 1.875
Carbonation*Temperature

Normal Plot of the Effects (response is Total Score, α = 0.10): [only factor A (Sweetener) is flagged as significant; Lenth’s PSE = 4.5]

From visual examination of the normal probability plot of effects, only factor A (sweetener) is significant at α = 0.10.
Re-fit and analyze the reduced model.

Factorial Fit: Total Score versus Sweetener


Estimated Effects and Coefficients for Total Score (coded units)
Term Effect Coef SE Coef T P
Constant 183.625 1.865 98.48 0.000
Sweetener -10.500 -5.250 1.865 -2.82 0.014

S = 7.45822 R-Sq = 36.15% R-Sq(adj) = 31.59%

Analysis of Variance for Total Score (coded units)


Source DF Seq SS Adj SS Adj MS F P
Main Effects 1 441.00 441.000 441.00 7.93 0.014
Residual Error 14 778.75 778.750 55.63
Pure Error 14 778.75 778.750 55.63
Total 15 1219.75
[Residual plots for the reduced model (response is Total Score): normal probability plot, residuals versus fits, and residuals versus Sweetener, Syrup to Water, Carbonation, and Temperature.]

The normality assumption appears to be reasonable. There appears to be a slight indication of inequality of
variance for sweetener, as well as in the predicted values. This is not serious enough to warrant concern.
Plots of the residuals versus the other variables do not indicate any unusual patterns, indicating they do not
need to be included in the model.

A reduced model based on factor A (type of sweetener) is sufficient to model taste. The assumptions of
normality and constant variance are reasonable for the reduced model.

8.8. Suppose that only one replicate of the 2⁴ design in Exercise 8.4 could be run, and that we could only conduct eight tests each day. Set up a design that would block out the day effect. Show specifically which runs would be made on each day.

A 2⁴ design in two blocks will lose the ABCD interaction to blocks; that is, the ABCD interaction is confounded with blocks.

Block 1 Block 2
a (1)
b ab
c ac
abc bc
d ad
abd bd
acd cd
bcd abcd
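The block assignment can also be generated mechanically: each run goes to the block determined by the sign of ABCD at that treatment combination. A small Python sketch (ours; block labels chosen to match the table above):

import math
from itertools import product

# The sign of ABCD at each treatment combination decides the block.
for levels in product([-1, 1], repeat=4):
    label = "".join(f for f, lev in zip("abcd", levels) if lev == 1) or "(1)"
    block = 1 if math.prod(levels) == -1 else 2
    print(label, "-> Block", block)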
8.9. Show how a 2⁵ experiment could be set up in two blocks of 16 runs each. Specifically, which runs would be made in each block?

A 2⁵ design in two blocks will lose the ABCDE interaction to blocks.

Block 1                  Block 2
(1)    ae                a      e
ab     be                b      abe
ac     ce                c      ace
bc     abce              abc    bce
ad     de                d      ade
bd     abde              abd    bde
cd     acde              acd    cde
abcd   bcde              bcd    abcde

8.10. R. D. Snee (“Experimenting with a Large Number of Variables,” in Experiments in Industry: Design, Analysis and Interpretation of Results, by R. D. Snee, L. B. Hare, and J. B. Trout, editors, ASQC, 1985) describes an experiment in which a 2⁵⁻¹ design with I = ABCDE was used to investigate the effects of five factors on the color of a chemical product. The factors were A = solvent/reactant, B = catalyst/reactant, C = temperature, D = reactant purity, and E = reactant pH. The results obtained are as follows:

e=−0.63 d=6.79
a=2.51 ade=6.47
b=2.68 bde=3.45
abe=1.66 abd =5.68
c=2.06 cde=5.22
ace=1.22 acd =4.38
bce=−2.09 bcd=4.30
abc=1.93 abcde=4.05

a. Prepare a normal probability plot of the effects. Which effects seem active? Fit a model using these effects.

Normal Plot of the Effects


(response is color, α = 0.10)
99
Effect Type
Not Significant
95 D Significant
90
Factor Name
80 A solvent
B catalyst
70
Percent

C temperature
60
D reactant purity
50
E reactant pH
40
30
20

10
5

1
-2 -1 0 1 2 3 4
Effect
Lenth’s PSE = 0.8325
Term Effect Coef
Constant 3.105
solvent 0.7650 0.3825
catalyst -0.7950 -0.3975
temperature -0.9425 -0.4712
reactant purity 3.875 1.938
reactant pH -1.3725 -0.6862
solvent*catalyst 0.4800 0.2400
solvent*temperature -0.2425 -0.1212
solvent*reactant purity -0.5600 -0.2800
solvent*reactant pH 1.0975 0.5488
catalyst*temperature -0.3775 -0.1888
catalyst*reactant purity -0.5500 -0.2750
catalyst*reactant pH -0.5075 -0.2537
temperature*reactant purity -0.16750 -0.08375
temperature*reactant pH 0.3050 0.1525
reactant purity*reactant pH 0.8825 0.4412

Only factor D (reactant purity) appears to be significant from the normal plot. Looking at the values of the
coefficients with the full model, factor E (reactant pH) has the next largest (in magnitude) coefficient, so
we will include both of these variables in the model.

Factorial Regression: color versus reactant purity, reactant pH

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 2 67.598 33.799 19.92 0.000
Linear 2 67.598 33.799 19.92 0.000
reactant purity 1 60.063 60.063 35.39 0.000
reactant pH 1 7.535 7.535 4.44 0.055
Error 13 22.061 1.697
Total 15 89.659

Model Summary

S R-sq R-sq(adj) R-sq(pred)


1.30270 75.39% 71.61% 62.73%

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 3.105 0.326 9.53 0.000
reactant purity 3.875 1.938 0.326 5.95 0.000 1.00
reactant pH -1.373 -0.686 0.326 -2.11 0.055 1.00

Both terms are significant at a 10% significance level.

b. Calculate the residuals for the model you fit in part (a). Construct a normal probability plot of the residuals
and plot the residuals versus the fitted values. Comment on the plots.
[Normal probability plot of the residuals and residuals versus fitted values (response is color).]

The normality assumption appears to be reasonable. The constant variance assumption may not be appropriate here; there are some differences in the spread of the residuals across the fitted values.

c. If any factors are negligible, collapse the 2⁵⁻¹ design into a full factorial in the active factors. Comment on the resulting design and interpret the results.

Factors A, B, and C are not significant effects, so the 2⁵⁻¹ design collapses into a 2² design with factors D and E. MTB > Stat > DOE > Factorial > Cube Plot. Select Data means and make sure factors D and E are selected.

[Plot of data means for color versus reactant purity and reactant pH:
purity −1, pH −1: 2.2950;  purity +1, pH −1: 5.2875;  purity −1, pH +1: 0.0400;  purity +1, pH +1: 4.7975]

Reactant purity has a positive effect on the color; reactant pH has an inverse relationship with color.

8.11. An article in Industrial and Engineering Chemistry (“More on Planning Experiments to Increase Research Efficiency,” 1970, pp. 60–65) uses a 2⁵⁻² design to investigate the effect of A = condensation temperature, B = amount of material 1, C = solvent volume, D = condensation time, and E = amount of material 2, on yield. The results obtained are as follows:

e = 23.2      cd = 23.8
ab = 15.5     ace = 23.4
ad = 16.9     bde = 16.8
bc = 16.2     abcde = 18.1

a. Verify that the design generators used were I = ACE and I = BDE.

ACE = +1 and BDE = +1 for all treatments. For example:

Treatment   A    B    C    D    E    ACE                     BDE
e           −1   −1   −1   −1   +1   (−1)(−1)(+1) = +1       (−1)(−1)(+1) = +1
ab          +1   +1   −1   −1   −1   (+1)(−1)(−1) = +1       (+1)(−1)(−1) = +1

b. Write down the complete defining relation and the aliases from this design.

Defining relation: I = ACE = BDE = ABCD

Aliases: A = CE = ABDE = BCD,  B = ABCE = DE = ACD,
C = AE = BCDE = ABD,  …,  AB = BCE = ADE = CD
Enter the factor levels and yield data into a MINITAB worksheet, then define the experiment using
Stat > DOE > Factorial > Define Custom Factorial Design.

Select Stat > DOE > Factorial > Analyze Factorial Design. Since there is only one replicate of the
experiment, select “Terms” and verify that all main effects and two-factor interaction effects are selected.

Factorial Fit: yield versus A:Temp, B:Matl1, C:Vol, D:Time, E:Matl2

Estimated Effects and Coefficients for yield (coded units)


Term Effect Coef
Constant 19.238
A:Temp -1.525 -0.762
B:Matl1 -5.175 -2.587
C:Vol 2.275 1.138
D:Time -0.675 -0.337
E:Matl2 2.275 1.138
A:Temp*B:Matl1 1.825 0.913
A:Temp*D:Time -1.275 -0.638

Alias Structure
I + A:Temp*C:Vol*E:Matl2 + B:Matl1*D:Time*E:Matl2 +
A:Temp*B:Matl1*C:Vol*D:Time
A:Temp + C:Vol*E:Matl2 + B:Matl1*C:Vol*D:Time +
A:Temp*B:Matl1*D:Time*E:Matl2
B:Matl1 + D:Time*E:Matl2 + A:Temp*C:Vol*D:Time +
A:Temp*B:Matl1*C:Vol*E:Matl2
C:Vol + A:Temp*E:Matl2 + A:Temp*B:Matl1*D:Time +
B:Matl1*C:Vol*D:Time*E:Matl2
D:Time + B:Matl1*E:Matl2 + A:Temp*B:Matl1*C:Vol +
A:Temp*C:Vol*D:Time*E:Matl2
E:Matl2 + A:Temp*C:Vol + B:Matl1*D:Time +
A:Temp*B:Matl1*C:Vol*D:Time*E:Matl2
A:Temp*B:Matl1 + C:Vol*D:Time + A:Temp*D:Time*E:Matl2 +
B:Matl1*C:Vol*E:Matl2
A:Temp*D:Time + B:Matl1*C:Vol + A:Temp*B:Matl1*E:Matl2 +
C:Vol*D:Time*E:Matl2

From the Alias Structure shown in the Session Window, the complete defining relation is:
I = ACE = BDE = ABCD
The aliases are:
A·I = A·ACE = A·BDE = A·ABCD  ⟹  A = CE = ABDE = BCD
B·I = B·ACE = B·BDE = B·ABCD  ⟹  B = ABCE = DE = ACD
C·I = C·ACE = C·BDE = C·ABCD  ⟹  C = AE = BCDE = ABD
AB·I = AB·ACE = AB·BDE = AB·ABCD  ⟹  AB = BCE = ADE = CD
The remaining aliases are calculated in a similar fashion.
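Alias chains can be generated mechanically as well, since multiplying two effect words modulo squared letters is a symmetric difference of their letter sets. A Python sketch (ours):

# Product of two effect words; letters appearing twice cancel (A*ACE = CE).
def multiply(w1, w2):
    return "".join(sorted(set(w1) ^ set(w2))) or "I"

defining = ["ACE", "BDE", "ABCD"]
for word in ["A", "B", "C", "AB"]:
    print(word, "=", " = ".join(multiply(word, d) for d in defining))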

c. Estimate the main effects.


A B C D E yield
-1 -1 -1 -1 1 23.2
1 1 -1 -1 -1 15.5
1 -1 -1 1 -1 16.9
-1 1 1 -1 -1 16.2
-1 -1 1 1 -1 23.8
1 -1 1 -1 1 23.4
-1 1 -1 1 1 16.8
1 1 1 1 1 18.1

[A] = A + CE + BCD + ABDE
    = ¼(−23.2 + 15.5 + 16.9 − 16.2 − 23.8 + 23.4 − 16.8 + 18.1) = ¼(−6.1) = −1.525

[AB] = AB + BCE + ADE + CD
     = ¼(+23.2 + 15.5 − 16.9 − 16.2 + 23.8 − 23.4 − 16.8 + 18.1) = ¼(7.3) = 1.825

These are the same effect estimates provided in the MINITAB output above. The other main effects and interaction effects are calculated in the same way.

[A] = −1.525, [B] = −5.175, [C] = 2.275, [D] = −0.675, [E] = 2.275
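These contrast calculations are easy to automate. A Python sketch (ours) that reproduces the estimates from the eight observed runs:

# Treatment label -> yield, from the table above.
runs = {"e": 23.2, "ab": 15.5, "ad": 16.9, "bc": 16.2,
        "cd": 23.8, "ace": 23.4, "bde": 16.8, "abcde": 18.1}

def sign(word, treatment):
    s = 1
    for letter in word.lower():
        s *= 1 if letter in treatment else -1
    return s

def effect(word):
    contrast = sum(sign(word, t) * y for t, y in runs.items())
    return contrast / (len(runs) / 2)    # divide by n*2^(k-1) = 4

for w in ["A", "B", "C", "D", "E", "AB"]:
    print(w, round(effect(w), 3))        # -1.525, -5.175, 2.275, -0.675, 2.275, 1.825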


8.12. A 2⁴ factorial design has been run in a pilot plant to investigate the effect of four factors on the molecular weight of a polymer. The data from this experiment are as follows (values are coded by dividing by 10).

( 1 )=88 d=86
a=80 ad =81
b=89 bd=85
ab=87 abd =86
c=86 cd =85
ac=81 acd =79
bc=82 bcd=84
abc=80 abcd =81

a. Construct a normal probability plot of the effects. Which effects are active?

Enter the factor levels and yield data into a MINITAB worksheet, then define the experiment using
Stat > DOE > Factorial > Define Custom Factorial Design.

Select Stat > DOE > Factorial > Analyze Factorial Design. Since there is only one replicate of the
experiment, select “Terms” and verify that all main effects and two-factor interaction effects are selected.
Normal Plot of the Effects (response is weight, α = 0.10): [factors A and C are flagged as significant; Lenth’s PSE = 0.1125]

Factors A and C appear to be the only active effects.

b. Construct an appropriate model. Fit this model and test for significant effects.

Factorial Regression: weight versus A, C


Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 2 0.9225 0.46125 10.21 0.002
Linear 2 0.9225 0.46125 10.21 0.002
A 1 0.5625 0.56250 12.45 0.004
C 1 0.3600 0.36000 7.97 0.014
Error 13 0.5875 0.04519
Total 15 1.5100

Model Summary
S R-sq R-sq(adj) R-sq(pred)
0.212585 61.09% 55.11% 41.06%

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 8.3750 0.0531 157.58 0.000
A -0.3750 -0.1875 0.0531 -3.53 0.004 1.00
C -0.3000 -0.1500 0.0531 -2.82 0.014 1.00

Both factors A and C are significant in the model (p-value < 0.05). Both factors have an inverse
relationship with polymer weight.

c. Analyze the residuals from this model by constructing a normal probability plot of the residuals and
plotting the residuals versus the predicted values of y.
[Normal probability plot of the residuals and residuals versus fitted values (response is weight).]

The normal probability plot of the residuals indicates that the normality assumption is reasonable. The plot
of the residuals versus the fitted values indicates the constant variance assumption may not be valid.
However, the difference appears to be small.

8.13. Reconsider the data in Exercise 8.12. Suppose that four center points were added to this experiment. The
molecular weights at the center point are 90, 87, 86, and 93.

a. Analyze the data as you did in Exercise 8.12, but include a test for curvature.

Create a 2⁴ factorial design with four center points in MINITAB, and then enter the data. Select Stat >
DOE > Factorial > Analyze Factorial Design. Select “Terms” and verify that all main effects and
two-factor interactions are selected. Also, DO NOT include the center points in the model (uncheck the
default selection). This will ensure that if both lack of fit and curvature are not significant, the main and
interaction effects are tested for significance against the correct residual error (lack of fit + curvature + pure
error). See the dialog box below.

To summarize MINITAB’s functionality, curvature is always tested against pure error and lack of fit (if
available), regardless of whether center points are included in the model. The inclusion/exclusion of center
points in the model affects the total residual error used to test significance of effects. Assuming that lack of
fit and curvature tests are not significant, all three (curvature, lack of fit, and pure error) should be included
in the residual mean square.

When looking at results in the ANOVA table, the first test to consider is the “lack of fit” test, which is a
test of significance for terms not included in the model (in this exercise, the three-factor and four-factor
interactions). If lack of fit is significant, the model is not correctly specified, and some terms need to be
added to the model.

If lack of fit is not significant, the next test to consider is the “curvature” test, which is a test of significance
for the pure quadratic terms. If this test is significant, no further statistical analysis should be performed
because the model is inadequate.
If tests for both lack of fit and curvature are not significant, then it is reasonable to pool the curvature, pure
error, and lack of fit (if available) and use this as the basis for testing for significant effects. (In MINITAB,
this is accomplished by not including center points in the model.)

Factorial Fit: Mole Wt versus A, B, C, D

Estimated Effects and Coefficients for Mole Wt (coded units)


Term Effect Coef SE Coef T P
Constant 848.00 8.521 99.52 0.000
A -37.50 -18.75 9.527 -1.97 0.081
B 10.00 5.00 9.527 0.52 0.612
C -30.00 -15.00 9.527 -1.57 0.150
D -7.50 -3.75 9.527 -0.39 0.703
A*B 22.50 11.25 9.527 1.18 0.268
A*C -2.50 -1.25 9.527 -0.13 0.898
A*D 5.00 2.50 9.527 0.26 0.799
B*C -20.00 -10.00 9.527 -1.05 0.321
B*D 2.50 1.25 9.527 0.13 0.898
C*D 7.50 3.75 9.527 0.39 0.703

Analysis of Variance for Mole Wt (coded units)
Source DF Seq SS Adj SS Adj MS F P
Main Effects 4 9850 9850 2462.5 1.70 0.234
2-Way Interactions 6 4000 4000 666.7 0.46 0.822
Residual Error 9 13070 13070 1452.2
Curvature 1 8820 8820 8820.0 16.60 0.004 ***
Lack of Fit 5 1250 1250 250.0 0.25 0.915
Pure Error 3 3000 3000 1000.0
Total 19 26920

Only factor A and curvature are significant at a 0.10 significance level.

b. If curvature is significant in an experiment such as this one, describe what strategy you would pursue next
to improve your model of the process.

The test for curvature is significant (p-value = 0.004). Although one could pick a “winning
combination” from the experimental runs, a better strategy is to add runs that would enable estimation of
the quadratic effects. Adding axial runs in a sequential experiment would provide the means to estimate
these quadratic effects.

8.14. An engineer has performed an experiment to study the effect of four factors on the surface roughness of a
machined part. The factors (and their levels) are A = tool angle (12, 15), B = cutting fluid viscosity (300,
400), C = feed rate (10, 15 in/min), and D = cutting fluid cooler used (no, yes). The data from this
experiment (with the factors coded to the usual +1, -1 levels) are shown in Table 8E.4.

Surface
Run A B C D
Roughness
1 - - - - 0.00340
2 + - - - 0.00362
3 - + - - 0.00301
4 + + - - 0.00182
5 - - + - 0.00280
6 + - + - 0.00290
7 - + + - 0.00252
8 + + + - 0.00160
9 - - - + 0.00336
10 + - - + 0.00344
11 - + - + 0.00308
12 + + - + 0.00184
13 - - + + 0.00269
14 + - + + 0.00284
15 - + + + 0.00253
16 + + + + 0.00163

a. Estimate the factor effects. Plot the effect estimates on a normal probability plot and select a tentative
model.

A = (1/(n·2^(k−1)))·(a + ab + ac + abc + ad + abd + acd + abcd − (1) − b − c − d − bc − bd − cd − bcd) = (1/8)(−0.00370) = −0.000463

Similarly, the other effects can be estimated:
A = −0.000463, B = −0.000878, C = −0.000508, D = −0.000032,
AB = −0.000600, AC = 0.000070, AD = −0.000015, BC = 0.000065, BD = 0.000065, CD = 0,
ABC = 0.000082, ABD = 0.000008, ACD = 0.000033, BCD = −0.000013, ABCD = −0.000015

Normal Plot of the Standardized Effects (response is Surface Roughness, α = 0.10): [A, B, C, AB, and BC are flagged as significant]

Factors A, B, C, AB, and BC appear to be active. We select these variables for the preliminary model.

b. Fit the model identified in part (a) and analyze the residuals. Is there any indication of model inadequacy?

Factorial Regression: Surface Roughness versus A, B, C


Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 5 0.000006 0.000001 173.32 0.000
Linear 3 0.000005 0.000002 221.22 0.000
A 1 0.000001 0.000001 114.35 0.000
B 1 0.000003 0.000003 411.63 0.000
C 1 0.000001 0.000001 137.68 0.000
2-Way Interactions 2 0.000002 0.000001 101.46 0.000
A*B 1 0.000001 0.000001 192.45 0.000
B*C 1 0.000000 0.000000 10.48 0.009
Error 10 0.000000 0.000000
Total 15 0.000007

Model Summary

S R-sq R-sq(adj) R-sq(pred)


0.0000865 98.86% 98.29% 97.08%

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 0.002693 0.000022 124.51 0.000
A -0.000463 -0.000231 0.000022 -10.69 0.000 1.00
B -0.000878 -0.000439 0.000022 -20.29 0.000 1.00
C -0.000508 -0.000254 0.000022 -11.73 0.000 1.00
A*B -0.000600 -0.000300 0.000022 -13.87 0.000 1.00
B*C 0.000140 0.000070 0.000022 3.24 0.009 1.00

[Normal probability plot of the residuals and residuals versus fitted values (response is Surface Roughness).]

The residual plots indicate that the normality assumption may not be reasonable.

c. Repeat the analysis from parts (a) and (b) using 1/y as the response variable. Is there an indication that the
transformation has been useful?

To add the transformed variable to the worksheet select MTB > Calc > Calculator and enter the
appropriate equation into the expression box. See the dialog box below:
MTB > Stat > DOE > Factorial > Analyze Factorial Design

Use the transformed variable as the response variable. Since this is a 2⁴ factorial design with one replicate, we combine the higher order interaction terms as an estimate of error and leave the main effects and all two-factor interactions.

Normal Plot of the Standardized Effects (response is 1/Surface Roughness, α = 0.10): [A, B, C, AB, and BD are flagged as significant]

With the transformed variable, effects A, B, C, AB, and BD appear to be active. Note that none of the
higher order interactions appear to be significant. We include in the model: A, B, C, D, AB, and BD. We
include D in the model to maintain hierarchy.

Factorial Regression: 1/SurfaceRoughness versus A, B, C, D

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 6 206176 34362.7 3029.64 0.000
Linear 4 150770 37692.6 3323.23 0.000
A 1 42611 42610.9 3756.86 0.000
B 1 89386 89386.3 7880.88 0.000
C 1 18762 18762.3 1654.21 0.000
D 1 11 11.0 0.97 0.351
2-Way Interactions 2 55406 27702.8 2442.46 0.000
A*B 1 55130 55129.6 4860.59 0.000
B*D 1 276 275.9 24.32 0.001
Error 9 102 11.3
Total 15 206278
Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 397.807 0.842 472.48 0.000
A 103.212 51.606 0.842 61.29 0.000 1.00
B 149.488 74.744 0.842 88.77 0.000 1.00
C 68.488 34.244 0.842 40.67 0.000 1.00
D 1.656 0.828 0.842 0.98 0.351 1.00
A*B 117.398 58.699 0.842 69.72 0.000 1.00
B*D -8.305 -4.152 0.842 -4.93 0.001 1.00

[Normal probability plot of the residuals and residuals versus fitted values (response is 1/Surface Roughness).]

The normality assumption is reasonable. The constant variance assumption also seems reasonable.

d. Fit the model in terms of the coded variables that you think can be used to provide the best predictions of
the surface roughness. Convert this prediction equation into a model in the natural variables.

Using the transformed variable 1/y, we have (in coded units):

1/ŷ = 397.807 + 51.606·x1 + 74.744·x2 + 34.244·x3 + 0.828·x4 + 58.699·x1·x2 − 4.152·x2·x4

Consider the uncoded variables:

x1 = (angle − 13.5)/1.5,  x2 = (viscosity − 350)/50,  x3 = (feed rate − 12.5)/2.5,  x4 = cooler
In uncoded units, the model becomes:

1/ŷ = 397.807 + 51.606·(angle − 13.5)/1.5 + 74.744·(viscosity − 350)/50 + 34.244·(feed rate − 12.5)/2.5 + 0.828·cooler + 58.699·[(angle − 13.5)/1.5]·[(viscosity − 350)/50] − 4.152·[(viscosity − 350)/50]·cooler

1/ŷ = 2937.0 − 239.53·angle − 9.071·viscosity + 13.698·feed rate + 29.90·cooler + 0.7827·angle·viscosity − 0.0830·viscosity·cooler
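The conversion from coded to natural units can be verified symbolically; a sketch using sympy (ours; variable names are illustrative):

import sympy as sp

angle, visc, feed, cooler = sp.symbols("angle visc feed cooler")
x1 = (angle - 13.5) / 1.5
x2 = (visc - 350) / 50
x3 = (feed - 12.5) / 2.5
model = (397.807 + 51.606*x1 + 74.744*x2 + 34.244*x3 + 0.828*cooler
         + 58.699*x1*x2 - 4.152*x2*cooler)
print(sp.expand(model))   # constant ~ 2937, angle coefficient ~ -239.5, etc.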
8.15. An experiment was run to study the effect of two factors, time and temperature, on the inorganic impurity
levels in paper pulp. The results of this experiment are shown in Table 8E.5:

x1      x2      y
−1      −1      210
1       −1      95
−1      1       218
1       1       100
−1.5    0       225
1.5     0       50
0       −1.5    175
0       1.5     180
0       0       145
0       0       175
0       0       158
0       0       166

a. What type of experimental design has been used in this study?

This design is a CCD with k = 2 and α = 1.5. The design is not rotatable (rotatability for k = 2 would require α = √2 ≈ 1.414).

b. Fit a quadratic model to the response, using the method of least squares.

Enter the factor levels and response data into a MINITAB worksheet, including a column indicating
whether a run is a center point run (1 = not center point, 0 = center point). Then define the experiment
using Stat > DOE > Response Surface > Define Custom Response Surface Design. Select
Stat > DOE > Response Surface > Analyze Response Surface Design. Select “Terms” and
verify that all main effects, two-factor interactions, and quadratic terms are selected.

Response Surface Regression: y versus x1, x2

The analysis was done using coded units.


Estimated Regression Coefficients for y
Term Coef SE Coef T P
Constant 160.868 4.555 35.314 0.000
x1 -87.441 4.704 -18.590 0.000
x2 3.618 4.704 0.769 0.471
x1*x1 -24.423 7.461 -3.273 0.017
x2*x2 15.577 7.461 2.088 0.082
x1*x2 -1.688 10.285 -0.164 0.875

Analysis of Variance for y
Source DF Seq SS Adj SS Adj MS F P
Regression 5 30583.4 30583.4 6116.7 73.18 0.000
Linear 2 28934.2 28934.2 14467.1 173.09 0.000
Square 2 1647.0 1647.0 823.5 9.85 0.013
Interaction 1 2.3 2.3 2.3 0.03 0.875
Residual Error 6 501.5 501.5 83.6
Lack-of-Fit 3 15.5 15.5 5.2 0.03 0.991
Pure Error 3 486.0 486.0 162.0
Total 11 31084.9

Estimated Regression Coefficients for y using data in uncoded units
Term Coef
Constant 160.8682
x1 -58.2941
x2 2.4118
x1*x1 -10.8546
x2*x2 6.9231
x1*x2 -0.7500

The model is y = 160.9 − 58.3·x1 + 2.4·x2 − 10.9·x1² + 6.9·x2² − 0.75·x1·x2. Notice that x2 and x1·x2 are not significant using α = 0.10.

c. Construct the fitted impurity response surface. What values of x1 and x2 would you recommend if you wanted to minimize the impurity level?

Stat > DOE > Response Surface > Contour/Surface Plots

[Contour and surface plots of y versus x1 and x2. A flag on the contour plot marks x1 = 1.49384, x2 = −0.217615, ŷ = 49.6101.]

From visual examination of the contour and surface plots, it appears that the minimum impurity level can be achieved by setting x1 (temperature) = +1.5 and letting x2 (time) range from −1.5 to +1.5. The range for x2 agrees with the ANOVA results indicating that it is statistically insignificant (p-value = 0.471). The level for time could be established based on other considerations, such as cost. A flag is planted at one option on the contour plot above.

d. Suppose that x1 = (temp − 750)/50 and x2 = (time − 30)/15, where temperature is in °C and time is in hours. Find the optimum operating conditions in terms of the natural variables temperature and time.

Temp = 50·x1 + 750 = 50·(+1.50) + 750 = 825

Time = 15·x2 + 30 = 15·(−0.1061) + 30 = 28.4085

8.16. An article in Rubber Chemistry and Technology (Vol. 47, 1974, pp. 825–836) describes an experiment that studies the relationship of the Mooney viscosity of rubber to several variables, including silica filler (parts per hundred) and oil filler (parts per hundred). Some of the data from this experiment are shown in Table 8E.6, where x1 = (silica − 60)/15 and x2 = (oil − 21)/1.5.

x1 x2 y
-1 -1 13.71
1 -1 14.15
-1 1 12.87
1 1 13.53
-1.4 0 12.99
1.4 0 13.89
0 -1.4 14.16
0 1.4 12.9
0 0 13.75
0 0 13.66
0 0 13.86
0 0 13.63
0 0 13.74

a. What type of experimental design has been used? Is it rotatable?

The design is a CCD with k = 2 and α = 1.4, which is approximately rotatable (exact rotatability requires α = √2 ≈ 1.414).

b. Fit a quadratic model to these data. What values of x 1 and x 2 will maximize the
Mooney viscosity?

Since the standard order is provided, one approach to solving this exercise is to create a two-factor response
surface design in MINITAB, then enter the data. Select Stat > DOE > Response Surface > Create
Response Surface Design. Leave the design type as a 2-factor, central composite design. Select
“Designs”, highlight the design with five center points (13 runs), and enter a custom alpha value of exactly 1.4 (the rotatable design has α = 1.41421). The worksheet is in run order; to change to standard order (and
ease data entry) select Stat > DOE > Display Design and choose standard order. To analyze the
experiment, select Stat > DOE > Response Surface > Analyze Response Surface Design.
Select “Terms” and verify that a full quadratic model (A, B, A2, B2, AB) is selected.

Response Surface Regression: y versus x1, x2

The analysis was done using coded units.


Estimated Regression Coefficients for y
Term Coef SE Coef T P
Constant 13.7273 0.04309 318.580 0.000
x1 0.2980 0.03424 8.703 0.000
x2 -0.4071 0.03424 -11.889 0.000
x1*x1 -0.1249 0.03706 -3.371 0.012
x2*x2 -0.0790 0.03706 -2.132 0.070
x1*x2 0.0550 0.04818 1.142 0.291

Analysis of Variance for y
Source DF Seq SS Adj SS Adj MS F P
Regression 5 2.16128 2.16128 0.43226 46.56 0.000
Linear 2 2.01563 2.01563 1.00781 108.54 0.000
Square 2 0.13355 0.13355 0.06678 7.19 0.020
Interaction 1 0.01210 0.01210 0.01210 1.30 0.291
Residual Error 7 0.06499 0.06499 0.00928
Lack-of-Fit 3 0.03271 0.03271 0.01090 1.35 0.377
Pure Error 4 0.03228 0.03228 0.00807
Total 12 2.22628

Estimated Regression Coefficients for y using data in uncoded units
Term Coef
Constant 13.7273
x1 0.2980
x2 -0.4071
x1*x1 -0.1249
x2*x2 -0.0790
x1*x2 0.0550

ŷ = 13.7273 + 0.2980·x1 − 0.4071·x2 + 0.0550·x1·x2 − 0.1249·x1² − 0.0790·x2²

where x1·x2 is not significant using α = 0.10.

Values of x1 and x2 maximizing the Mooney viscosity can be found from visual examination of the contour and surface plots, or using MINITAB’s Response Optimizer.

Stat > DOE > Response Surface > Contour/Surface Plots

[Contour and surface plots of y versus x1 and x2, with contour bands from <12.00 to >14.25.]

Stat > DOE > Response Surface > Response Optimizer

In Setup, let Goal = maximize, Lower = 10, Target = 20, and Weight = 7.

From the plots and the optimizer, setting x1 in a range from 0 to +1.4 and setting x2 between −1 and −1.4 will maximize viscosity.

8.17. In their book Empirical Model Building and Response Surfaces (John Wiley, 1987), G. E. P. Box and N. R.
Draper describe an experiment with three factors. The data shown in Table 8E.7 are a variation of the
original experiment on p. 247 of their book. Suppose that these data were collected in a semiconductor
manufacturing process.
x1 x2 x3 y1 y2
-1 -1 -1 24.00 12.49
0 -1 -1 120.33 8.39
1 -1 -1 213.67 42.83
-1 0 -1 86.00 3.46
0 0 -1 136.63 80.41
1 0 -1 340.67 16.17
-1 1 -1 112.33 27.57
0 1 -1 256.33 4.62
1 1 -1 271.67 23.63
-1 -1 0 81.00 0.00
0 -1 0 101.67 17.67
1 -1 0 357.00 32.91
-1 0 0 171.33 15.01
0 0 0 372.00 0.00
1 0 0 501.67 92.50
-1 1 0 264.00 63.50
0 1 0 427.00 88.61
1 1 0 730.67 21.08
-1 -1 1 220.67 133.82
0 -1 1 239.67 23.46
1 -1 1 422.00 18.52
-1 0 1 199.00 29.44
0 0 1 485.33 44.67
1 0 1 673.67 158.21
-1 1 1 176.67 55.51
0 1 1 501.00 138.94
1 1 1 1010.00 142.45

a. The response y1 is the average of three readings on resistivity for a single wafer. Fit a quadratic model to this response.

Enter the data into the worksheet. Then define the design as a response surface design by going to Stat >
DOE > Response Surface > Define Custom Response Surface Design
Select the three factors. To analyze the experiment, select Stat > DOE > Response Surface >
Analyze Response Surface Design. Select y 1 as the response variable. Select “Terms” and verify
that a full quadratic model (A, B, C, A2, B2, C2, AB, AC, BC) is selected.

Response Surface Regression: y1 versus x1, x2, x3


Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 9 1248237 138693 23.94 0.000
Linear 3 1090558 363519 62.74 0.000
x1 1 563929 563929 97.33 0.000
x2 1 215531 215531 37.20 0.000
x3 1 311097 311097 53.69 0.000
Square 3 14219 4740 0.82 0.502
x1*x1 1 6146 6146 1.06 0.317
x2*x2 1 3006 3006 0.52 0.481
x3*x3 1 5066 5066 0.87 0.363
2-Way Interaction 3 143461 47820 8.25 0.001
x1*x2 1 52317 52317 9.03 0.008
x1*x3 1 68350 68350 11.80 0.003
x2*x3 1 22794 22794 3.93 0.064
Error 17 98498 5794
Total 26 1346735

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 327.6 38.8 8.45 0.000
x1 354.0 177.0 17.9 9.87 0.000 1.00
x2 218.9 109.4 17.9 6.10 0.000 1.00
x3 262.9 131.5 17.9 7.33 0.000 1.00
x1*x1 64.0 32.0 31.1 1.03 0.317 1.00
x2*x2 -44.8 -22.4 31.1 -0.72 0.481 1.00
x3*x3 -58.1 -29.1 31.1 -0.94 0.363 1.00
x1*x2 132.1 66.0 22.0 3.00 0.008 1.00
x1*x3 150.9 75.5 22.0 3.43 0.003 1.00
x2*x3 87.2 43.6 22.0 1.98 0.064 1.00

Regression Equation in Coded Units:

y1 = 327.6 + 177.0·x1 + 109.4·x2 + 131.5·x3 + 32.0·x1² − 22.4·x2² − 29.1·x3² + 66.0·x1·x2 + 75.5·x1·x3 + 43.6·x2·x3

where the quadratic terms are not significant at α = 0.10.

b. The response y2 is the standard deviation of the three resistivity measurements. Fit a first-order model to this response.

To analyze the experiment, select Stat > DOE > Response Surface > Analyze Response Surface
Design. Select y 2 as the response variable. Select “Terms” and change model to Linear, (A, B, C).
Response Surface Regression: y2 versus x1, x2, x3

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 3 21957 7319 4.45 0.013
Linear 3 21957 7319 4.45 0.013
x1 1 2392 2392 1.45 0.240
x2 1 4226 4226 2.57 0.123
x3 1 15339 15339 9.32 0.006
Error 23 37864 1646
Total 26 59821

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value VIF


Constant 48.00 7.81 6.15 0.000
x1 23.06 11.53 9.56 1.21 0.240 1.00
x2 30.65 15.32 9.56 1.60 0.123 1.00
x3 58.38 29.19 9.56 3.05 0.006 1.00

Regression Equation in Uncoded Units:

y2 = 48.0 + 11.53·x1 + 15.32·x2 + 29.19·x3

where x1 and x2 are not significant at α = 0.10.

c. Where would you recommend that we set x1, x2, and x3 if the objective is to hold mean resistivity at 500 and minimize the standard deviation?

Using the MINITAB Response Optimizer, we can determine where to set x1, x2, and x3.

MTB > Stat > DOE > Response Surface > Response Optimizer
Select Minimize for y2, since we want to minimize the standard deviation. Select Target for y1 and enter 500. Click “View Model” to ensure the reduced models from parts (a) and (b) are correct. Click OK.
[Response optimizer output: composite desirability D = 0.9047 at x1 = 1.0, x2 = 0.9880, x3 = −0.6600; y2 minimized at 28.7275 (d = 0.81842), y1 = 500.0002 (d = 1.0000).]

Setting x1 = 1, x2 between 0.95 and 1.0, and x3 = −0.66, the standard deviation is minimized and the target value is close to 500.

8.18. The data shown in Table 8E.8 were collected in an experiment to optimize crystal growth as a function of three variables x1, x2, and x3. Large values of y (yield in grams) are desirable. Fit a second-order model and analyze the fitted surface. Under what setup conditions is maximum growth achieved?

x1 x2 x3 y
-1 -1 -1 66
-1 -1 1 70
-1 1 -1 78
-1 1 1 60
1 -1 -1 80
1 -1 1 70
1 1 -1 100
1 1 1 75
-1.682 0 0 65
1.682 0 0 82
0 -1.682 0 68
0 1.682 0 63
0 0 -1.682 100
0 0 1.682 80
0 0 0 83
0 0 0 90
0 0 0 87
0 0 0 88
0 0 0 91
0 0 0 85

Select Stat > DOE > Response Surface > Create Response Surface Design. Change to a 3
factor central composite design. Select “Designs”, highlight the design with six center points (20 runs).
Leave α =1.682. Enter the data into the created worksheet. To analyze the experiment, select Stat >
DOE > Response Surface > Analyze Response Surface Design. Select “Terms” and verify that
a full quadratic model (A, B, C, A2, B2, C2, AB, AC, BC) is selected.

Response Surface Regression: y versus A, B, C

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 9 2499.29 277.699 20.17 0.000
Linear 3 989.17 329.723 23.95 0.000
A 1 463.84 463.844 33.69 0.000
B 1 25.31 25.308 1.84 0.205
C 1 500.02 500.019 36.32 0.000
Square 3 1217.74 405.914 29.49 0.000
A*A 1 368.65 368.654 26.78 0.000
B*B 1 896.27 896.266 65.11 0.000
C*C 1 8.68 8.675 0.63 0.446
2-Way Interaction 3 292.38 97.458 7.08 0.008
A*B 1 66.13 66.125 4.80 0.053
A*C 1 55.12 55.125 4.00 0.073
B*C 1 171.13 171.125 12.43 0.005
Error 10 137.66 13.766
Lack-of-Fit 5 92.33 18.466 2.04 0.227
Pure Error 5 45.33 9.067
Total 19 2636.95

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value


Constant 87.36 1.51 57.73 0.000
A 11.66 5.83 1.00 5.80 0.000
B 2.72 1.36 1.00 1.36 0.205
C -12.10 -6.05 1.00 -6.03 0.000
A*A -10.116 -5.058 0.977 -5.17 0.000
B*B -15.772 -7.886 0.977 -8.07 0.000
C*C 1.552 0.776 0.977 0.79 0.446
A*B 5.75 2.88 1.31 2.19 0.053
A*C -5.25 -2.62 1.31 -2.00 0.073
B*C -9.25 -4.63 1.31 -3.53 0.005

Regression Equation in Coded Units:

y = 87.36 + 5.83·x1 + 1.36·x2 − 6.05·x3 − 5.058·x1² − 7.886·x2² + 0.776·x3² + 2.88·x1·x2 − 2.62·x1·x3 − 4.63·x2·x3

where x2 and x3² are not significant at the α = 0.05 level. To maintain hierarchy in the model, we leave x2 in the model since interaction effects with x2 are significant.

Consider contour plots of the response. Stat > DOE > Response Surface > Contour Plot. Select
“Generate plots for all pairs of continuous variables”. Click “View Model” to ensure the correct
model is considered.
[Contour plots of y for each pair of factors (B×A, C×A, C×B), with the remaining factor held at 0; contour bands range from <40 to >100.]

Stat > DOE > Response Surface > Response Optimizer


In Setup, let Goal=maximize

Using Minitab Optimizer, we can determine the optimal levels of the factors to maximize y.

From the plots and the optimizer, setting x1 in a range from +1 to +1.682, setting x2 between 0.5 and 1.2, and setting x3 to its smallest value will maximize yield.

8.19. Reconsider the crystal growth experiment from Exercise 8.18. Suppose that x3 = z is now a noise variable, and that the modified experimental design shown in Table 8E.9 has been conducted. The experimenters want the growth rate to be as large as possible, but they also want the variability transmitted from z to be small. Under what set of conditions is growth greater than 90 with minimum variability achieved?

x1 x2 z y
-1 -1 -1 66
-1 -1 1 70
-1 1 -1 78
-1 1 1 60
1 -1 -1 80
1 -1 1 70
1 1 -1 100
1 1 1 75
-1.682 0 0 65
1.682 0 0 82
0 -1.682 0 68
0 1.682 0 63
0 0 0 83
0 0 0 90
0 0 0 87
0 0 0 88
0 0 0 91
0 0 0 85

Enter the data in the worksheet. Stat > DOE > Response Surface > Define Custom Response Surface Design. Enter the three variables x1, x2, z. To analyze the experiment, go to Stat > DOE > Response Surface > Analyze Response Surface Design. Select the response variable y. Select “Terms”. For robust parameter design, include main effects of all variables and all two-way interactions between variables.

Response Surface Regression: y versus x1, x2, z

Analysis of Variance

Source DF Adj SS Adj MS F-Value P-Value


Model 8 2034.85 254.356 16.88 0.000
Linear 3 789.26 263.088 17.45 0.000
x1 1 463.84 463.837 30.77 0.000
x2 1 25.30 25.303 1.68 0.227
z 1 300.12 300.125 19.91 0.002
Square 2 953.21 476.604 31.62 0.000
x1*x1 1 313.14 313.136 20.78 0.001
x2*x2 1 783.03 783.035 51.95 0.000
2-Way Interaction 3 292.38 97.458 6.47 0.013
x1*x2 1 66.13 66.125 4.39 0.066
x1*z 1 55.12 55.125 3.66 0.088
x2*z 1 171.12 171.125 11.35 0.008
Error 9 135.65 15.072
Lack-of-Fit 4 90.32 22.580 2.49 0.172
Pure Error 5 45.33 9.067
Total 17 2170.50

Coded Coefficients

Term Effect Coef SE Coef T-Value P-Value


Constant 87.36 1.54 56.68 0.000
x1 11.66 5.83 1.05 5.55 0.000
x2 2.72 1.36 1.05 1.30 0.227
z -12.25 -6.12 1.37 -4.46 0.002
x1*x1 -9.73 -4.86 1.07 -4.56 0.001
x2*x2 -15.38 -7.69 1.07 -7.21 0.000
x1*x2 5.75 2.88 1.37 2.09 0.066
x1*z -5.25 -2.62 1.37 -1.91 0.088
x2*z -9.25 -4.62 1.37 -3.37 0.008
[Normal probability plot of the residuals and residuals versus fitted values (response is y).]

The normality assumption is reasonable, as is the assumption of constant variance.

Regression equation in coded units:

y = 87.361 + 5.8275·x1 + 1.3611·x2 − 4.8642·x1² − 7.6919·x2² + 2.8750·x1·x2 − 6.1250·z − 2.6250·x1·z − 4.6250·x2·z

where the mean effect of x2 is not statistically significant, but we leave it in the model to maintain hierarchy, since interaction effects involving x2 are significant at α = 0.10.

Now specify the mean and variance model to determine when the growth is greater than 90 with minimum
variability.
y = β0 + β1·x1 + β2·x2 + β11·x1² + β22·x2² + β12·x1·x2 + γ1·z1 + δ11·x1·z1 + δ21·x2·z1 + ε

E[y] = β0 + β1·x1 + β2·x2 + β11·x1² + β22·x2² + β12·x1·x2

Var(y) = Var(β0 + β1·x1 + β2·x2 + β11·x1² + β22·x2² + β12·x1·x2 + γ1·z1 + δ11·x1·z1 + δ21·x2·z1 + ε)
       = (γ1 + δ11·x1 + δ21·x2)²·σz² + σ²

Replace the unknown regression coefficients in the mean and variance models with their estimates, and replace σ² in the variance model by the residual mean square found when fitting the response model. Assume that the low and high values of the noise variable z have been run at one standard deviation on either side of its typical or average value, so that σz = 1 and σ̂² = MSE = 15.072.

E[y] = 87.361 + 5.8275·x1 + 1.3611·x2 − 4.8642·x1² − 7.6919·x2² + 2.8750·x1·x2

Var(y) = (−6.1250 − 2.6250·x1 − 4.6250·x2)²·σz² + σ²
       = 52.5876 + 32.1563·x1 + 6.8906·x1² + 56.6563·x2 + 21.3906·x2² + 24.2813·x1·x2
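Candidate settings can be screened against both criteria by evaluating the two fitted models directly; a minimal Python sketch (ours):

# Mean and variance models from above (coded units, sigma_z = 1).
def mean_growth(x1, x2):
    return (87.361 + 5.8275*x1 + 1.3611*x2 - 4.8642*x1**2
            - 7.6919*x2**2 + 2.8750*x1*x2)

def var_growth(x1, x2, sigma_z2=1.0, mse=15.072):
    return (-6.1250 - 2.6250*x1 - 4.6250*x2)**2 * sigma_z2 + mse

print(mean_growth(0.5, 0.2), var_growth(0.5, 0.2))   # ~89.3, ~85.0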

MTB > Stat > DOE > Response Surface > Surface Plot, Contour Plot
[Surface plot and contour plot of growth versus x1 and x2.]

To reach target growth of 90 and minimum variability, x 1 values near 0.50 and x 2 values near 0.2 are appropriate.
CHAPTER 9

Reliability
Learning Objectives

After completing this chapter you should be able to:


1. Know the definition of reliability
2. Explain how reliability is one of the eight dimensions of quality
3. Understand the probability distributions associated with reliability analysis
4. Understand the concept of failure rate
5. Perform reliability calculations for systems based on knowledge of the reliability of components
6. Understand the concepts of maintainability and availability
7. Understand failure mode and effects analysis

Important Terms and Concepts


Availability, Exponential distribution, Failure mode and effects analysis (FMEA), Failure rate, Failure-terminated tests, Hazard function, k-out-of-n system, Life cycle reliability, Life distribution, Maintainability, Mean time between failures (MTBF), Mean time to failure (MTTF), Mean time to repair (MTTR), Parallel system, Reliability, Reliability function, Risk priority number (RPN), Series system, Stand-by redundant system, Survival distribution, Testing with replacement, Testing without replacement, Time-terminated tests, Time-to-failure distribution, Weibull distribution

Exercises

9.1. An electronic component in a dental x-ray system has an exponential time to failure distribution with
λ=0.00004 . What are the mean and variance of the time to failure? What is the reliability at 30,000
hours?

t ~ exp(λ = 0.00004)

Mean time to failure: E(t) = 1/λ = 1/0.00004 = 25,000 hours

Variance of the time to failure: V(t) = 1/λ² = 1/(0.00004)² = 6.25 × 10⁸ hours²

Reliability at 30,000 hours:

R(30,000) = P(t > 30,000) = e^(−λt) = e^(−0.00004·30,000) = e^(−1.2) = 0.3012
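A quick numeric check of these exponential calculations (ours, in Python):

import math

lam = 0.00004
print(1 / lam)                   # mean: 25,000 hours
print(1 / lam ** 2)              # variance: 6.25e8 hours^2
print(math.exp(-lam * 30000))    # R(30,000) = 0.3012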


9.2. Reconsider the component described in Exercise 9.1. What is the reliability of the component at 25,000
hours?
Reliability at 25,000 hours:

$R(25,000) = P(t > 25,000) = e^{-\lambda t} = e^{-0.00004 \times 25,000} = e^{-1} = 0.3679$

9.3. A component in an automobile door latch has an exponential time to failure distribution with λ=0.00125
per cycle. What are the mean and variance of the number of cycles to failure? What is the reliability at 8,000
cycles?

$t \sim \exp(\lambda = 0.00125)$

Mean number of cycles to failure: $E(t) = 1/\lambda = 1/0.00125 = 800$ cycles

Variance of the cycles to failure: $V(t) = 1/\lambda^2 = 1/0.00125^2 = 640,000$ cycles$^2$

Reliability at 8,000 cycles:

$R(8,000) = P(t > 8,000) = e^{-\lambda t} = e^{-0.00125 \times 8,000} = e^{-10} = 4.54 \times 10^{-5}$
9.4. You are designing a system that must have reliability at least 0.95 at 10,000 hours of operation. If you can
reasonably assume that the time to failure is exponential, what MTTF must you achieve in order to satisfy
the reliability requirement?

$R(10,000) \ge 0.95 \implies P(t > 10,000) \ge 0.95 \implies e^{-10,000\lambda} \ge 0.95$

$\implies -10,000\lambda \ge \ln(0.95) \implies \lambda \le \frac{-\ln(0.95)}{10,000} = 5.129 \times 10^{-6}$

Mean time to failure: $MTTF = 1/\lambda \ge 1/(5.129 \times 10^{-6}) = 194,957$ hours

NOTE: The mean time to failure is sensitive to the number of significant digits used in
intermediate calculations.
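A quick numerical check of this requirement in plain Python, which also sidesteps the rounding sensitivity mentioned in the note:

import math

R_target, t = 0.95, 10_000
lam_max = -math.log(R_target) / t   # largest failure rate meeting R(10,000) >= 0.95
mttf_min = 1 / lam_max              # smallest acceptable MTTF

print(lam_max)                  # ≈ 5.129e-06 failures per hour
print(mttf_min)                 # ≈ 194,957 hours
print(math.exp(-t / mttf_min))  # back-check: reliability at 10,000 hours = 0.95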

9.5. A synthetic fiber is stressed by repeatedly applying a particular load. Suppose that the number of cycles to
failure has an exponential distribution with mean 3,000 cycles. What is the probability that the fiber will
break at 1,500 cycles? What is the probability that the fiber will break at 2,500 cycles?

$t \sim \exp(\lambda = 1/3,000)$

Probability that the fiber will break by 1,500 cycles:

$1 - R(1,500) = 1 - P(t > 1,500) = 1 - e^{-1,500/3,000} = 1 - e^{-1/2} \approx 0.3935$

Probability that the fiber will break by 2,500 cycles:

$1 - R(2,500) = 1 - P(t > 2,500) = 1 - e^{-2,500/3,000} = 1 - e^{-5/6} \approx 0.5654$

9.6. An electronic component has an exponential time to failure distribution with λ=0.0002 per hour. What are
the mean and variance of the time to failure? What is the reliability at 7,000 hours?

$t \sim \exp(\lambda = 0.0002)$

Mean time to failure: $E(t) = 1/\lambda = 1/0.0002 = 5,000$ hours

Variance of the time to failure: $V(t) = 1/\lambda^2 = 1/0.0002^2 = 25,000,000$ hours$^2$

Reliability at 7,000 hours:

$R(7,000) = P(t > 7,000) = e^{-\lambda t} = e^{-0.0002 \times 7,000} = e^{-1.4} = 0.2466$

9.7. The life in hours of a mechanical assembly has a Weibull distribution with δ=5,000 and β=1/2. Find the
mean and variance of the time to failure. What is the reliability of the assembly at 4,000 hours? What is
the reliability at 7,500 hours?

$t \sim \mathrm{Weibull}(\delta = 5,000,\ \beta = 1/2)$

Mean time to failure:

$E(t) = \delta\,\Gamma(1 + 1/\beta) = 5,000\,\Gamma(3) = 5,000 \times 2! = 10,000$ hours

Variance of the time to failure:

$V(t) = \delta^2\,\Gamma(1 + 2/\beta) - [\delta\,\Gamma(1 + 1/\beta)]^2 = 5,000^2\,\Gamma(5) - 10,000^2 = 5 \times 10^8$ hours$^2$

Reliability at 4,000 hours:

$R(4,000) = P(t > 4,000) = e^{-(t/\delta)^\beta} = e^{-(4,000/5,000)^{1/2}} \approx 0.4088$

Reliability at 7,500 hours:

$R(7,500) = P(t > 7,500) = e^{-(7,500/5,000)^{1/2}} \approx 0.2938$

9.8. The life in hours of a subsystem in an appliance has a Weibull distribution with δ=7,000 and
β=1.5. What is the reliability of the appliance subsystem at 10,000 hours?

$t \sim \mathrm{Weibull}(\delta = 7,000,\ \beta = 1.5)$

Reliability at 10,000 hours:

$R(10,000) = P(t > 10,000) = e^{-(t/\delta)^\beta} = e^{-(10,000/7,000)^{1.5}} \approx 0.1813$

9.9. Suppose that a lifetime of a component has a Weibull distribution with shape parameter β=2. If the
system should have reliability 0.99 at 7,500 hours of use, what value of the scale parameter is required?

$t \sim \mathrm{Weibull}(\delta = ?,\ \beta = 2)$

$R(7,500) = e^{-(7,500/\delta)^2} = 0.99$

$-(7,500/\delta)^2 = \ln(0.99)$

$\delta = \sqrt{\frac{7,500^2}{-\ln(0.99)}} = 74,811.9502$
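Solving for the scale parameter numerically in plain Python:

import math

beta, t, R_target = 2.0, 7_500, 0.99
delta = t / (-math.log(R_target)) ** (1 / beta)  # invert R(t) = exp(-(t/δ)^β)

print(delta)                           # ≈ 74,811.95
print(math.exp(-(t / delta) ** beta))  # back-check: 0.99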

9.10. A component has a Weibull time-to-failure distribution. The value of the scale parameter is δ=200.
Calculate the reliability at 250 hours for the following values of the shape parameter:
β=0.5, 1, 1.5, 2, and 2.5. For the fixed value of the scale parameter, what impact does changing the shape
parameter have on the reliability?

$t \sim \mathrm{Weibull}(\delta = 200,\ \beta)$

Reliability at 250 hours: $R(250) = P(t > 250) = e^{-(250/200)^\beta} = e^{-1.25^\beta}$

β = 0.5: $R(250) \approx 0.3269$
β = 1.0: $R(250) \approx 0.2865$
β = 1.5: $R(250) \approx 0.2472$
β = 2.0: $R(250) \approx 0.2096$
β = 2.5: $R(250) \approx 0.1743$
As β increases, the reliability of the component decreases; because the mission time t = 250 exceeds the scale parameter δ = 200, larger shape parameters concentrate more of the failure probability before 250 hours.
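A short loop reproduces these values (SciPy assumed):

from scipy import stats

delta, t = 200, 250
for beta in (0.5, 1.0, 1.5, 2.0, 2.5):
    R = stats.weibull_min(c=beta, scale=delta).sf(t)
    print(f"beta = {beta}: R(250) = {R:.4f}")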

9.11. A component has a Weibull time-to-failure distribution. The value of the scale parameter is δ=500 .
Suppose that the shape parameter is β=3.4 . Find the mean and variance of the time to failure. What is
the reliability at 600 hours?

$t \sim \mathrm{Weibull}(\delta = 500,\ \beta = 3.4)$

Mean time to failure:

$E(t) = \delta\,\Gamma(1 + 1/\beta) = 500\,\Gamma(1 + 1/3.4) = 500 \times 0.898382 = 449.1910$ hours

Variance of the time to failure:

$V(t) = \delta^2\,\Gamma(1 + 2/\beta) - [\delta\,\Gamma(1 + 1/\beta)]^2 = 500^2\,\Gamma(1 + 2/3.4) - 449.1910^2 \approx 21,288.5684$ hours$^2$

Reliability at 600 hours:

$R(600) = P(t > 600) = e^{-(600/500)^{3.4}} \approx 0.1559$

9.12. Continuation of Exercise 9.11. If a Weibull distribution has a shape parameter of β=3.4 , it can be
reasonably well approximated by a normal distribution with the same mean and variance. For the situation
of Exercise 9.11, calculate the reliability at 600 hours using the normal distribution. How close is this to the
reliability value calculated from the Weibull distribution in Exercise 9.11?

$X \sim N(\mu = 449.1910,\ \sigma^2 = 21,288.5684)$

Reliability at 600 hours:

$R(600) = P(X > 600) = P\!\left(Z > \frac{600 - 449.1910}{\sqrt{21,288.5684}}\right) = 1 - P(Z \le 1.0336) = 1 - 0.849338 = 0.150662$

This approximation is close to the reliability value calculated in Exercise 9.11.
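Both models can be compared directly (SciPy assumed), constructing the normal distribution from the Weibull's mean and standard deviation:

from scipy import stats

weib = stats.weibull_min(c=3.4, scale=500)
norm = stats.norm(loc=weib.mean(), scale=weib.std())  # same mean and variance

print(weib.sf(600))  # Weibull reliability ≈ 0.1559
print(norm.sf(600))  # normal approximation ≈ 0.1507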

9.13. Suppose that a unit has a Weibull time-to-failure distribution. The value of the scale parameter is δ=250.
Graph the hazard function for the following values of the shape parameter: β = 0.5, 1, 1.5, 2, 2.5. For
the fixed value of the scale parameter, what impact does changing the shape parameter have on the hazard
function?

$h(t) = \frac{f(t)}{R(t)} = \frac{\beta}{\delta}\left(\frac{t}{\delta}\right)^{\beta-1} = \frac{\beta}{250}\left(\frac{t}{250}\right)^{\beta-1}$

Sample values of h(t) for 5 ≤ t ≤ 250:

t       β=0.5    β=1.0    β=1.5    β=2.0    β=2.5
5       0.0141   0.0040   0.0008   0.0002   0.0000
25      0.0063   0.0040   0.0019   0.0008   0.0003
50      0.0045   0.0040   0.0027   0.0016   0.0009
75      0.0037   0.0040   0.0033   0.0024   0.0016
100     0.0032   0.0040   0.0038   0.0032   0.0025
125     0.0028   0.0040   0.0042   0.0040   0.0035
150     0.0026   0.0040   0.0046   0.0048   0.0046
175     0.0024   0.0040   0.0050   0.0056   0.0059
200     0.0022   0.0040   0.0054   0.0064   0.0072
225     0.0021   0.0040   0.0057   0.0072   0.0085
250     0.0020   0.0040   0.0060   0.0080   0.0100

[Plot of h(t) versus t for β = 0.5, 1, 1.5, 2, 2.5]

For β < 1 the hazard function decreases in t, for β = 1 it is constant, and for β > 1 it increases; the hazard rises more steeply as β increases.
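The table can be regenerated in a few lines (NumPy assumed):

import numpy as np

delta = 250
t = np.array([5, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250])
for beta in (0.5, 1.0, 1.5, 2.0, 2.5):
    h = (beta / delta) * (t / delta) ** (beta - 1)  # Weibull hazard function
    print(f"beta = {beta}:", np.round(h, 4))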

9.14. Suppose that you have 15 observations on the number of hours to failure of a unit. The observations are
442, 381, 960, 571, 1861, 162, 334, 825, 2562, 324, 312, 368, 367, 968, and 15. Is the exponential
distribution a reasonable choice for the time to failure distribution? Estimate MTTF.

MTB > Graph > Probability Plot > Single


Distribution > Exponential
[Exponential probability plot of hours to failure with 95% CI: Mean = 696.8, N = 15, AD = 0.599, P-Value = 0.350]

Estimate of MTTF: $\frac{1}{15}\sum_{i=1}^{15} t_i = \frac{442 + \cdots + 15}{15} = 696.8 \approx 697$ hours

The plotted points fall reasonably close to a straight line and within the confidence bounds (AD = 0.599, p = 0.350), so the exponential distribution is a reasonable choice for the time-to-failure distribution.
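The same check can be made outside Minitab with an Anderson-Darling test for exponentiality (SciPy assumed; SciPy's statistic is not adjusted exactly the way Minitab's is, so the values may differ slightly):

import numpy as np
from scipy import stats

t = np.array([442, 381, 960, 571, 1861, 162, 334, 825,
              2562, 324, 312, 368, 367, 968, 15])

print(t.mean())  # estimated MTTF ≈ 696.8 hours
res = stats.anderson(t, dist='expon')
print(res.statistic)        # compare against the critical values below
print(res.critical_values)  # at the percent levels in res.significance_level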

9.15. Suppose that you have 20 observations on the number of hours to failure of a unit. The observations are
259, 53, 536, 1320, 341, 667, 538, 1713, 428, 152, 29, 445, 677, 637, 696, 540, 1392, 192, 1871, and 2469.
Is the exponential distribution a reasonable choice for the time to failure distribution? Estimate the MTTF.

MTB > Graph > Probability Plot > Single


Distribution > Exponential

[Exponential probability plot of time to failure with 95% CI: Mean = 747.7, N = 20, AD = 0.389, P-Value = 0.642]

Estimate of MTTF: $\frac{1}{20}\sum_{i=1}^{20} t_i = \frac{259 + \cdots + 2469}{20} = 747.75$ hours

The assumption that the times to failure follow an exponential distribution is reasonable.

9.16. Twenty observations on the time to failure of a system are as follows: 1054, 320, 682, 1440, 1085, 938,
871, 471, 1053, 1103, 780, 665, 1218, 659, 393, 913, 566, 439, 533, and 813. Is the Weibull distribution a
reasonable choice for the time to failure distribution? Estimate the scale and shape parameters.

MTB > Graph > Probability Plot > Single


Distribution > Weibull
[Weibull probability plot of time to failure with 95% CI: Shape = 2.983, Scale = 898.1, N = 20, AD = 0.174, P-Value > 0.250]

The assumption that the times to failure follow a Weibull distribution is reasonable. From the Minitab
output, we can estimate the scale and shape parameters:

Estimate of scale parameter: $\hat{\delta} = 898.1$

Estimate of shape parameter: $\hat{\beta} = 2.983$
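A two-parameter Weibull fit can be reproduced in SciPy by fixing the location at zero. Note that Minitab's probability-plot estimates are least-squares based by default, while scipy.stats.weibull_min.fit uses maximum likelihood, so the estimates will be close but not identical:

import numpy as np
from scipy import stats

t = np.array([1054, 320, 682, 1440, 1085, 938, 871, 471, 1053, 1103,
              780, 665, 1218, 659, 393, 913, 566, 439, 533, 813])

shape, loc, scale = stats.weibull_min.fit(t, floc=0)  # two-parameter Weibull
print(shape, scale)  # near Minitab's 2.983 and 898.1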

9.17. Consider the following 10 observations on time to failure: 50, 191, 63, 174, 71, 62, 119, 89, 123, and 175.
Is either the exponential or the Weibull a reasonable choice of the time to failure distribution?

MTB > Graph > Probability Plot > Single


Distribution > Exponential, Weibull

[Weibull probability plot: Shape = 2.435, Scale = 126.7, N = 10, AD = 0.464, P-Value = 0.232. Exponential probability plot: Mean = 111.7, N = 10, AD = 1.460, P-Value = 0.028]

The exponential fit is rejected at α = 0.05 (AD = 1.460, p = 0.028), while the Weibull fit is adequate (AD = 0.464, p = 0.232), so the Weibull distribution is the more reasonable choice.

9.18. Fifteen observations on the time to failure of a unit are as follows: 173, 235, 379, 439, 462, 455, 617, 41,
454, 1083, 371, 359, 588, 121, and 1066. Is the Weibull distribution a reasonable choice for the time to
failure distribution? Estimate the scale and shape parameters.

MTB > Graph > Probability Plot > Single


Distribution > Weibull
[Weibull probability plot of time to failure with 95% CI: Shape = 1.614, Scale = 508.7, N = 15, AD = 0.408, P-Value > 0.250]

The assumption that the times to failure follow a Weibull distribution is reasonable. From the Minitab
output, we can estimate the scale and shape parameters:

Estimate of scale parameter: $\hat{\delta} = 508.7$

Estimate of shape parameter: $\hat{\beta} = 1.614$

9.19. Consider the following 20 observations on time to failure: 702, 507, 664, 491, 514, 323, 350, 681, 281,
599, 495, 254, 185, 608, 626, 622, 790, 248, 610, and 537. Is either the exponential or the Weibull a
reasonable choice of the time to failure distribution?

MTB > Graph > Probability Plot > Single


Distribution > Exponential, Weibull

[Exponential probability plot: Mean = 504.3, N = 20, AD = 3.986, P-Value < 0.003. Weibull probability plot: Shape = 3.481, Scale = 562.6, N = 20, AD = 0.655, P-Value = 0.080]

The exponential fit is clearly rejected (AD = 3.986, p < 0.003), while the Weibull fit is not rejected at α = 0.05 (AD = 0.655, p = 0.080), so the Weibull distribution is the more adequate choice.

9.20. Consider the time to failure data in Exercise 9.19. Is the normal distribution a reasonable model for these
data? Why or why not?

MTB > Graph > Probability Plot > Single


Distribution > Normal
[Normal probability plot of time to failure with 95% CI: Mean = 504.4, StDev = 174.0, N = 20, AD = 0.587, P-Value = 0.111]

The assumption that the times to failure follow a normal distribution is reasonable.

The points on the normal probability plot are well approximated by a straight line. Note also that in
Exercise 9.19, the estimate for the shape parameter was 3.481. In Exercise 9.12, we saw that if a Weibull
distribution has a shape parameter of β=3.4 , it can be reasonably well approximated by a normal
distribution with the same mean and variance.

9.21. A simple series system is shown in the accompanying figure. The reliability of each component is shown in
the figure. Assuming that the components operate independently, calculate the system reliability.

$R_S = \prod_{i=1}^{4} R(C_i) = 0.9995 \times 0.9999 \times 0.9875 \times 0.9980 = 0.9849$

9.22. A series system is shown in the accompanying figure. The reliability of each component is shown in the
figure. Assuming that the components operate independently, calculate the system reliability.

$R_S = \prod_{i=1}^{3} R(C_i) = 0.9950 \times 0.9999 \times 0.9875 = 0.9825$

9.23. A simple series system is shown in the accompanying figure. The lifetime of each component is
exponentially distributed and the λ for each component is shown in the figure. Assuming that the
components operate independently, calculate the system reliability at 10,000 hours.

$t_{C_i} \sim \exp(\lambda_i)$

$R_{C_1}(10,000) = e^{-10,000/12,000} = e^{-5/6} = 0.4346$

$R_{C_2}(10,000) = e^{-10,000/15,000} = e^{-2/3} = 0.5134$

$R_{C_3}(10,000) = e^{-10,000/20,000} = e^{-1/2} = 0.6065$

$R_S(10,000) = \prod_{i=1}^{3} R_{C_i}(10,000) = 0.4346 \times 0.5134 \times 0.6065 = 0.1353$
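Series-system reliability is just the product of the component reliabilities, so a short check in plain Python suffices (failure rates taken from the figure):

import math

rates = [1/12_000, 1/15_000, 1/20_000]  # component failure rates per hour
t = 10_000
R_series = math.prod(math.exp(-lam * t) for lam in rates)
print(R_series)  # ≈ 0.1353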

9.24. A series system has four independent identical components. The reliability for each component is
exponentially distributed. If the reliability of the system at 8,000 hours must be at least 0.999, what MTTF
must be specified for each component in the system?

$R_S(8,000) = \prod_{i=1}^{4} R_{C_i}(8,000) \ge 0.999$

$R_S(8,000) = \prod_{i=1}^{4} e^{-\lambda t} = e^{-4 \times 8,000\lambda} \ge 0.999$

$\lambda \le \frac{-\ln(0.999)}{32,000} \approx 3.1266 \times 10^{-8}$

$MTTF = \frac{1}{\lambda} \ge \frac{1}{3.1266 \times 10^{-8}} \approx 3.1984 \times 10^7$ hours

9.25. Consider the parallel system shown in the accompanying figure. The reliability of each component is
provided in the figure. Assuming the components operate independently, calculate the system reliability.

$R_S(t) = 1 - \prod_{i=1}^{2}[1 - R_i(t)] = 1 - (1 - 0.9999)(1 - 0.9850) = 1 - 1.5 \times 10^{-6} = 0.9999985$

9.26. Consider the parallel system shown in the accompanying figure. The reliability for each component is
shown in the figure. Assuming the components operate independently, calculate the system reliability.
$R_S = 1 - \prod_{i=1}^{3}(1 - R_{C_i}) = 1 - (1 - 0.995)(1 - 0.999)(1 - 0.985) = 1 - 7.5 \times 10^{-8} = 0.999999925$
9.27. Consider the stand-by system in the accompanying figure. The components have an exponential lifetime
distribution, and the decision switch operates perfectly. The MTTF of each component is 1,000 hours.
Assuming that the components operate independently, calculate the system reliability at 2,000 hours.

$n = 3,\ \lambda = 1/1,000$

$R_S(t) = e^{-\lambda t}\sum_{i=0}^{n-1}\frac{(\lambda t)^i}{i!}$

$R_S(2,000) = e^{-2,000/1,000}\left(1 + \frac{2,000}{1,000} + \frac{2,000^2}{2 \times 1,000^2}\right) = e^{-2}(1 + 2 + 2) = 0.6767$
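Because the stand-by reliability is a Poisson partial sum (the system survives as long as fewer than n units have failed), it can be evaluated with SciPy's Poisson CDF:

from scipy import stats

n, lam, t = 3, 1/1_000, 2_000
# Failures arrive as a Poisson process with mean lam*t; with a perfect
# decision switch the system survives while fewer than n units have failed.
print(stats.poisson.cdf(n - 1, lam * t))  # ≈ 0.6767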

9.28. Consider the stand-by system shown in the accompanying figure. The components have an exponential
lifetime distribution, and the decision switch operates perfectly. The MTTF of each component is 500
hours. Assuming that the components operate independently, calculate the system reliability at 1,500
hours.
$n = 3,\ \lambda = 1/500$

$R_S(t) = e^{-\lambda t}\sum_{i=0}^{n-1}\frac{(\lambda t)^i}{i!}$

$R_S(1,500) = e^{-1,500/500}\left(1 + \frac{1,500}{500} + \frac{1,500^2}{2 \times 500^2}\right) = e^{-3}(1 + 3 + 4.5) = 0.4232$

9.29. Consider the system shown in the accompanying figure. The reliability of each component is shown in the
figure. Assuming that the components operate independently, calculate the system reliability.

Reliability of Subsystem 1:

$R_{SS_1} = 1 - (1 - 0.999)(1 - 0.985) = 0.999985$

Reliability of Subsystem 2:

$R_{SS_2} = 1 - (1 - 0.995)(1 - 0.980)(1 - 0.975) = 0.9999975$

Reliability of Subsystem 3:

$R_{SS_3} = 0.999$

Reliability of the system (subsystems in series):

$R_S = R_{SS_1} R_{SS_2} R_{SS_3} = 0.999985 \times 0.9999975 \times 0.999 = 0.99898$

9.30. Consider the system shown in the accompanying figure. The reliability of each component is provided in
the figure. Assuming that the components operate independently, calculate the system reliability.

Reliability of subsystem 1:

$R_{SS_1} = 1 - (1 - 0.995)^2 = 0.999975$

Reliability of subsystem 2:

$R_{SS_2} = 1 - (1 - 0.980)(1 - 0.950) = 0.9990$

Reliability of subsystem 3 (subsystems 1 and 2 in series):

$R_{SS_3} = R_{SS_1} R_{SS_2} = 0.999975 \times 0.9990 = 0.998975$

System reliability:

$R_S = 1 - (1 - R_{SS_3})(1 - 0.999) = 1 - (1 - 0.998975)(1 - 0.999) = 0.999999$

9.31. Consider the system shown in the accompanying figure. The reliability of each component is provided in
the figure. Assuming that the components operate independently, calculate the system reliability.
Reliability of Subsystem 1:

$R_{SS_1} = 1 - (1 - 0.995)(1 - 0.995) = 0.999975$

Reliability of Subsystem 2:

$R_{SS_2} = 1 - (1 - 0.990)(1 - 0.999)(1 - 0.995) = 0.99999995$

Reliability of Subsystem 3 (Subsystems 1 and 2 in series):

$R_{SS_3} = R_{SS_1} R_{SS_2} = 0.999975 \times 0.99999995 = 0.99997495$

Reliability of Subsystem 4:

$R_{SS_4} = 1 - (1 - R_{SS_3})(1 - 0.985) = 1 - (1 - 0.99997495)(1 - 0.985) = 0.9999996$

System reliability:

$R_S = R_{SS_4} \times 0.995 = 0.994999$
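These series/parallel reductions are mechanical, so small helper functions make them easy to verify; the sketch below re-derives the answer to Exercise 9.31 from the component reliabilities in the figure:

import math

def series(*rs):
    """Reliability of independent components in series."""
    return math.prod(rs)

def parallel(*rs):
    """Reliability of independent components in parallel."""
    return 1 - math.prod(1 - r for r in rs)

ss3 = series(parallel(0.995, 0.995), parallel(0.990, 0.999, 0.995))
ss4 = parallel(ss3, 0.985)
print(series(ss4, 0.995))  # ≈ 0.994999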

9.32. Suppose that n=20 units are placed on test and that five of them fail at 20, 25, 40, 75, and 100 hours,
respectively. The test is terminated at 100 hours without replacing any of the failed units. Find the failure
rate.

Failure time    Failed units
20              1
25              1
40              1
75              1
100             1
≥ 100           15

$\hat{h}(t) = \frac{r}{Q} = \frac{r}{\sum_{i=1}^{r} t_i + (n - r)T} = \frac{5}{20 + 25 + 40 + 75 + 100 + 15 \times 100} = \frac{5}{1,760} = 0.0028$

9.33. Continuation of Exercise 9.32. For the data in Exercise 9.32, find a 95% confidence interval on the MTTF
and the reliability at 200 hours, assuming that the lifetime has an exponential distribution.

$n = 20,\ r = 5,\ T = 100$

$\hat{\theta} = \frac{Q}{r} = \frac{\sum_{i=1}^{r} t_i + (n - r)T}{r} = \frac{20 + 25 + 40 + 75 + 100 + (20 - 5) \times 100}{5} = \frac{1,760}{5} = 352$

$\hat{R}(200) = e^{-t/\hat{\theta}} = e^{-200/352} = 0.5665$

95% CI for the MTTF:

$\left(\frac{2Q}{\chi^2_{\alpha/2,\,2r+2}},\ \frac{2Q}{\chi^2_{1-\alpha/2,\,2r+2}}\right) = \left(\frac{2 \times 1,760}{\chi^2_{0.025,12}},\ \frac{2 \times 1,760}{\chi^2_{0.975,12}}\right) = \left(\frac{3,520}{23.34},\ \frac{3,520}{4.40}\right) = (150.81,\ 800)$

$\hat{\theta}_l = 150.81,\ \hat{\theta}_u = 800$

95% CI for $R(200)$:

$\left(e^{-t/\hat{\theta}_l},\ e^{-t/\hat{\theta}_u}\right) = \left(e^{-200/150.81},\ e^{-200/800}\right) = (0.2655,\ 0.7788)$

NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.
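The chi-square percentiles and interval endpoints can be computed directly (SciPy assumed), which avoids the rounding sensitivity mentioned in the note:

import math
from scipy import stats

Q, r, t, alpha = 1_760, 5, 200, 0.05
df = 2 * r + 2
theta_l = 2 * Q / stats.chi2.ppf(1 - alpha / 2, df)  # lower MTTF limit
theta_u = 2 * Q / stats.chi2.ppf(alpha / 2, df)      # upper MTTF limit

print(theta_l, theta_u)                                # ≈ 150.8 and 799.3
print(math.exp(-t / theta_l), math.exp(-t / theta_u))  # CI for R(200)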

9.34. Suppose that 50 units are tested for 1,000 cycles of use. At the end of the 1,000-cycle test period, 5 of the units
have failed. When a unit failed, it was replaced with a new one and the test was continued. Estimate the
failure rate.

$\hat{h}(t) = \frac{r}{Q} = \frac{r}{\sum_{i=1}^{n} t_i} = \frac{5}{50 \times 1,000} = 10^{-4}$ per cycle

9.35. Suppose that 50 units are placed on test, and when 3 of them have failed, the test is terminated. The failure
times in hours for the 3 units that failed are 950, 1050, and 1525. Estimate the failure rate for these units.
Because the test is terminated at the third failure without replacement, the 47 surviving units each accumulate 1,525 hours of test time:

$\hat{h}(t) = \frac{r}{\sum_{i=1}^{r} t_i + (n - r)t_r} = \frac{3}{950 + 1,050 + 1,525 + 47 \times 1,525} = \frac{3}{75,200} \approx 3.99 \times 10^{-5}$

9.36. Suppose that n=25 units are placed on test and that two of them fail at 200 and 400 hours, respectively.
The test is terminated at 1,000 hours without replacing any of the failed units. Determine the failure rate.

Failure time    Failed units
200             1
400             1
≥ 1,000         23

$\hat{h}(t) = \frac{r}{Q} = \frac{r}{\sum_{i=1}^{r} t_i + (n - r)T} = \frac{2}{200 + 400 + 23 \times 1,000} = \frac{2}{23,600} = 8.4746 \times 10^{-5}$

9.37. Suppose that n=25 units are placed on test and that three of them fail at 150, 300, and 500 hours,
respectively. The test is terminated at 2,000 hours, and no failed units are replaced during the test. Estimate
the MTTF and the reliability of these units at 5,000 hours. Assuming that the lifetime has an exponential
distribution, find 95% confidence intervals on these quantities.

$n = 25,\ r = 3,\ T = 2,000$

$\hat{\theta} = \frac{Q}{r} = \frac{\sum_{i=1}^{r} t_i + (n - r)T}{r} = \frac{150 + 300 + 500 + (25 - 3) \times 2,000}{3} = \frac{44,950}{3} = 14,983.33$

$\hat{R}(5,000) = e^{-t/\hat{\theta}} = e^{-5,000/14,983.33} = 0.71626$

95% CI for the MTTF:

$\left(\frac{2Q}{\chi^2_{\alpha/2,\,2r+2}},\ \frac{2Q}{\chi^2_{1-\alpha/2,\,2r+2}}\right) = \left(\frac{2 \times 44,950}{\chi^2_{0.025,8}},\ \frac{2 \times 44,950}{\chi^2_{0.975,8}}\right) = \left(\frac{89,900}{17.53},\ \frac{89,900}{2.18}\right) = (5,128.35,\ 41,238.53)$

$\hat{\theta}_l = 5,128.35,\ \hat{\theta}_u = 41,238.53$

95% CI for $R(5,000)$:

$\left(e^{-t/\hat{\theta}_l},\ e^{-t/\hat{\theta}_u}\right) = \left(e^{-5,000/5,128.35},\ e^{-5,000/41,238.53}\right) = (0.3772,\ 0.8858)$

NOTE: Confidence intervals endpoints are highly dependent on the number of significant digits used throughout the
calculations.
