
Copies of this Toolkit may be downloaded free from quanterion.com. Many of the tools in
this Toolkit are implemented in the “Quanterion Automated Reliability Toolkit” (QuART),
which can also be downloaded free from quanterion.com.
ROME LABORATORY
RELIABILITY
ENGINEER'S TOOLKIT

April 1993

An Application Oriented
Guide for the
Practicing Reliability Engineer

Systems Reliability Division


Rome Laboratory
Air Force Materiel Command (AFMC)
525 Brooks Rd.
Griffiss AFB, NY 13441-4505


Quick Reference Application Index


How Do I . . . ?
• Understand the Principles of TQM.............................................................. 2
• Understand Basic DoD R&M Policy and Procedures ................................. 7
• Develop Quantitative Requirements
Reliability (R) ........................................................................................ 11
Maintainability (M) ................................................................................ 17
Testability (T)........................................................................................ 20
• Tailor R&M Task Requirements.................................................................. 23
• R&M Task Application/Priorities ................................................................. 25
• Develop a Contract Data Requirements List .............................................. 26
• Specify Information To Be Included in Proposals ....................................... 28
• Evaluate Contractor Proposals ................................................................... 31
• Specify Part Stress Derating....................................................................... 37
• Determine the Limitations of Common Cooling Techniques ...................... 44
• Understand Basic Parts Control ................................................................. 46
• Identify Key R&M&T Topics for Evaluation at Design Reviews.................. 55
• Evaluate Contractor's Method of Managing Critical Items ........................... 62
• Understand Design Concerns Associated with Dormant Conditions ......... 63
• Understand Basic SMT Design Issues ....................................................... 66
• Evaluate Power Supply Reliability .............................................................. 67
• Determine Part Failure Modes and Mechanisms ....................................... 69
• Evaluate Fiber Optic Reliability................................................................... 73
• Understand R&M&T Analysis Types and Purposes ................................... 77
• Understand Reliability Prediction Methods ................................................. 80
• Understand Maintainability Prediction Methods ......................................... 81
• Understand Testability Analysis Methods................................................... 84
• Evaluate a Reliability Prediction Report...................................................... 85
• Evaluate Existing Reliability Data ............................................................... 86
• Evaluate a Maintainability/Testability Analysis Report ............................... 87
• Evaluate a Failure Modes, Effects and Criticality Analyses Report............ 88
• Approximate the Reliability of Redundant Configurations .......................... 89
• Perform a Quick (Parts Count) Reliability Prediction.................................. 92
• Adjust Reliability Data for Different Conditions ........................................... 105
• Predict the Reliability of SMT Designs........................................................ 108
• Understand Finite Element Analysis Application ........................................ 113
• Estimate IC Junction Temperatures for Common Cooling Techniques ..... 115
• Understand Sneak Circuit Analysis Application.......................................... 119


• Estimate Reliability for Dormant Conditions ............................................... 122


• Estimate Software Reliability ...................................................................... 124
• Develop an Environmental Stress Screening (ESS) Program ............. 129
• Select a Reliability Qualification Test.......................................................... 134
• Select a Maintainability Qualification Test .................................................. 136
• Select a Testability Demonstration Test ..................................................... 137
• Evaluate a Failure Reporting and Corrective Action System ..................... 138
• Evaluate a Reliability Demonstration Test Plan.......................................... 140
• Evaluate a Reliability Demonstration Test Procedure ................................ 144
• Evaluate a Maintainability Test Plan and Procedure .................................. 145
• Participate in R&M Testing ......................................................................... 146
• Evaluate R&M Demonstration Test Reports............................................... 147
• Understand Basic Design of Experiments Concepts.................................. 148
• Understand Basic Accelerated Life Testing Concepts ............................... 153
• Become Aware of Time Stress Measure Devices ...................................... 159

For More Help Appendices


How Do I . . . ?
• Translate User Needs to R&M Requirements ...................................... A-1
• Develop SOW and Specification Requirements (Example).................. A-7
• Become Aware of Available R&M Software Tools................................ A-17
• Develop Design Guidelines (Example) ................................................. A-23
• Select a MIL-HDBK-781 Test Plan ....................................................... A-37
• Calculate Confidence Intervals ............................................................. A-43
• Calculate the Probability of Failure Occurrence ................................... A-46
• Understand Reliability Growth Testing ................................................. A-51
• Select a MIL-STD-471 Test Plan .......................................................... A-61
• Find More R&M Data ............................................................................ A-67
• Find R&M Related Electronic Bulletin Boards ...................................... A-72
• Obtain R&M Training ............................................................................ A-75
• Obtain R&M Periodicals........................................................................ A-76
• Become Aware of R&M Symposia and Workshops ............................. A-76
• Become Aware of R&M Specifications, Standards, Handbooks and
Rome Laboratory Technical Reports ............................................ A-81
• Understand Common Acronyms........................................................... A-95


FOREWORD
The original RADC (now Rome Laboratory) Reliability Engineer's Toolkit, July
1988, proved to be a best seller among military, industry and academic
reliability practitioners. Over 10,000 copies were distributed and the Toolkit
and its authors received the 1989 Federal Laboratory Consortium Special
Award for Excellence in Technology Transfer.

This updated version, completed in-house at the Systems Reliability Division,
contains new topics on accelerated testing, thermal analysis, surface mount
technology, design of experiments, hardware/software reliability, component
failure modes/mechanisms, dormancy, and sneak analysis. Revisions and
updates in most other areas were also made.

This revision was led by a project team consisting of Bruce Dudley, Seymour
Morris, Dan Richard and myself. We acknowledge the fine support we
received from technical contributors Frank Born, Tim Donovan, Barry
McKinney, George Lyne, Bill Bocchi, Gretchen Bivens, Doug Holzhauer, Ed
DePalma, Joe Caroli, Rich Hyle, Tom Fennell, Duane Gilmour, Joyce Jecen,
Jim Ryan, Dr. Roy Stratton, Dr. Warren Debany, Dan Fayette, and Chuck
Messenger. We also thank typists Elaine Baker and Wendy Stoquert and the
Reliability Analysis Center's Macintosh Whiz, Jeanne Crowell.

Your comments are always welcome. If you wish to throw bouquets, these
people should receive them. If it's bricks you're heaving, aim them at Bruce,
Seymour, or me at the address below.


Table of Contents
Introduction ................................................................................................. 1

Requirements
R1 Quantitative Reliability Requirements............................................ 11
R2 Quantitative Maintainability Requirements .................................... 17
R3 Quantitative Testability/Diagnostic Requirements ......................... 20
R4 Program Phase Terminology ......................................................... 23
R5 Reliability and Maintainability Task Application/Priorities .............. 25
R6 Contract Data Requirements ......................................................... 26
R7 R&M Information for Proposals...................................................... 28

Source Selection
S1 Proposal Evaluation for Reliability and Maintainability .................. 31

Design
D1 Part Stress Derating....................................................................... 37
D2 Thermal Design.............................................................................. 44
D3 Parts Control .................................................................................. 46
D4 Review Questions .......................................................................... 55
D5 Critical Item Checklist .................................................................... 62
D6 Dormancy Design Control.............................................................. 63
D7 Surface Mount Technology (SMT) Design .................................... 66
D8 Power Supply Design Checklist..................................................... 67
D9 Part Failure Modes and Mechanisms ............................................ 69
D10 Fiber Optic Design Criteria ............................................................ 73

Analysis
A1 Reliability and Maintainability Analyses ......................................... 77
A2 Reliability Prediction Methods........................................................ 80
A3 Maintainability Prediction Methods ................................................ 81
A4 Testability Analysis Methods ......................................................... 84
A5 Reliability Analysis Checklist ......................................................... 85
A6 Use of Existing Reliability Data...................................................... 86
A7 Maintainability/Testability Analysis Checklist................................. 87
A8 FMECA Analysis Checklist ............................................................ 88
A9 Redundancy Equations.................................................................. 89
A10 Parts Count Reliability Prediction .................................................. 92
A11 Reliability Adjustment Factors ....................................................... 105
A12 SMT Assessment Model ................................................................ 108
A13 Finite Element Analysis ................................................................. 113
A14 Common Thermal Analysis Procedures ........................................ 115
A15 Sneak Circuit Analysis ................................................................... 119
A16 Dormant Analysis........................................................................... 122
A17 Software Reliability Prediction and Growth ................................... 124


Testing
T1 ESS Process.................................................................................. 129
T2 ESS Placement.............................................................................. 130
T3 Typical ESS Profile ........................................................................ 131
T4 RGT and RQT Application ............................................................. 133
T5 Reliability Demonstration Plan Selection ....................................... 134
T6 Maintainability Demonstration Plan Selection ............................... 136
T7 Testability Demonstration Plan Selection ...................................... 137
T8 FRACAS (Failure Reporting and Corrective Action System) ........ 138
T9 Reliability Demonstration Test Plan Checklist ............................... 140
T10 Reliability Test Procedure Checklist .............................................. 144
T11 Maintainability Demonstration Plan and Procedure Checklist....... 145
T12 Reliability and Maintainability Test Participation Criteria ............... 146
T13 Reliability and Maintainability Demonstration Reports Checklist... 147
T14 Design of Experiments................................................................... 148
T15 Accelerated Life Testing ................................................................ 153
T16 Time Stress Measurement............................................................. 159

Appendices
1 Operational Parameter Translation................................................ A-1
2 Example R&M Requirement Paragraphs ...................................... A-7
3 R&M Software Tools ...................................................................... A-17
4 Example Design Guidelines........................................................... A-23
5 Reliability Demonstration Testing .................................................. A-37
6 Reliability Growth Testing .............................................................. A-51
7 Maintainability/Testability Demonstration Testing ......................... A-59
8 Reliability and Maintainability Data Sources.................................. A-65
9 Reliability and Maintainability Education Sources ......................... A-73
10 R&M Specifications, Standards, Handbooks and Rome Laboratory
Technical Reports ........................................................................ A-79
11 Acronyms ....................................................................................... A-95


Introduction
Purpose
This Toolkit is intended for use by a practicing reliability and maintainability (R&M)
engineer. Emphasis is placed on his or her role in the various R&M activities of an
electronic systems development program. The Toolkit is not intended to be a
complete tutorial or technical treatment of the R&M discipline but rather a
compendium of useful R&M reference information to be used in everyday practice.

Format
The format of the Toolkit has been designed for easy reference. Five main sections
are laid out to follow the normal time sequence of a military development program.

Descriptions of the "how to" of the R&M engineer's activities have been designed to
take the form of figures, tables, and step-by-step procedures as opposed to
paragraphs of text. Appendices are included to give a greater depth of technical
coverage to some of the topics as well as to present additional useful reference
information.

The Toolkit also includes a "Quick Reference Application Index" which can be used
to quickly refer the R&M engineer to the portion of a section that answers specific
questions. A quick reference "For More Help Appendices" index is also included for
the more in-depth topics of the appendices.

Ordering information for the military documents and reports listed in the Toolkit is
located in Appendix 10.

Terminology
The term "Reliability" in the title of this document is used in the broad sense to
include the field of maintainability. The content of the report addresses reliability
and maintainability (R&M) because they are usually the responsibility of one
government individual in a military electronics development program. In this
context, testability is considered as a part of maintainability and is, therefore,
inherently part of the "M" of "R&M." Where testability issues, such as development
of quantitative requirements, are appropriate for separation from the "M" discussion,
they have been labeled accordingly.

Underlying Philosophy
The development and application of a successful reliability program requires a
number of tasks and coordination steps. Key ingredients include:

• Aggressive Program Manager Support
• Firm and Realistic Requirements
• Effective Built-in-Test
• Failure Reporting & Corrective Action
• Thorough Technical Reviews
• Complete Verification
• Parts Control
• Total Quality Management


Total Quality Management (TQM) is an approach which puts quality first as the
means to long-term survival and growth. It employs teamwork to improve the
processes used by an organization in providing products and services. One could
argue that TQM encompasses Reliability Engineering or that Reliability Engineering
encompasses many TQM activities. Either way, the reliability engineer may well
get involved in TQM. For example, he/she may be asked to evaluate a contractor's
TQM approach, assist process improvement teams with statistical analyses, or
serve as a member of a process improvement team looking at his/her own agency's
processes. It, therefore, behooves the reliability professional to have some
knowledge of TQM.

Principles of TQM

• Management Leadership: For successful TQM, the company management
must create a cultural change from authoritarian management focused on
short-term goals to using the full potential of all employees for long-term
benefit. This means the agency executives must be consistent, persistent
and personally involved in the pursuit of quality.

• Focus on Customer: It is easy to appreciate the need to focus on the
external customer. Less obvious is the concept of internal customer
satisfaction. Reliability engineering, for example, may be asked by Design
Engineering (the customer) to review a proposed design for reliability. If an
incomplete or shoddy evaluation is done, the ultimate design may not meet
specifications. Output suffers and so does the efficiency of the project team.
A TQM oriented organization seeks to understand and delight its customers,
both external and internal.

• Constant Improvement: It is estimated that about 25% of operating costs of
a typical manufacturing agency go for rework and scrap. Service
organizations pay an even higher penalty for not doing things right the first
time. Reducing these costs is a potential source of vast profit. Hence, TQM
agencies seek to constantly improve their processes. The usual change
agent is a team with members from all offices involved in the process, and
including those who actually perform the work. Besides the measurable
benefits, process improvements mean fewer defects going to customers, with
an unmeasurable but significant effect on the bottom line.

• Use of Measurements and Data: TQM agencies seek to measure quality
so that improvements can be tracked. Every process will have some
operational definition of quality. The overall agency progress can be
measured by calculating the "cost of quality" (money spent for preventing
defects, appraising quality, rework and scrap). Typically, as more money is
spent on preventing defects, savings made in scrap and rework reduce the
overall cost of quality. Another common approach is to score the agency
using the criteria for the Malcolm Baldrige National Quality Award as a
measure. For Government agencies, the scoring criteria for the Office of
Management and Budget (OMB) Quality Improvement Prototype Award are
used in lieu of the Malcolm Baldrige criteria. R&M engineers should use
Statistical Process Control, Statistical Design of Experiments, Quality
Function Deployment, Taguchi Methods, and other available quality tools.


Design of Experiments is explained in Topic T14. Statistical Process Control
techniques are described in this topic. A brief cost-of-quality sketch follows.
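
As a rough illustration of the cost-of-quality measure described in this topic, the
minimal Python sketch below (all figures are hypothetical assumptions) totals the
money spent preventing defects, appraising quality, and on rework and scrap, and
expresses it as a fraction of operating cost.

# Hypothetical cost-of-quality roll-up; every figure below is an assumption.
prevention = 120_000     # defect prevention (training, process improvement)
appraisal = 80_000       # inspection and test
rework = 310_000         # internal failure cost
scrap = 90_000           # internal failure cost

operating_cost = 2_400_000

cost_of_quality = prevention + appraisal + rework + scrap
print(f"Cost of quality = ${cost_of_quality:,} "
      f"({100 * cost_of_quality / operating_cost:.1f}% of operating cost)")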

• Employee Involvement: A TQM agency recognizes the value of a skilled


work force cooperating to satisfy the customer. Extensive education and
training programs exist. Training in job skills, quality methods, and team
building techniques is widely available. Cooperation between offices is the
norm (e.g. concurrent engineering). Employees on all levels are widely
involved in process improvement teams. Management looks for ways of
reducing the hassle created by bureaucratic rules and regulations.
Employees are trusted and empowered to do their jobs.

• Results: In a TQM agency, improvement is continuous and measured.
Image-building measurements, such as the number of improvement teams
formed, are of less value than measures of cost of quality or increases in
production which show real results. Management is not concerned with filling
squares, but with making worthwhile changes.

TQM Tools

• Process Flow Chart: A diagram showing all the major steps of a process.
The diagram also shows how the various steps in the process relate to each
other.

Process Flow Chart


• Pareto Chart: A bar graph of identified causes shown in descending order of
frequency, used to prioritize problems and/or data. The Pareto Principle
states that a few causes typically account for most problems (20% of the
serial numbered units account for 80% of the failures; 20% of the people do
80% of the work; etc.). Pareto diagrams help analyze operational data and
determine modes of failure. They are especially useful when plotted before
and after an improvement project or redesign to show what progress has
been made (a computational sketch follows the figure).

Pareto Chart
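
A minimal Python sketch of the Pareto analysis described above, assuming a
hypothetical set of failure counts by cause; ranking the causes by frequency and
accumulating their percentages is exactly what a Pareto chart displays.

# Hypothetical failure counts by cause.
failures = {
    "connector": 42,
    "solder joint": 31,
    "capacitor": 9,
    "relay": 7,
    "software": 6,
    "other": 5,
}

total = sum(failures.values())
cumulative = 0
for cause, count in sorted(failures.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<14}{count:>4}  {100 * cumulative / total:5.1f}% cumulative")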

• Fishbone Chart: A cause and effect diagram for analyzing problems and
the factors that contribute to them, or, for analyzing the factors that result in a
desired goal. Also called an Ishikawa Chart. This tool requires the listing of
all possible factors contributing to a result and the subsequent detailed
investigation of each factor. It is usually developed in brainstorming sessions
with those who are familiar with the process in question.

Fishbone Chart


• Control Chart: A method of monitoring the output of a process or system
through the sample measurement of a selected characteristic and the
analysis of its performance over time. There are two main types: control
charts for attributes (to plot percentages of "go/no go" attribute data) and
control charts for variables (to plot measurements of a variable characteristic
such as size or weight). Control charts identify changes in a process as
indicated by drift, a shift in the average value, or increased variability. The
upper and lower control limits are based on the sample mean (x̄), the sample
standard deviation (s) and the sample size (n), as illustrated below.

[Control chart figure: a center line at the sample mean x̄, an Upper Control Limit at
x̄ + 3(s/√n), and a Lower Control Limit at x̄ - 3(s/√n).]

Control Chart
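
A minimal Python sketch of the control limit calculation illustrated above (limits at
the sample mean plus and minus 3(s/√n)), using hypothetical measurements of a
variable characteristic.

import statistics

# Hypothetical sample measurements of a selected characteristic.
samples = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]

n = len(samples)
mean = statistics.mean(samples)
s = statistics.stdev(samples)      # sample standard deviation

ucl = mean + 3 * s / n ** 0.5      # upper control limit
lcl = mean - 3 * s / n ** 0.5      # lower control limit
print(f"mean = {mean:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")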

• Shewhart Cycle: A method, created by Walter A. Shewhart, for attacking
problems.

Shewhart Cycle

The cycle starts with the planning phase: defining the particular problem,
deciding what data are needed and determining how to obtain the data; that
is, via test, previous history, external sources, etc. The process flow charts
and Ishikawa diagrams are very useful at this point.

After planning it is necessary to do something (D on the chart): getting the
data needed, running a test, making a change, or whatever the plan calls for.


The next step, C on the chart, is to check the results. In some instances, this
would be done by a control chart. In any event the results are evaluated and
causes of variation investigated. Histograms, Pareto Charts and
Scattergrams can be helpful.

The last step, A, stands for Analyze and Act. What did the data in step C
indicate? Based on the analysis, appropriate action is taken. This could be a
process change or a decision that a new plan is needed. In any event, after
you act, you go back to P and start another cycle. Even if the first trip around
worked wonders, there are always more opportunities waiting to be
discovered. The cycle is really a spiral going upward to better and better
quality.

Reliability TQM Tasks


Many corporations have considered or utilized TQM principles. The reliability tasks
most frequently used in producing a quality product are assembled in the following
Pareto chart:

Pareto Chart


Department of Defense R&M Policy and Procedures


Department of Defense (DoD) Directive 5000.1, Defense Acquisition (23 Feb 91),
establishes management policies and procedures for acquiring systems which
satisfy all aspects of user operational needs. It is based on the principles contained
in the Defense Management Report to the President (prepared by the Secretary of
Defense, Jul 89). DoD Directive 5000.1 cancels 63 other DoD directives and policy
memoranda, and replaces them with a single reference: DoD Instruction 5000.2,
Defense Acquisition Policies and Procedures (23 Feb 91). The following R&M
related documents are included in these cancellations: (1) DoD Instruction 3235.1,
"Test and Evaluation of System Reliability, Availability and Maintainability", 1 Feb
82, (2) DoD Instruction 4120.19, "DoD Parts Control Program", 6 Jul 89, and (3)
DoD Directive 5000.40, "Reliability and Maintainability", 8 Jul 80.

DoD Instruction 5000.2 establishes an integrated framework for translating broadly
stated mission needs into an affordable acquisition program that meets those
needs. It defines an event oriented management process that emphasizes
acquisition planning, understanding of user needs and risk management. It is
several hundred pages long and has 16 separate parts covering everything from
Requirements Evolution and Affordability to the Defense Acquisition Board
Process. Part 6, Engineering and Manufacturing, Subsection C, Reliability and
Maintainability, establishes DoD R&M policy. The basic R&M policies and
procedures described in this seven page section can be summarized as follows:

Policies

• Understand user needs and requirements.

• Actively manage all contributors to system unreliability.

• Prevent design deficiencies and the use of unsuitable parts.

• Develop robust systems insensitive to use environments.

Procedures

• Define both mission and logistics R&M objectives based on operational
requirements and translate them into quantitative contractual requirements.

• Perform R&M allocations, predictions, and design analysis as part of an
iterative process to continually improve the design.

• Establish parts selection and component derating guidelines.

• Preserve reliability during manufacturing through an aggressive
environmental stress screening program.

• Establish a failure reporting and corrective action system.

• Perform reliability growth and demonstration testing.


• Use MIL-STD-785 (Reliability Program for Systems & Equipment,
Development and Production) and MIL-STD-470 (Maintainability Program for
Systems & Equipment) for R&M program guidance.

This Toolkit, although not structured to address each policy and procedure per se,
addresses the practical application of the procedures to the development of military
electronic hardware.

For More Information

"Total Quality Improvement." Boeing Aerospace Co., PO Box 3999, Seattle WA
98124; 1987.

"Total Quality Management, A Guide For Implementation." DoD 500.51-6; OASD
(P&L) TQM, Pentagon, Washington DC; February 1989.

"Total Quality Management (TQM), An Overview." RL-TR-91-305; ADA 242594;
Anthony Coppola, September 1991.

"A Rome Laboratory Guide to Basic Training in TQM Analysis Techniques." RL-TR-
91-29; ADA 233855; Anthony Coppola, September 1989.

DoD Directive 5000.1, "Defense Acquisition," 23 February 1991.

DoD Instruction 5000.2, "Defense Acquisition Policies and Procedures," 23
February 1991.


Section R
Requirements

Contents

R1 Quantitative Reliability Requirements................ 11

R2 Quantitative Maintainability Requirements ........ 17

R3 Quantitative Testability/Diagnostic
Requirements .................................................... 20

R4 Program Phase Terminology .............................. 23

R5 R&M Task Application/Priorities......................... 25

R6 Contract Data Requirements............................... 26

R7 R&M Information for Proposals .......................... 28

Related Topics
Appendix 2 Example R&M Requirements
Paragraphs ........................................................ A-7


Insight
Requirement development is critical to program success. Military standards (MIL-
STDs) cannot be blindly applied. Requirements must be tailored to the individual
program situation considering the following:
• Mission Criticality
• Operational Environment
• Phase of Development
• Other Contract Provisions (incentives, warranties, etc.)
• Off-The-Shelf Versus Newly Designed Hardware

For More Information


MIL-STD-470 "Maintainability Program for Systems and Equipment"

MIL-STD-721 "Definition of Terms for Reliability and Maintainability"

MIL-STD-785 "Reliability Program for Systems and Equipment Development
and Production"

MIL-STD-2165 "Testability Programs for Electronic Systems and Equipment"

DODD 5000.1 "Defense Acquisition"

DODI 5000.2 "Defense Acquisition Management Policies and Procedures"

RADC-TR-89-45 "A Government Program Manager's Testability/Diagnostic
Guide"

RADC-TR-90-31 "A Contractor Program Manager's Testability Diagnostic
Guide"

RADC-TR-90-239 "Testability/Diagnostics Design Encyclopedia"

RL-TR-91-200 "Automated Testability Decision Tool"


Topic R1: Quantitative Reliability Requirements


Scope of Requirements
Reliability parameters expressed by operational users and ones specified in
contractual documents take many forms. Tables R1-1 and R1-2 identify the
characteristics of reliability parameters.

Table R1-1: Logistics (Basic) and Mission Reliability Characteristics
Logistics (Basic) Reliability:
• Measure of system's ability to operate without logistics support
• Recognize effects of all occurrences that demand support, without regard to
effect on mission
• Degraded by redundancy
• Usually equal to or lower than mission reliability

Mission Reliability:
• Measure of system's ability to complete mission
• Consider only failures that cause mission abort
• Improved by redundancy
• Usually higher than logistics reliability

Table R1-2: Operational and Contractual Reliability Characteristics
Contractual Reliability:
• Used to define, measure and evaluate contractor's program
• Derived from operational needs
• Selected such that achieving them allows projected satisfaction of
operational reliability
• Expressed in inherent values
• Account only for failure events subject to contractor control
• Include only design and manufacturing characteristics

Operational Reliability:
• Used to describe reliability performance when operated in planned
environment
• Not used for contract reliability requirements (requires translation)
• Used to describe needed level of reliability performance
• Include combined effects of item design, quality, installation environment,
maintenance policy, repair, etc.


Contractual Reliability, typical terms:
- MTBF (mean-time-between-failures)
- Mission MTBF (sometimes also called MTBCF)

Operational Reliability, typical terms:
- MTBM (mean-time-between-maintenance)
- MTBD (mean-time-between-demand)
- MTBR (mean-time-between-removal)
- MTBCF (mean-time-between-critical-failure)

Operational Constraints
• Mission Criticality
• Availability Constraints
• Self-Sufficiency Constraints
• Attended/Unattended Operation
• Operational Environment
• Use of Off-the-shelf or Newly Designed Equipment

How to Develop Requirements


Figure R1-1 defines the general reliability requirement development process. Key
points to recognize from this process are:

1. User requirements can be expressed in a variety of forms that include
combinations of mission and logistics reliability, or they may combine
reliability with maintainability in the form of availability. Conversion to
commonly used operational terms such as mean-time-between-maintenance
(MTBM) and mean-time-between-critical-failure (MTBCF) must be made from
terms such as operational availability (A0) and break-rate, etc., to enable
translation to parameters which can be specified in contracts.

An example is:

A0 = MTBM / (MTBM + MDT)

(Solve for MTBM using mean downtime (MDT), which includes the actual
repair time plus logistics delay time. A computational sketch of this
translation follows this list.)


2. Since operational reliability measures take into account factors beyond the
control of development contractors, they must be translated to contractual
reliability terms for which contractors can be held accountable. (Appendix 1
provides one means of accomplishing this translation.)

3. The process cannot end with the translation to a contractual value.
Evaluation of the realism of the translated requirements is a necessary step.
Questions that have to be answered are: are the requirements compatible
with the available technology, and do the requirements unnecessarily drive
the design (conflict with system constraints such as weight and power)?
Addressing these issues requires reviewing previous studies and data for
similar systems. Adjustment factors may be appropriate for improvement of
technology and for different operating environments, duty cycles, etc. See
Topic A11 for Reliability Adjustment Factors.

4. Systems with mission critical requirements expressed by the user present
difficulties in the requirement development process. Translation models don't
account for the nonexponential situations that exist with redundant systems.
Because the reliabilities of redundant paths are high compared to serial ones,
an approximation can be made that these paths have an equivalent failure
rate of zero, so that only the remaining serial elements need to be translated
(see the sketch following this list).

5. The requirement process involves allocation of values to lower levels. In
some cases, this is an iterative process requiring several tries to satisfy all
requirements. For other cases, the requirements can't be satisfied and
dialogue and tradeoffs with the user are required.

6. For cases where user needs are not specified it still makes sense to invoke at
least a logistics (basic) reliability requirement. In so doing, the contractor has
a degree of accountability and is likely to put more effort into designing a
reliable system.

7. Table R1-3 indicates typical ranges of MTBF for different types of electronic
systems.
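
Two of the points above lend themselves to a short computational sketch. The
Python fragment below, with hypothetical values throughout, first solves the item 1
availability relation for the MTBM that must be achieved, then applies the item 4
approximation that redundant paths contribute an equivalent failure rate of zero,
so only the serial elements are counted.

# All values below are hypothetical and for illustration only.

# Item 1: solve A0 = MTBM / (MTBM + MDT) for MTBM.
A0 = 0.95                      # required operational availability
mttr = 1.0                     # actual repair time (hours)
logistics_delay = 24.0         # time to obtain spares and personnel (hours)
mdt = mttr + logistics_delay   # mean downtime (hours)

mtbm = A0 * mdt / (1.0 - A0)
print(f"Required MTBM = {mtbm:.0f} hours")

# Item 4: treat redundant paths as having an equivalent failure rate of zero,
# so only the serial elements are counted toward the mission value.
serial_rates = {"receiver": 120.0, "processor": 60.0, "display": 45.0}   # per 10^6 hours
redundant_rates = {"dual power supply": 0.0}                             # approximated as zero

lambda_total = sum(serial_rates.values()) + sum(redundant_rates.values())
mtbcf = 1.0e6 / lambda_total
print(f"Approximate mission MTBF (MTBCF) = {mtbcf:.0f} hours")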


Table R1-3: Typical MTBF Values

Radar Systems MTBF (Hours)


Ground Rotating Search Radar...................................................... 100-200
Large Fixed Phase Array Radar..................................................... 5-10
Tactical Ground Mobile Radar........................................................ 50-100
Airborne Fighter Fire Control Radar ............................................... 50-200
Airborne Search Radar................................................................... 300-500
Airborne Identification Radar.......................................................... 200-2,000
Airborne Navigation Radar ............................................................. 300-4,500

Communications Equipment MTBF (Hours)


Ground Radio ................................................................................. 5,000-20,000
Portable Ground Radio................................................................... 1,000-3,000
Airborne Radio................................................................................ 500-10,000
Ground Jammer.............................................................................. 500-2,000

Ground Computer Equipment MTBF (Hours)


Workstation..................................................................................... 2,000-4,500
Personal Computer (CPU) 286/386/486 ........................................ 1,000-5,000
Monochrome Display...................................................................... 10,000-15,000
Color Display .................................................................................. 5,000-10,000
40-100 Megabyte Hard Disk Drive ................................................. 10,000-20,000
Floppy Disk/Drive ........................................................................... 12,000-30,000
Tape Drive ...................................................................................... 7,500-12,000
CD/ROM ......................................................................................... 10,000-20,000
Keyboard ........................................................................................ 30,000-60,000
Dot Matrix, Low Speed, Printer ...................................................... 2,000-4,000
Impact, High Speed, Printer ........................................................... 3,000-12,000
Thermal Printer............................................................................... 10,000-20,000
Plotter ............................................................................................. 30,000-40,000
Modem............................................................................................ 20,000-30,000
Mouse............................................................................................. 50,000-200,000
Clock .......................................................................................... 150,000-200,000

Miscellaneous Equipment MTBF (Hours)


Airborne Countermeasures System ............................................... 50-300
Airborne Power Supply................................................................... 2,000-20,000
Ground Power Supply .................................................................... 10,000-50,000
IEEE Bus ........................................................................................ 50,000-100,000
Ethernet .......................................................................................... 35,000-50,000


Figure R1-1: Quantitative Reliability Requirement Development Process


Figure R1-1 Notes:


1. User Needs Cases

Case   Logistics Reliability   Mission Reliability   Comments
1      Specified               Specified
2      Specified               Not specified         Delete steps D, H, I
3      Not specified           Specified
4      Not specified           Not specified         Delete steps D, H, I

2. A 10-20% reliability improvement factor is reasonable for advancement of
technology.

3. Adjustment of data to use environment may be required (see Topic A11). See
Appendix 8 for R&M data sources.

4. Reliability requirements necessitating redundancy add weight, cost and
power.

5. Alternate forms of user requirements should be converted to MTBM's to
enable translation.


Topic R2: Quantitative Maintainability Requirements


Scope of Requirements
Unique maintainability parameters need to be specified for three basic levels of
repair:

• Organizational Level: Repair at the system location. Usually involves
replacing plug-in modules and other items with relatively short isolation and
replacement times.

• Intermediate Level: Repair at an intermediate shop facility which has more
extensive capabilities to repair lower hardware indenture levels.

• Depot Level: Highly specialized repair facility capable of making repairs at
all hardware indenture levels. Sometimes the original equipment
manufacturer.

Recent Air Force policy has promoted the concept of two level maintenance in
place of the traditional three level system. Under this concept the classification is:

• On-equipment: Maintenance actions accomplished on complete end items.

• Off-equipment: In-shop maintenance actions performed on removed
components.

Parameters which need to be specified vary with the level of repair being
considered. Key maintainability parameters include:

• Mean time to repair (MTTR): Average time required to bring system from a
failed state to an operational state. Strictly design dependent. Assumes
maintenance personnel and spares are on hand (i.e., does not include
logistics delay time). MTTR is used interchangeably with mean corrective
maintenance time (Mct).

• Mean maintenance manhours (M-MMH): Total manpower per year
(expressed in manhours) required to keep the system operating (not
including logistics delay time).

• Mean time to restore system (MTTRS): The average time it takes to
restore a system from a failed state to an operable state, including logistics
delay time (MTTRS = logistics delay time + MTTR). Logistics delay time
includes all time to obtain spares and personnel to start the repair.

• Preventive maintenance (PM): Time associated with the performance of all
required preventive maintenance. Usually expressed in terms of hours per
year.


Operational Constraints
Basic maintainability requirements are determined through an analysis of user
operational constraints. Operational constraints include:

• Operating hours per unit calendar time and/or per mission

• Downtime, maintenance time, or availability constraints

• Mobility requirements

• Attended/unattended operation

• Self-sufficiency constraints

• Reaction time

• Operational environment (e.g., chemical, biological and nuclear)

• Skill levels of maintenance personnel

• Manning

• Types of diagnostics and maintenance support equipment which can be
made available or implemented (built-in test, manual test equipment, external
automatic test equipment, etc.).

• Levels at which repair takes place

• Use of off-the-shelf equipment versus newly designed equipment

How to Develop Requirements


The best guidance available is to provide a range of typical values usually applied
for each parameter.


Table R2-1: Typical Maintainability Values

          Organizational       Intermediate   Depot
MTTR      0.5 - 1.5 hr         0.5 - 3 hr     1 - 4 hr
M-MMH     Note 1               Note 1         Note 1
MTTRS     1 - 8 hr (Note 2)    NA             NA
PM        2 - 15 hr/yr         NA             NA

Notes:

1. M-MMH depends on the number of repair visits to be made, the MTTR for
each repair visit and the number of maintenance personnel required for each
visit. Typical calculations of the mean maintenance manhours per year
include:

a. Immediate maintenance of a continuously operated system: M-MMH =
(8760 hr/yr)/(MTBF) x (MTTR) x (maintenance personnel per repair) +
(PM hours per year) x (Maintenance personnel).

b. Delayed maintenance of a fault tolerant system: M-MMH = (number of
expected repair visits) x (time for each visit) x (maintenance personnel
per visit) + (PM hours per year) x (Maintenance personnel).

c. Maintenance of a continuously operated redundant system allowed to
operate until failure: M-MMH = (8760 hr/yr)/(MTBCF) x (time for each
visit) x (maintenance personnel per visit) + (PM hours per year) x
(Maintenance personnel).

Time for each visit is the number of repairs to be made times the MTTR for
each repair, if repairs are made in series (a computational sketch follows
these notes).

2. For unique systems that are highly redundant, MTTRS may be specified as
the switch time.
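
A minimal Python sketch of calculations (a) and (c) above, using hypothetical
values; calculation (b) follows the same pattern with the expected number of repair
visits substituted for the 8760/MTBF term.

# All figures are hypothetical.
HOURS_PER_YEAR = 8760.0

# Case (a): immediate maintenance of a continuously operated system.
mtbf = 500.0                 # hours
mttr = 1.0                   # hours per repair
people_per_repair = 2
pm_hours_per_year = 10.0
pm_people = 1

m_mmh_a = (HOURS_PER_YEAR / mtbf) * mttr * people_per_repair \
          + pm_hours_per_year * pm_people
print(f"Case (a): M-MMH = {m_mmh_a:.1f} manhours per year")

# Case (c): continuously operated redundant system allowed to run to failure.
mtbcf = 2000.0               # hours
repairs_per_visit = 2        # failed redundant elements fixed per visit
time_per_visit = repairs_per_visit * mttr   # repairs made in series
people_per_visit = 2

m_mmh_c = (HOURS_PER_YEAR / mtbcf) * time_per_visit * people_per_visit \
          + pm_hours_per_year * pm_people
print(f"Case (c): M-MMH = {m_mmh_c:.1f} manhours per year")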


Topic R3: Quantitative Testability/Diagnostic Requirements

Scope of Requirements
Testability/Diagnostics functions and parameters that apply to each repair level:

• Fault Detection: A process which discovers the existence of faults.

• Fault Isolation: Where a fault is known to exist, a process which identifies
one or more replaceable units where the fault(s) may be located.

• False Alarms: An indication of a fault where no fault exists, caused by such
things as operator error or a Built-in Test (BIT) design deficiency.

Testability/Diagnostic requirements are sometimes expressed in the form of rates
or fractions such as:

• Fraction of Faults Detected (FFD): The quantity of faults detected by BIT
or External Test Equipment (ETE) divided by the quantity of faults detected
by all fault detection means (including manual). (A computational sketch
follows this list.)

- System and Equipment Level - FFD is usually weighted by the
measured or predicted failure rates of the faults or replaceable units.

- Microcircuit Level - FFD is called fault coverage or fault detection
coverage, and all faults are weighted equally. In the fault-tolerant design
community, "fault coverage" almost invariably refers to fault recovery
coverage. This is usually expressed as the conditional probability that,
given a fault has occurred and has been detected, the system will
recover.

• Fault Isolation Resolution (FIR): The probability that any detected fault
can be isolated by BIT or ETE to an ambiguity group of size "x" or less.
(Typically specified for several values of "x").

• False Alarm Rate (FAR): The frequency of occurrence of false alarms.
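
A minimal Python sketch of the failure-rate-weighted FFD calculation described
above, using hypothetical replaceable units and detection flags; fault isolation
resolution is scored the same way, restricted to the detected faults.

# Hypothetical replaceable units: failure rate (per 10^6 hours) and whether
# the fault is detected by BIT or ETE.
units = {
    "power supply": (50.0, True),
    "processor": (30.0, True),
    "I/O card": (20.0, False),    # detectable only by manual means
    "chassis": (5.0, False),
}

total_rate = sum(rate for rate, _ in units.values())
detected_rate = sum(rate for rate, detected in units.values() if detected)

ffd = detected_rate / total_rate
print(f"FFD (weighted by failure rate) = {ffd:.0%}")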

Scope of Diagnostics
• Embedded: Defined as any portion of the weapon system's diagnostic
capability that is an integral part of the prime system.

• External: Any portion of the diagnostic capability that is not embedded.

• Manual: Testing that requires the use of technical manuals, troubleshooting
procedures, and general-purpose test equipment (e.g., voltmeter) by a
maintenance technician.


• Test Program Set (TPS): The complete collection of data and hardware
necessary to test a specific Unit Under Test (UUT) on a specific Automatic
Test Equipment (ATE). As a minimum, a TPS consists of:

- Test vector sets (for a digital UUT)

- Test application programs (software that executes on the ATE and
applies the vectors under the necessary conditions)

- Test fixtures and ATE configuration files

- Documentation

A major element of external diagnostics involves the following:

• Automatic Test Equipment (ATE): The apparatus with which the actual
UUT will be tested. ATE for digital UUTs has the capability to apply
sequences of test vectors under specified timing, loading, and forcing
conditions.

How to Develop Requirements

In theory, weapon system diagnostic requirements should be developed as an
outgrowth of the user developed mission and performance requirements contained
in a Mission Need Statement (MNS), Operational Requirements Document (ORD)
or similar type document.

The following should also be considered:

• Diagnostic capability realistically achievable with the selected hardware
technology and software complexity.

• Tradeoffs involving reliability, maintainability, logistics, weight, power
requirements, and system interruption.


Table R3-1: Typical Testability Values

                                    % Capability   Repair Level

Fault Detection (All Means)         90-100         Organizational
                                    100            Intermediate
                                    100            Depot

Fault Detection: BIT & ETE          90-98          Organizational
                 BIT & ETE          95-98          Intermediate
                 BIT & ETE          95-100         Depot

Fault Isolation Resolution
  Three or fewer LRUs               100            Organizational
  One LRU                           90-95          Organizational
  Four or fewer SRUs                100            Intermediate
  One SRU                           75-85          Intermediate

Notes:

LRU - Line-Replaceable Unit (e.g., Box, Power Supply, etc.)
SRU - Shop-Replaceable Unit (e.g., Circuit Card)
BIT - Built-in-Test
ETE - External Test Equipment


Topic R4: Program Phase Terminology


The R&M tasks required on a program are based on the program's development
phase and intended application (ground, airborne, space, etc.).


Topic R5: Reliability and Maintainability Task Application/Priorities


Topic R6: Contract Data Requirements


In order for the government to receive outputs from the required contractor
performed tasks, the appropriate deliverables must be specified in the Contract
Data Requirements List (CDRL). The content of these CDRL items is specified by
reference to standard Data Item Descriptions. The timing and frequency of the
required reports must be specified in the CDRL.

Table R6-1: Data Items & Delivery Dates


Data Item         Title                                              Recommended Delivery Date

Reliability
DI-R-7079         Reliability Program Plan                           90 days prior to PDR
DI-R-7080         Reliability Status Report                          90 days prior to PDR & bimonthly
DI-R-7083         Sneak Circuit Analysis Report                      30 days prior to PDR & CDR
DI-R-7085A        FMECA Report                                       30 days prior to CDR
DI-R-7086         FMECA Plan                                         90 days prior to PDR
DI-R-7094         Reliability Block Diagram & Math Model Report      30 days prior to PDR & CDR
DI-R-7095         Reliability Prediction & Documentation of          30 days prior to PDR & CDR
                  Supporting Data
DI-R-7100         Reliability Report for Exploratory                 30 days prior to end of contract
                  Development Models
DI-RELI-80247     Thermal Survey Report                              30 days prior to PDR & after testing
DI-RELI-80248     Vibration Survey Report                            90 days prior to start of testing
DI-RELI-80249     Burn-in Test Report                                60 days after end of testing
DI-RELI-80250     Reliability Test Plan                              90 days prior to start of testing
DI-RELI-80251     Reliability Test Procedures                        30 days prior to start of testing
DI-RELI-80252     Reliability Test Report                            60 days after end of testing
DI-RELI-80253     Failed Item Analysis Report                        As required
DI-RELI-80254     Corrective Action Plan                             30 days after end of testing


DI-RELI-80255     Failure Summary & Analysis Report                  Start of testing, monthly
DI-RELI-80685     Critical Item Control Plan                         30 days prior to PDR
DI-MISC-80071     Part Approval Request                              As required

Maintainability
DI-MNTY-80822     Maintainability Program Plan                       90 days prior to PDR
DI-MNTY-80823     Maintainability Status Report                      90 days prior to PDR & bimonthly
DI-MNTY-80824     Data Collection, Analysis & Corrective             As required
                  Action System Reports
DI-MNTY-80825     Maintainability Modeling Report                    30 days prior to PDR & CDR
DI-MNTY-80826     Maintainability Allocations Report                 30 days prior to PDR & CDR
DI-MNTY-80827     Maintainability Predictions Report                 30 days prior to PDR & CDR
DI-MNTY-80828     Maintainability Analysis Report                    30 days prior to PDR & CDR
DI-MNTY-80829     Maintainability Design Criteria Plan               90 days prior to PDR
DI-MNTY-80830     Inputs to the Detailed Maintenance Plan &          As required
                  Logistics Support
DI-MNTY-80831     Maintainability Demonstration Test Plan            90 days prior to start of testing
DI-MNTY-80832     Maintainability Demonstration Report               30 days after end of testing

Testability
DI-R-7080 &
DI-RELI-80255     (See Reliability & Maintainability Data Item List)
DI-MNTY-80831
& 80832           (See Maintainability Data Item List)
DI-T-7198         Testability Program Plan                           90 days prior to PDR
DI-T-7199         Testability Analysis Report                        30 days prior to PDR & CDR


Topic R7: R&M Information for Proposals


Proposal preparation guidance should be provided in the request for proposal
(RFP) package to guide the contractor in providing the information most needed to
properly evaluate the R&M area during source selection. This is part of the
requirements definition process.

Depending on the scope of the R&M requirements specified, information such as the following should be requested for inclusion in the proposal:

• Preliminary R&M analysis/models and estimates of values to be achieved (to at least the line replaceable unit (LRU) level)

• Design approach (including thermal design, parts derating, and parts control)

• R&M organization and its role in the overall program

• Key R&M personnel experience

• Schedules for all R&M tasks

• Description of R&M design guidelines/criteria to be used and trade studies and testing to be performed

Note:

It is critical that qualified R&M personnel take part in the actual evaluation of
technical proposals. The R&M engineer should make sure this happens by
agreement with program management.

Section S
Source Selection

Contents

S1 Proposal Evaluation for Reliability and Maintainability ...................... 31

Insight
The criteria for evaluating contractor proposals have to match the requirements specified in the Request for Proposal (RFP). Contractors must be scored by comparing their proposals to the criteria, not to each other. R&M are generally evaluated as parts of the technical area. The total source selection process includes other nontechnical areas. Air Force policy has emphasized the importance of R&M in the source selection process.

For More Information

AFR 70-15 "Source Selection Policy and Procedures"

AFR 70-30 "Streamlined Source Selection Procedures"


Topic S1: Proposal Evaluation for Reliability and Maintainability

Understanding

• Does the contractor show understanding of the importance of designing in R&M&T in the effort?

• Does the contractor show a firm understanding of R&M&T techniques, methodology, and concepts?

• Does the contractor indicate understanding of the role of testability/diagnostics on maintainability and maintenance?

• Does the contractor understand integrated diagnostics design principles?

• Does the contractor note similar successful R&M&T efforts?

Approach

• Management
- Is an R&M&T manager identified, and are his/her experience and
qualifications adequate in light of the scope of the overall program?

- Are the number and experience of R&M&T personnel assigned to the program, and the number of manhours, adequate, judged in accordance with the scope of the overall program?

- Does the R&M&T group have adequate stature and authority in the organizational framework of the program (e.g., they should not fall under direct control of the design group)?

- Does the R&M&T group have an effective means of crosstalk and feedback of information between design engineers and higher management?

- Does the R&M&T manager have adequate control over R&M&T for subcontractors and vendors?

- Is the testability diagnostics function integrated into the R&M program?

- Does the contractor utilize concurrent engineering practices and is the R&M&T group represented on the team?

• Design
- Are design standards, guidelines and criteria such as part derating,
thermal design, modular construction, Environmental Stress Screening
(ESS), and testability cited?


- Is the contractor's failure reporting and corrective action system (FRACAS) a closed loop controlled process?

- Is there a commitment to the required parts control program (e.g., MIL-M-38510, MIL-STD-883, etc.)? Are approval procedures described/proposed for nonstandard parts?

- Are system design reviews (internal and external) required regularly?

- Are tradeoff studies proposed for critical design areas?

- Is a time-phasing of R&M&T tasks provided along with key program milestones?

- Are areas of R&M&T risk identified and discussed?

- Does the contractor include consideration of software reliability?

- Does the contractor describe his plan for testability/diagnostics design and the potential impacts on reliability and maintainability?

- Does the contractor identify tools to be used to generate test vectors and other diagnostic procedures for BIT and ATE (automatic test equipment)?

• Analysis/Test
- Are methods of analysis and math models presented?

- Are the R&M&T prediction and allocation procedures described?

- Has the time phasing of the R&M&T testing been discussed, and is it consistent with the overall program schedule?

- Is adequate time available for the test type required (such as maximum time for sequential test)?

- Is the ESS program consistent with the requirements in terms of methodology and scheduling?

- Does the contractor make a commitment to predict the design requirement MTBF prior to the start of testing?

- Are the resources (test chambers, special equipment, etc.) needed to perform all required testing identified, and is a commitment made to their availability?


Compliance

• Design
- Does the contractor indicate compliance with all required military
specifications for reliability, maintainability and testability?

- Is adequate justification (models, preliminary estimates, data sources, etc.) provided to back up the claims of meeting R&M&T requirements?

- Is there an explicit commitment to meet any ease of maintenance and preventive maintenance requirements?

- Is there an explicit commitment to meet the Built-in-Test (BIT)/Fault Isolation Test (FIT) requirements (Fraction of Faults Detected (FFD), Fault Isolation Resolution (FIR) and False Alarm Rate (FAR))?

- Is each equipment environmental limitation specified and do these conditions satisfy the system requirements?

- Are all removable modules keyed?

- Will derating requirements be adhered to and are methods of verifying derating requirements discussed?

• Analysis/Test
- Is a commitment made to perform a detailed thermal analysis?

- Will the contractor comply with all R&M&T required analyses?

- Is there an explicit commitment to perform all required environmental stress screening?

- Does the contractor comply with all system level R&M&T test
requirements? Will the contractor demonstrate the R&M&T figures of
merit (MTBF, MTTR, FFD, FIR and FAR) using the specified
accept/reject criteria?

- Does the contractor comply with the specification (or other commonly
specified) failure definitions?

- Does the contractor agree to perform thermal verification tests and derating verification tests?

• Data
- Is there an explicit commitment to deliver and comply with all of the
required R&M&T data items?

Section D
Design

Contents

D1 Part Stress Derating........................................ 37

D2 Thermal Design ............................................... 44

D3 Parts Control ................................................... 46

D4 Review Questions ........................................... 55

D5 Critical Item Checklist..................................... 62

D6 Dormancy Design Control .............................. 63

D7 Surface Mount Technology (SMT) Design..... 66

D8 Power Supply Design Checklist..................... 67

D9 Part Failure Modes and Mechanisms ............ 69

D10 Fiber Optic Design Criteria............................. 73

Insight
Proven design approaches are critical to system R&M success. For many
programs the government requires that certain approaches be used (such as a
particular level of part stress derating). Other programs allow the contractor to
develop and use his own design criteria as long as his end product design meets
the government requirements or is subject to provisions of product performance
agreements (guarantees, warranties, etc.). Regardless of the situation, the R&M
engineer must actively evaluate the contractor design progress.

For More Information

MIL-STD-883 "Test Methods and Procedures for Microelectronics"

MIL-STD-965 "Parts Control Program"

MIL-STD-1521 "Technical Reviews and Audits for Systems,


Equipments, and Computer Software"

MIL-HDBK-251 "Reliability/Design Thermal Applications"

MIL-HDBK-338 "Electronic Reliability Design Handbook"

MIL-HDBK-978 "NASA Parts Application Handbook"

MIL-M-38510 "Microcircuits, General Specification for"

MIL-S-19500 "Semiconductor Devices, General Specification for"

RADC-TR-82-172 "RADC Thermal Guide for Reliability Engineers"

RADC-TR-88-69 "R/M/T Design for Fault Tolerance, Program Manager's


Guide"

RADC-TR-88-110 "Reliability/Maintainability/Testability Design for Dormancy"

RADC-TR-88-124 "Impact of Fiber Optics on System Reliability/Maintainability"

RL-TR-91-39 "Reliability Design for Fault Tolerant Power Supplies"

RL-TR-92-11 "Advanced Technology Component Derating"


Topic D1: Part Stress Derating


The practice of limiting electrical, thermal and mechanical stresses on parts to
levels below their specified ratings is called derating. If a system is expected to be
reliable, one of the major contributing factors must be a conservative design
approach incorporating realistic derating of parts. Table D1-1 defines the key
factors for determining the appropriate level of derating for the given system
constraints. Table D1-2 indicates the specific derating factors for each part type.

Table D1-1: Part Derating Level Determination


Factor                   Criteria                                                             Score

Reliability Challenge    • For proven design, achievable with standard parts/circuits          1
                         • For high reliability requirements, special design features needed   2
                         • For new design challenging the state-of-the-art, new concept        3

System Repair            • For easily accessible, quickly and economically repaired systems    1
                         • For high repair cost, limited access, high skill levels required,
                           very low downtimes allowable                                         2
                         • For nonaccessible repair, or economically unjustifiable repairs      3

Safety                   • For routine safety program, no expected problems                    1
                         • For potential system or equipment high cost damage                  2
                         • For potential jeopardization of life of personnel                   3

Size, Weight             • For no significant design limitation, standard practices            1
                         • For special design features needed, difficult requirements          2
                         • For new concepts needed, severe design limitation                   3

Life Cycle               • For economical repairs, no unusual spare part costs expected        1
                         • For potentially high repair cost or unique cost spares              2
                         • For systems that may require complete substitution                  3

Instructions: Select a score for each factor, sum the scores, and determine the derating level.

Derating Level    Total Score
I                 11 - 15
II                7 - 10
III               6 or less
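
A minimal sketch of applying Table D1-1 is shown below; the factor scores assumed for the example are hypothetical. The fragment simply sums the five factor scores and maps the total to a derating level.

    # Sketch of the Table D1-1 scoring procedure (factor scores are hypothetical).
    scores = {
        "Reliability Challenge": 2,   # high reliability requirements
        "System Repair":         2,   # high repair cost, limited access
        "Safety":                1,   # routine safety program
        "Size, Weight":          2,   # special design features needed
        "Life Cycle":            1,   # economical repairs expected
    }

    total = sum(scores.values())

    if total >= 11:
        level = "I"
    elif total >= 7:
        level = "II"
    else:
        level = "III"

    print(f"Total score = {total}  ->  Derating Level {level}")
    # A total of 8 here maps to Level II; the Level II columns of
    # Table D1-2 would then apply to each part type in the design.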


Table D1-2: Part Derating Levels


All of the percentages provided are of the rated value for the derating parameter,
unless otherwise labeled. Temperature derating is from the maximum rated.

Derating Level
Part Type Derating Parameter I II III
Capacitors
• Film, Mica, Glass DC Voltage 50% 60% 60%
Temp from Max Limit 10°C 10°C 10°C

• Ceramic DC Voltage 50% 60% 60%


Temp from Max Limit 10°C 10°C 10°C

• Electrolytic Aluminum DC Voltage -- -- 80%


Temp from Max Limit -- -- 20°C

• Electrolytic Tantalum DC Voltage 50% 60% 60%


Temp from Max Limit 20°C 20°C 20°C

• Solid Tantalum DC Voltage 50% 60% 60%


Max Operating Temp 85°C 85°C 85°C

• Variable Piston DC Voltage 40% 50% 50%


Temp from Max Limit 10°C 10°C 10°C

• Variable Ceramic DC Voltage 30% 50% 50%


Temp from Max Limit 10°C 10°C 10°C

Connectors

Voltage 50% 70% 70%

Current 50% 70% 70%

Insert Temp from Max Limit 50°C 25°C 25°C

Diodes

• Signal/Switch Forward Current 50% 65% 75%


(Axial Lead) Reverse Voltage 70% 70% 70%
Max Junction Temp 95°C 105°C 125°C

• Voltage Regulator Power Dissipation 50% 60% 70%


Max Junction Temp 95°C 105°C 125°C

• Voltage Reference Max Junction Temp 95°C 105°C 125°C

Diodes (cont'd)

• Transient Suppressor Power Dissipation 50% 60% 70%


Average Current 50% 65% 75%
Max Junction Temp 95°C 105°C 125°C

• Microwave Power Dissipation 50% 60% 70%


Reverse Voltage 70% 70% 70%
Max Junction Temp 95°C 105°C 125°C

• Light Emitting Diode Average Forward Current 50% 65% 75%


(LED) Max Junction Temp 95°C 105°C 125°C

• Schottky/Positive Power Dissipation 50% 60% 70%


Intrinsic Negative Reverse Voltage 70% 70% 70%
(PIN) (Axial Lead) Max Junction Temp 95°C 105°C 125°C

• Power Rectifier Forward Current 50% 65% 75%


Reverse Voltage 70% 70% 70%
Max Junction Temp 95°C 105°C 125°C

Fiber Optics

• Cable Bend Radius 200% 200% 200%


(% of Minimum Rated)
Cable Tension 50% 50% 50%
(% Rated Tensile Strength)
Fiber Tension 20% 20% 20%
(% Proof Test)

Inductors

• Pulse Transformers Operating Current 60% 60% 60%


Dielectric Voltage 50% 50% 50%
Temp from Max Hot Spot 40°C 25°C 15°C

• Coils Operating Current 60% 60% 60%


Dielectric Voltage 50% 50% 50%
Temp from Max Hot Spot 40°C 25°C 15°C

Lamps

• Incandescent Voltage 94% 94% 94%

• Neon Current 94% 94% 94%


Microcircuits: These derating criteria are based on available data and are limited to: 60,000 gates for digital devices, 10,000 transistors for linear devices, and 1 Mbit for memory devices. Microcircuits should not exceed the supplier minimum or maximum rating for supply voltage, nor a junction temperature of 125°C (except GaAs) or the supplier maximum.

Derating Level
Part Type Derating Parameter I II III
Microcircuits
• MOS Digital Supply Voltage +/-3% +/-5% +/-5%
Frequency (% of Max Spec) 80% 80% 80%
Output Current 70% 75% 80%
Fan Out 80% 80% 90%
Max Junction Temp 80°C 110°C 125°C

• MOS Linear Supply Voltage +/-3% +/-5% +/-5%


Input Voltage 60% 70% 70%
Frequency (% of Max Spec) 80% 80% 80%
Output Current 70% 75% 80%
Fan Out 80% 80% 90%
Max Junction Temp 85°C 110°C 125°C

• Bipolar Digital Supply Voltage +/-3% +/-5% +/-5%


Frequency (% of Max Spec) 75% 80% 90%
Output Current 70% 75% 80%
Fan Out 70% 75% 80%
Max Junction Temp 80°C 110°C 125°C

• Bipolar Linear Supply Voltage +/-3% +/-5% +/-5%


Input Voltage 60% 70% 70%
Frequency (% of Max Spec) 75% 80% 90%
Output Current 70% 75% 80%
Fan Out 70% 75% 80%
Max Junction Temp 85°C 110°C 125°C

Microprocessors
• MOS Supply Voltage +/-3% +/-5% +/-5%
Frequency (% of Max Spec) 80% 80% 80%
Output Current 70% 75% 80%
Fan Out 80% 80% 90%
Max Junction Temp, 8-BIT 120°C 125°C 125°C
Max Junction Temp, 16-BIT 90°C 125°C 125°C
Max Junction Temp, 32-BIT 60°C 100°C 125°C

Microprocessors (cont'd)
• Bipolar Supply Voltage +/-3% +/-5% +/-5%
Frequency (% of Max Spec) 75% 80% 90%
Output Current 70% 75% 80%
Fan Out 70% 75% 80%
Max Junction Temp, 8-BIT 80°C 110°C 125°C
Max Junction Temp, 16-BIT 70°C 110°C 125°C
Max Junction Temp, 32-BIT 60°C 100°C 125°C

Memory/PROM
• MOS Supply Voltage +/-3% +/-5% +/-5%
Frequency (% of Max Spec) 80% 80% 90%
Output Current 70% 75% 80%
Max Junction Temp 125°C 125°C 125°C
Max Write Cycles (EEPROM) 13,000 105,000 300,000

• Bipolar Fixed Supply Voltage +/-3% +/-5% +/-5%


Frequency (% of Max Spec) 80% 90% 95%
Output Current 70% 75% 80%
Max Junction Temp 125°C 125°C 125°C

Microcircuits, GaAs

• MMIC/Digital Max Channel Temp 90°C 125°C 150°C

Miscellaneous
• Circuit Breakers Current 75% 80% 80%

• Fuses Current 50% 50% 50%

Optoelectronic Devices
• Photo Transistor Max Junction Temp 95°C 105°C 125°C

• Avalanche Photo Max Junction Temp 95°C 105°C 125°C


Diode (APD)

• Photo Diode, PIN Reverse Voltage 70% 70% 70%


(Positive Intrinsic Max Junction Temp 95°C 105°C 125°C
Negative)

• Injection Laser Diode Power Output 50% 60% 70%


Max Junction Temp 95°C 105°C 110°C

Relays
Resistive Load Current 50% 75% 75%
Capacitive Load Current 50% 75% 75%
Inductive Load Current 35% 40% 40%
Contact Power 40% 50% 50%
Temp from Max Limit 20°C 20°C 20°C

Resistors
• Composition Power Dissipation 50% 50% 50%
Temp from Max Limit 30°C 30°C 30°C

• Film Power Dissipation 50% 50% 50%


Temp from Max Limit 40°C 40°C 40°C

• Variable Power Dissipation 50% 50% 50%


Temp from Max Limit 45°C 35°C 35°C

• Thermistor Power Dissipation 50% 50% 50%


Temp from Max Limit 20°C 20°C 20°C

• Wirewound Accurate Power Dissipation 50% 50% 50%


Temp from Max Limit 10°C 10°C 10°C

• Wirewound Power Power Dissipation 50% 50% 50%


Temp from Max Limit 125°C 125°C 125°C

• Thick/Thin Film Power 50% 50% 50%


Voltage 75% 75% 75%
Max Operating Temp 80°C 80°C 80°C

Transistors (Power)
• Silicon Bipolar Power Dissipation 50% 60% 70%
Vce, Collector-Emitter 70% 75% 80%
Voltage
Ic, Collector Current 60% 65% 70%
Breakdown Voltage 65% 85% 90%
Max Junction Temp 95°C 125°C 135°C

• GaAs MESFET Power Dissipation 50% 60% 70%


Breakdown Voltage 60% 70% 70%
Max Channel Temp 85°C 100°C 125°C

• Silicon MOSFET Power Dissipation 50% 65% 75%


Breakdown Voltage 60% 70% 75%
Max Junction Temp 95°C 120°C 140°C

Transistors (RF Pulse)
• Silicon Bipolar Power Dissipation 50% 60% 70%
Vce, Collector-Emitter 70% 70% 70%
Voltage
Ic, Collector Current 60% 60% 60%
Breakdown Voltage 65% 85% 90%
Max Junction Temp 95°C 125°C 135°C

• GaAs MESFET Power Dissipation 50% 60% 70%


Breakdown Voltage 60% 70% 70%
Max Channel Temp 85°C 100°C 125°C

Transistors (Thyristors)
• SCR & TRIAC On-State Current 50% 70% 70%
Off-State Voltage 70% 70% 70%
Max Junction Temp 95°C 105°C 125°C

Tubes
Power Output 80% 80% 80%
Power Reflected 50% 50% 50%
Duty Cycle 75% 75% 75%

Rotating Devices
Bearing Load 75% 90% 90%
Temp from Max Limit 40°C 25°C 15°C

Surface Acoustic Wave Device (SAW)


Input Power from Max Limit 13dBm 13dBm 13dBm
(Freq > 500 MHz)
Input Power from Max Limit 18dBm 18dBm 18dBm
(Freq < 500 MHz)
Operating Temperature 125°C 125°C 125°C

Switches
Resistive Load Current 50% 75% 75%
Capacitive Load Current 50% 75% 75%
Inductive Load Current 35% 40% 40%
Contact Power 40% 50% 50%


Topic D2: Thermal Design


One of the important variables in system reliability is temperature. Therefore, the
thermal design of a system must be planned and evaluated. Full discussion of this
topic is beyond the scope of this document but it is important to point out to a
reliability engineer what limitations there are for common thermal design
approaches. Table D2-1 summarizes fundamental thermal design issues which
should be addressed during system development. Table D2-2 summarizes the
most common cooling techniques for electronics and their limitations. Analysis
Topic A14 provides a basic method of estimating microcircuit junction temperatures
for these cooling techniques.

Table D2-1: Thermal Design Issues


• Thermal Requirements: Has a thermal analysis requirement been incorporated into
  the system specification?
  Concern: If not specified, a formal analysis probably will not be performed and there
  will be no possibility of independent review.

• Cooling Allocation: Has cooling been allocated down to each subsystem, box and LRU?
  Concern: Cooling allocations should be made to the box level (or below) and refined
  as the thermal design matures.

• Preliminary Thermal Analysis: Has a preliminary analysis been performed using the
  manufacturer's specifications for power outputs?
  Concern: This usually represents the worst case because manufacturers specify
  maximum power dissipations.

• Detailed Thermal Analysis: Has a detailed analysis been performed using actual
  power dissipations?
  Concern: The preliminary analysis needs to be refined using actual power dissipations.
  Results need to feed into reliability predictions and derating analysis.

• Thermal Analysis Assumptions:
  - Have junction-to-case thermal resistance values been fully justified?
    Concern: Optimistic values can have a significant effect on results. Thermal
    resistances from MIL-M-38510 should be used unless other values are justified.
  - Does the thermal analysis make use of junction-to-ambient thermal resistances?
    Concern: Junction-to-ambient values should not be used since they are highly
    dependent on coolant flow conditions.
  - Are all modes and paths of heat transfer considered in the analysis?
    Concern: The three modes are convection, conduction, and radiation. Rationale
    should be provided for omitting any heat transfer modes or paths.
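
A rough steady-state junction temperature estimate follows directly from the quantities questioned above (case temperature, junction-to-case thermal resistance and power dissipation). The sketch below is a generic approximation, not the method of Analysis Topic A14, and all of the numbers are hypothetical.

    # Sketch: steady-state junction temperature estimate (hypothetical values).
    theta_jc = 12.0   # junction-to-case thermal resistance, degrees C per watt
    power    = 1.5    # device power dissipation, watts
    t_case   = 70.0   # predicted case temperature, degrees C

    t_junction = t_case + theta_jc * power
    print(f"Estimated junction temperature: {t_junction:.1f} C")
    # 88.0 C here; compare against the derating limits of Table D1-2
    # (e.g., 110 C maximum junction temperature for Level II bipolar digital parts).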


Table D2-2: Cooling Technique Limitations


Cooling Technique                          Maximum Cooling Capacity

Impingement
  Free Convection
    Circuit Cards                          0.5 W/in²
    Well Ventilated Box                    300 W/ft³
    Poorly Ventilated Box                  100 W/ft³
  Forced Air
    Circuit Cards                          2 W/in²
    Box                                    1000 W/ft³

Coldwall                                   1 W/in²

Flow-Through                               2 W/in²

Example: A 9" x 5" printed circuit board using free convection cooling would be
limited to about 22.5 watts.
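
The example is simply the card area multiplied by the free convection limit from Table D2-2 (0.5 W/in² x 45 in² = 22.5 W). A short sketch of that check, with a hypothetical card dissipation, is shown below.

    # Sketch: comparing a card's dissipation against the Table D2-2 limit
    # for free convection cooling (0.5 W per square inch of card area).
    length_in, width_in = 9.0, 5.0        # card dimensions, inches
    card_dissipation_w  = 30.0            # predicted card dissipation, watts (hypothetical)

    limit_w = 0.5 * length_in * width_in  # 22.5 W for a 9" x 5" card
    if card_dissipation_w > limit_w:
        print(f"{card_dissipation_w} W exceeds the {limit_w} W free convection limit;")
        print("consider forced air (2 W/in^2) or another technique from Table D2-2.")
    else:
        print(f"{card_dissipation_w} W is within the {limit_w} W free convection limit.")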


Topic D3: Parts Control

Managing a parts control program is a highly specialized activity and does not
typically fall under the system's R&M engineer's responsibility. However, because
of the interrelationship of parts control and good system reliability, it is important
that R&M engineers and program managers have a general understanding of the
parts control discipline. Parts control questions which are often asked include:

• Why do parts control?

• What are the various "tools" to accomplish parts control?

• What is a military specification "Mil-Spec" qualified part, a MIL-STD-883 part, a Standard Military Drawing (SMD) part, and a vendor equivalent part?

Why do parts control? Since the invention of semiconductors, users could never
be sure that a device purchased from one manufacturer would be an exact
replacement for the same device obtained from another supplier. Major differences
in device processing and electrical testing existed among suppliers. Because of
the importance of semiconductors to military programs, the government introduced
standard methods of testing and screening devices in 1968. Devices which were
tested and screened to these methods were then placed on a government approval
list called the qualified parts list (QPL). Through this screening and testing process,
a part with known quality and performance characteristics is produced. The
philosophy for assuring quality product has evolved since 1968 and now there are
two methodologies in place, the original QPL program and the new Qualified
Manufacturer's List (QML) program (established 1985). The QML approach defines
a procedure that certifies and qualifies the manufacturing processes and materials
of potential vendors as opposed to the individual qualification of devices (QPL).
Hence, all devices produced and tested using the QML certified/qualified
technology flow are qualified products. Parts whose technology flows are qualified to this system are listed on the Qualified Manufacturer's List. Both hybrid and monolithic microcircuits are covered under this system.

What are the various "tools" to accomplish parts control? The government
has subdivided parts into three basic classifications: (1) microelectronics, (2)
semiconductors (e.g. transistors, diodes, etc.) and (3) electrical parts (e.g.
switches, connectors, capacitors, resistors, etc.). For each class, part specification
and test method documents have been developed. Table D3-1 summarizes key
documents and their content.

What is a military specification "Mil-Spec" qualified part, a MIL-STD-883 part, a Standard Military Drawing (SMD) part, and a vendor equivalent part? The
primary difference in these descriptions is that each of these part classes has
undergone different levels of screening and certification. Certification involves
specifying and documenting the part manufacturing process. It also involves
government and manufacturer agreement on a detailed part specification. This
ensures consistent part quality and known performance. Table D3-2 summarizes


common classes of parts and what these classifications signify. Table D3-3
summarizes MIL-STD-883D screening procedures and is included to give the
reader a feel for the wide range of tests required. These screening requirements
are similar for the respective systems defined in Table D3-2. Topic A11, Table
A11-1 shows the impact of the various part designations on system reliability.

Table D3-1: Key Parts Control Documents and Their Content

Document Title Content

MIL-M-38510 General Specification Provides detailed specification requirements


for Microcircuits in the form of "slash sheets" for several
hundred of the most commonly used
microcircuits. Covers screening requirements
(referenced to MIL-STD-883), electrical
testing, quality conformance, physical
dimensions, configuration control for critical
manufacturing processing steps and
production line certification.

MIL-I-38535 General Specification Provides detailed specification requirements


for Integrated Circuits in the form of standard military drawings
(Microcircuits) (SMDs). Quality assurance requirements
Manufacturing are defined for all microcircuits built on a
manufacturing line which is controlled
through a manufacturer's quality manage-
ment program and has been certified and
qualified in accordance with the require-
ments specified. The manufacturing line
must be a stable process flow for all
microcircuits. Two levels of product
assurance (including radiation hardness
assurance) are provided for in this
specification, avionics and space. The
certification and qualification sections
specified outline the requirements to be met
by a manufacturer to be listed on a Qualified
Manufacturer's List (QML). After listing of a
technology flow on a QML, the manufacturer
must continually meet or improve the
established baseline of certified and qualified
procedures through his quality management
program and the technology review board.


MIL-H-38534 General Specification Provides detailed specification requirements


for Hybrid Microcircuits in the form of Standard Military Drawings
(SMDs) for standard hybrid products, and
Source Control Drawings (SCDs) using the
SMD boilerplate for custom hybrids. Covers
requirements for screening (referenced to
MIL-STD-883) quality conformance
inspections, configuration control, rework
limitations and manufacturing line
certification procedures.

MIL-STD-883 Test Methods and Provides uniform methods and procedures


Procedures for for testing microelectronic devices.
Microelectronics Structured into five classes of test methods:
1000 class addresses environmental tests,
2000 class addresses mechanical tests,
3000 class addresses electrical tests for
digital circuits, 4000 class addresses
electrical tests for linear circuits, and 5000
class addresses test procedures. The tests
covered include moisture resistance, seal
test, neutron irradiation, shock and
acceleration tests, dimensional tests,
input/output current tests, and screening test
procedures to name a few. Two test levels
are described: Class B (Class H, MIL-H-
38534/Class Q, MIL-I-38535) and Class S
(Class K, MIL-H-38534/Class V, MIL-I-
38535). Class S is geared toward space
qualified parts and requires a host of tests
not performed on Class B parts (e.g. wafer
lot acceptance, 100% nondestructive bond
pull, particle impact noise detection,
serialization, etc.).

MIL-S-19500 General Specification Provides detailed specification sheets


for Semiconductors establishing general and specific
requirements including electrical
characteristics, mechanical characteristics,
qualification requirements, inspection
procedures and test methods.


MIL-STD-750 Test Methods for Provides uniform methods and procedures


Semiconductor Devices for testing semiconductors. Structured into
five classes of test methods: 1000 class
addresses environmental tests, 2000 class
addresses mechanical characteristics, 3000
class addresses electrical characteristics,
3100 class addresses circuit performance
and thermal resistance measurements, and
the 3200 class addresses low frequency
tests.

MIL-STD-202 Test Methods for Provides uniform methods for testing


Electronic and Electrical electronic and electrical parts. Structured
Parts into three classes of test methods: 100 class
addresses environmental tests, 200 class
addresses physical characteristic tests and
300 class addresses electrical characteristic
tests. These tests are not tied to a single
part specification document as with
microelectronics and semiconductors, but
rather, numerous specifications for various
component types.

MIL-STD-965 Parts Control Program Provides control procedures to be used in


the design and development of military
equipment, including the submission, review
and approval of a Program Parts Selection
List. Generally, an overall guide for the
implementation and management of a parts
control program. The document provides for
two basic management procedures.
Procedure I is applicable to a majority of
programs and does not make use of a formal
parts control board. Procedure II requires a
formal parts control board and is
recommended for consideration where there
is an aggregation of contractor/
subcontractors.


Table D3-2: Microelectronics Classifications and Descriptions

Part Classification Part Classification Description

JAN or MIL-M-38510 Parts These parts have a detailed specification (slash


sheet) in MIL-M-38510 which controls all
mechanical, electrical, and functional parameters
of the part. Additionally, the manufacturing
process flow is certified by DoD's Defense
Electronics Supply Center (DESC), the devices are
screened to MIL-STD-883 test method
requirements, and are subjected to rigorous quality
conformance testing. A manufacturer, once
certified by DESC, can then qualify products to the
specification and have these products listed on the
qualified products list. The product specification
(performance and mechanical) is contained in a
M38510/0000 "slash sheet" or one part number
SMD. Standardization is achieved through many
manufacturers building product to the same "one
part SMD" or "slash sheet" and testing them using
the standard test methods found in MIL-STD-883.

QML (Qualified Manufacturers Listing) Device performance requirements (electrical,


or MIL-I-38535 Parts thermal, and mechanical) are detailed in the
Standard Military Drawing (SMD). The qualifying
activity or its agent certifies and qualifies the
manufacturers process flows. Once certified and
qualified, the manufacturer may produce multiple
device types on that flow as MIL-I-38535 compliant
parts. Since the process is considered qualified,
individual products do not have to be qualified
individually for selected quality conformance
inspections, except Class V (Space) product.
Where standard tests are used by the
manufacturer to qualify the process, the use of
American Society for Testing and Materials
(ASTM), MIL-STD-883 or Joint Electron Device
Engineering Council (JEDEC) specifications are
suggested. The manufacturer may also document
and use new tests developed to improve quality
and reliability. Manufacturers are required to
identify a Technical Review Board (TRB) within
their company. It is the duty of the TRB to approve
all changes in the process and report to DESC on
a regular basis. Changes in the process and
products are reviewed annually by a team of users,
the qualifying activity and the preparing activity.
Progress in meeting company established yield,
Statistical Process Control (SPC), and reliability
goals are reported at this meeting. Parts produced
under MIL-I-38535 are listed on the QML.


QML (Hybrids) / CH or MIL-H-38534 The requirements for a hybrid microcircuit are set
Parts forth in Standard Military Drawings (SMDs) or
Source Control Drawings (SCDs). The qualifying
activity qualifies the manufacturer's process flows
and once certified and qualified may produce
multiple device types on that flow as MIL-H-38534
compliant parts. Test methods are defined in MIL-
STD-883. All major changes to the process flows
require qualifying activity approval. Parts produced
under this system are listed in the Qualified
Manufacturer's List.

Standard Military Drawing (Class M) This system evolved from various manufacturer's
and MIL-STD-883 Compliant Devices in-house versions of Test Methods 5004 and 5005
of MIL-STD-883. It was an informal and
inconsistent system in the late 70's and early 80's
known as MIL equivalent, or look alikes.
Manufacturers were falsely advertising these parts
as equivalent to JAN parts, without basis, because
most critical JAN requirements (e.g. audits,
qualification, quality conformance inspection tests)
were not followed. In some cases, not all the
required JAN testing was being performed by the
manufacturer. This resulted in the government
incorporating a truth in advertising paragraph in
MIL-STD-883 (i.e. Paragraph 1.2.1). This required
the manufacturer to self-certify that all 1.2.1
requirements, a subset of the MIL-M-38510
requirements, were being met if advertised as
meeting MIL-STD-883 requirements. DESC has
begun an audit program to verify the
manufacturers self compliance to MIL-STD-883,
Paragraph 1.2.1 compliant product. The primary
difference between Standardized Military Drawing
(SMD) product and MIL-STD-883 compliant
product is that SMD (Class M) sources are
approved by the Defense Electronics Supply
Center (DESC). DESC manages the procurement
document (SMD) and approves the sources by
accepting their certificate of compliance to the
Paragraph 1.2.1 requirements. The MIL-STD-883
compliant product is produced to uncontrolled
vendor data books and the government has no
control over compliancy claims. Certification and
qualification by DESC is not required for either
system.


Vendor Equivalent Parts Each parts supplier has a set of commercial


specifications which they use for manufacturing
product for general sale. Usually the product
specifications are included on a data sheet which
is then collected into a catalog for sale to the
general public. There is a wide spectrum of quality
available depending on the quality standards
applied by the company. Generally, these parts
have been tested to the vendor's equivalent MIL-
STD-883 test methodology. The vendor may or
may not modify the scope of the tests and a careful
analysis is required to determine how similar the
vendor's tests are to MIL-STD-883 tests.


Topic D4: Review Questions


Program and design reviews are key vehicles for measuring development progress
and preventing costly redesigns. Participation by government individuals
knowledgeable in R&M is critical to provoking discussions that bring out the issues
important to R&M success. Of course, the questions to be posed to the
development contractor depend on the timing of the review as indicated below.
Action Items should be assigned at the reviews based on open R&M issues and the
reliability engineer must follow up to ensure that they're resolved satisfactorily.

Table D4-1: Major Program Reviews


System Requirements Review (SRR)
  Purpose: To ensure a complete understanding of system specification and statement of
  work requirements. This is usually done by means of a detailed expansion and review
  of the contractor's technical proposal.
  R&M Engineer's Role: Discuss the performance of all required R&M tasks and
  requirements with contractor R&M personnel. Topics such as the contractor's overall
  reliability program plan, data items and delivery schedule are usually discussed.

Preliminary Design Review (PDR)
  Purpose: To evaluate progress and technical adequacy of the selected design approach
  prior to the detailed design process.
  R&M Engineer's Role: Review preliminary R&M modeling, allocations and predictions to
  ensure adequacy in meeting R&M requirements. Discuss status of other R&M tasks such
  as parts control, derating, thermal design and reliability critical items.

Critical Design Review (CDR)
  Purpose: To ensure that the detailed design satisfies the requirements of the system
  specification before freezing the design for production or field testing.
  R&M Engineer's Role: Review the final reliability analysis and modeling to ensure R&M
  requirements are met. Discuss parts control program status and military part
  procurement lead time requirements. Review adequacy of the final thermal analysis
  and derating. Discuss R&M testing.

Test Readiness Review (TRR)
  Purpose: To ensure that all CDR problems have been satisfactorily resolved and to
  determine if the design is mature enough to start formal testing.
  R&M Engineer's Role: Review R&M test plans and procedures to ensure acceptable
  ground rules and compliance with requirements.

Production Readiness Review (PRR)
  Purpose: To review test results and determine whether or not the design is
  satisfactory for production.
  R&M Engineer's Role: Discuss R&M testing results and ensure any design deficiencies
  found during testing have been corrected. Discuss production quality assurance
  measures.


Table D4-2: Design Review Checklist

Question                                   Review Where Usually Most Applicable          Remarks
                                           SRR    PDR    CDR    TRR    PRR
R&M Management
What are the avenues of X X R&M engineering should
technical interchange participate at all engineering
between the R&M group and group meetings where R&M
other engineering groups is effected. Easy avenues of
(e.g., Design, Systems technical interchange
Engineering, ILS, between the electrical design
Procurement, and Test and group and other groups such
Evaluation)? as thermal engineering must
exist.

Does the reliability group X X X X Membership or an option to


have membership and a voice an opinion is essential
voice in decisions of the if the failure tracking and
Material Review Board, corrective action loop is to be
Failure Review Board, and completed.
Engineering Change
Review Board?

Is the contractor and X X X Incoming part types should


subcontractor(s) a member be checked against the
of the Government Industry GIDEP ALERT data base
Data Exchange Program and incoming ALERTS
(GIDEP)? What is the should be checked against
procedure for comparing the system parts list. (GIDEP
parts on the ALERT list to ALERTS are notices of
parts used in the system? deficient parts, materials or
processes).

Are reliability critical items X X Critical parts are usually


given special attention in the defined by contract or by
form of special analysis, MIL-STD-785. Methods of
testing or destructive tracking critical parts must be
laboratory evaluation? identified by the contractor.
See Topic D5 for a critical
items checklist.

Do the purchase orders X X Requirements should include


require vendors to deliver verification by analysis or
specified levels of R&M&T test.
based on allocation of higher
level requirements?

Does the reliability group X X X Failure analysis is essential
have access to component to determine the cause and
and failure analysis experts effect of failed components.
and how are they integrated
into the program?

Is there adequate X X
communication between
testability design engineers
and the electrical design
group to ensure that
testability considerations are
worked into the upfront
design?

Are JAN microcircuits (MIL- X X Part quality in order of


M-38510 or MIL-I-38535) and preference: MIL-M-38510, or
semiconductors (MIL-S- MIL-I-38535 devices; MIL-
19500) being used wherever STD-883 Class B; MIL-STD-
possible and are 883 vendor equivalent;
procurement lead times for commercial hermetically
these devices adequate? sealed. JAN parts usually
require longer procurement
times (3 to 6 months) which
sometimes causes
commercial parts to be
forced into the design.

Where nonstandard parts are X X X Specification control


used, are they procured via a drawings should specify
specification control drawing reliability, environment and
(SCD) and do they have at testing requirements.
least two suppliers? Are
methods for nonstandard
part approval clearly
established and is there a
clear understanding of what
constitutes a standard and
nonstandard part?

Has an up-to-date preferred X X DESC and DISC establish


parts selection list (PPSL) baseline PPSLs which
been established for use by should be the basis of the
designers? contractor's list.


R&M Design
Do the R&M&T models X X
accurately reflect the system
configuration, its modes of
operation, duty cycles, and
implementation of fault
tolerance?

Do predictions meet X X X If not, better cooling, part


numerical R&M specification quality and/ or redundancy
requirements? Are prediction should be considered.
procedures in accordance
with requirements?

Have R&M allocations been X X Weighted reliability allo-


made to the LRU level or cations should be made to
below? Do reliability pre- lower levels based on the
dictions compare favorably to upper test MTBF (θ0), or
the allocation? similar measure.

Does the testability analysis X X If not, alternate design


show that numerical concepts must consider
testability requirements will including more automated
be met for the organizational, features.
intermediate and depot repair
levels?

Have tradeoff studies been X X Typical tradeoffs might


performed in the areas of include redundancy levels,
R&M&T? weight, power, volume,
complexity, acquisition cost,
life cycle cost.

Has a thermal analysis been X X Thermal analysis is essential


performed to ensure an to a complete program.
adequate cooling technique
is used and have the
temperature results been
factored into the reliability
analysis?

Has piece part placement X X For example, high power


been analyzed to ensure that dissipation components such
high dissipating parts are as large power resistors,
placed away from heat diodes and transformers
sensitive parts? should be investigated.

Have methods been established to ensure that operating temperatures of
off-the-shelf equipment will be within specified limits? (X X)
Remarks: Reference environmental requirements in the system specification.

Do parts used in the design meet system environmental requirements? (X X)
Remarks: Temperature range for most military parts is -55°C to +125°C. Temperature
range for most commercial parts (plastic) is 0°C to 70°C.

Are there clearly established derating criteria for all part types used in the
design and is there a clear procedure for monitoring and enforcing these criteria?
(X X X)
Remarks: The part derating levels are a function of program type but should be at
least Level III in Topic D1.

Are temperature overheat sensors included in the system design? (X X)

Is there a clear procedure for the identification of parts not meeting the derating
criteria? (X X X)
Remarks: A tradeoff analysis should be performed on parts not meeting derating
criteria to determine if a redesign to lower stress is appropriate.

Will part derating verification tests be performed? (X)
Remarks: Depending on system criticality, 3 to 7 percent of the system's parts
should undergo stress verification. No more than 30 percent of the tested parts
should be passive parts (resistors, capacitors, etc.).

Have limited life parts and preventive maintenance tasks been identified, and
inspection and replacement requirements specified? (X X)
Remarks: For example, inspection items may include waveguide couplers, rotary
joints, switches, bearings, tubes and connectors. Typical Preventive Maintenance
(PM) items include air filters, lubrication, oil changes, batteries, belts, etc.

Have single points of failure been identified, and their effects determined? (X X)
Remarks: Important for identifying areas where redundancy should be implemented
and to assist in ranking the most serious failure modes for establishing a critical
items list.

Have compensating features been identified for those single points of failure where
complete elimination of the failure mode is impractical? (X X)
Remarks: Compensating features could include increased part quality, increased
testability, additional screening, fail safe design provisions, etc.

Have areas where fault ambiguity may exist been identified? Have alternative
methods of isolation and checkout (e.g., semi-automatic, manual, repetitive
replacement, etc.) been identified for these areas? (X X)
Remarks: Additional test nodes must be considered to break ambiguity groups.

For each maintenance level, has a decision been made for each item on how
built-in-test, automatic test equipment, and general purpose electronic test
equipment will support fault detection and isolation? (X X)

Are features being incorporated into the testability design to control false
alarms? (X X)
Remarks: Typical features might include definition of test tolerances, transient
monitoring and control, multiple run decision logic, environmental effects
filtering and identification, etc.

R&M Testing

Is there a failure reporting and corrective action system (FRACAS) in place, and
does it account for failures occurring during all phases of testing? (X X X)
Remarks: FRACAS should include data from incoming inspection, development testing,
equipment integration testing and R&M testing. FRACAS should be "closed loop,"
emphasizing corrective action.

Is there a failure analysis capability and will failures be subjected to a detailed
analysis? (X X X)
Remarks: Contractor should identify criteria used to determine which failures will
be analyzed.

Are subcontractors subjected to the same FRACAS requirements, and will their
failure analysis reports be included with the prime contractor's reports? (X X X)

Does the reliability demonstration test simulate the operating profile seen in the
field and will all modes of equipment operation be tested over the required
environmental extremes? (X X)
Remarks: The test must simulate the operational profile and modes to have valid
results.

Does the maintainability and testability demonstration test simulate realistic
failures and is the candidate task list sufficient to reduce bias? (X X)
Remarks: Candidate lists should be four to ten times the size of the test sample.

Are relevant and nonrelevant failure definitions clear and agreed upon? (X X)
Remarks: See Topic T9 for failure definitions.

Are equipment performance checks to be performed during testing clearly defined and
has the information to be recorded in the test log been clearly defined and agreed
upon? (X)
Remarks: Items such as temperature variations, start/stop of vibration, event
occurrence times and a detailed description of system recovery after failure should
be included as a minimum.

Do preliminary plans for ESS meet the required needs? (X X)
Remarks: Temp. and random vibration are the most effective screens. At module
level, perform 20 to 40 temp. cycles per module. At higher assembly levels, perform
4 to 20 cycles. (See RADC-TR-86-149, "ESS," DOD-HDBK-344, "Environmental Stress
Screening of Electronic Equipment," and Topics T1-T3 for guidance.)

ROME LABORATORY'S RELIABILITY ENGINEER'S TOOLKIT 61

Copies of this Toolkit may be downloaded free from quanterion.com. Many of the tools in
this Toolkit are implemented in the “Quanterion Automated Reliability Toolkit” (QuART),
which can be download a free from quanterion.com.
DESIGN - TOPIC D5

Topic D5: Critical Item Checklist

Major Concerns and Comments

• Has the contractor developed formal policies and procedures for identification
  and control?
  Comment: Policies should be distributed to design, manufacturing, inspection and
  test personnel.

• Are the procedures implemented at the initial design stage and do they continue
  through the final acceptance period?
  Comment: The program has to start early so that safety related items can be
  minimized.

• Are periodic reviews planned to update the list and controls?
  Comment: Reviews at SRR, PDR, and CDR must be considered.

• Has an FMEA been performed on each critical item?
  Comment: Failure modes need to be identified so that control procedures can be
  developed.

• Are compensating features included in the design?
  Comment: Features such as safety margins, overstress testing, special checkouts
  should be considered.

• Does the contractor's control plan eliminate or minimize the reliability risk?
  Comment: Development of a list of critical items is only half the solution;
  controls such as stress tests, design margins, duty cycles, and others must be
  considered.

• As a minimum, are the following criticality factors considered:
  - Failures jeopardizing safety
  - Restrictions on limited useful life
  - Design exceeding derating limits
  - Single sources for parts
  - Historically failure prone items
  - Stringent tolerances for manufacturing or performance
  - Single failure points that disrupt mission performance
  Comment: A list of critical items, personnel responsible for monitoring and
  controlling, and review procedures must be established. Other application unique
  critical items should be identified by the procuring activity.

DESIGN - TOPIC D6

Topic D6: Dormancy Design Control


Dormancy design control is important in the life cycle of a weapon system because,
after an equipment has been installed or stored in an arsenal, the predominant
portion of its life cycle is in the dormant mode. The main problems are the lack of
dormancy related design guides and control methods to maintain or assure system
reliability in storage. Questions often asked and seldom answered are:
• Most important stresses? Mechanical, chemical, and low thermal; the
synergism of these three stresses is critical.

• Most significant failure mechanisms? Failures related to latent manufacturing
  defects, corrosion, and mechanical fracture, with most failures being the result
  of latent manufacturing defects rather than specific aging mechanisms.

• Types of failure? Most failures that occur during nonoperating periods are
of the same basic kind as those found in the operating mode, though
precipitated at a slower rate.

• Most important factor? Moisture is the single most important factor affecting
  long term nonoperating reliability. All possible steps should be taken to
  eliminate it from electronic devices. Hygroscopic materials should be avoided or
  protected against accumulation of excess moisture.

• Materials to avoid? Avoid materials sensitive to cold flow and creep as well as
  metalized and non-metallic finishes which have flaking characteristics. Avoid the
  use of lubricants; if required, use dry lubricants such as graphite. Do not use
  teflon gaskets in lieu of conventional rubber gaskets; better yet, use silicone
  based rubber gaskets.

Storage Guidelines
• Do not test the equipment: Periodic testing results in failures rather than
higher states of readiness. Historical data on missile systems that were
stored and tested periodically shows that failures were introduced into the
equipment as a result of the testing process. Causes of the failures were test
procedures, test equipment and operator errors. Main guidelines are:

- Disconnect all power

- Ground all units and components

- Pressurize all coax waveguides: Use nitrogen to prevent moisture and corrosion.

- Maintain temperature at 50°F +/- 5°F: At least drain all equipment of water to
  prevent freezing or broken pipes.

- Control relative humidity to 50% +/- 5%: Reduces corrosion and prevents
  electrostatic discharge failure.

- Periodically recharge batteries


- Protect against rodents: Squirrels have chewed cables, mice have nested in
  electronic cabinets and porcupines have destroyed support structures (wood).
  Door/window seals, traps/poison and frequent inspection protect against these
  rodents.

Protective and Control Measures

Materials

• Mechanical items: Use proper finishes for materials, nonabsorbent materials for
  gasketing, sealing of lubricated surfaces and assemblies, and drain holes for
  water run-off.

• Electronic and electrical items: Use nonporous insulating materials, impregnate
  cut edges on plastic with moisture resistant varnish or resin, seal components
  with moving parts and perforate sleeving over cabled wire to avoid the
  accumulation of condensation.

• Electromagnetic items: Impregnation of windings with moisture proof varnish,
  encapsulation, or hermetic sealing, and use of alumina insulators.

• Thermal items: Use nonhygroscopic materials and hermetic sealing.

• Finishes: Avoidance of hygroscopic or porous materials; impregnate all capillary
  edges with wax, varnish or resin.

Parts

• Use parts with histories of demonstrated successful aging.

• Use only hermetically sealed semiconductors.

• Do not use semiconductors and microcircuits that contain nichrome-deposited
  resistors.

• Select parts that use mono-metallization to avoid galvanic corrosion.

• Do not seal chlorine or other halogen-containing materials within any circuitry
  components.

• Avoid the use of variable resistors, capacitors, inductors, or potentiometers.

• Avoid the use of electromechanical relays.

• Avoid attachments and connections that depend on spring action for
  effectiveness.


Table D6-1: Dormant Part Failure Mechanisms

Type Mechanism % Failure Mode Accelerating Factor

Microcircuit
MOS Surface Anomalies 35-70 Degradation Moisture, Temp.
Wire Bond 10-20 Open Vibration
Bipolar Seal Defects 10-30 Degradation Shock, Vibration
Wire Bond 15-35 Open Vibration

Transistor
Signal Contamination 15-45 Degradation Moisture, Temp.
Header Defects 10-30 Drift Shock, Vibration
FET Contamination 10-50 Degradation Moisture, Temp.
Corrosion 15-25 Drift Moisture, Temp.

Diode
Zener Header Bond 20-40 Drift Shock, Vibration
Corrosion 20-40 Intermittent Moisture, Temp.
Signal Lead/Die Contact 15-35 Open Shock, Vibration
Header Bond 15-35 Drift Shock, Vibration

Resistor
Film Corrosion 30-50 Drift Moisture,Temp.
Film Defects 15-25 Drift Moisture,Temp.
Wirewound Corrosion 35-50 Drift Moisture, Temp.
Lead Defects 10-20 Open Shock, Vibration

Capacitor
Ceramic Connection 10-30 Open Temp.,Vibration
Corrosion 25-45 Drift Moisture, Temp.
Tantalum Mechanical 20-40 Short Shock, Vibration
Oxide Defect 15-35 Drift Temp., Cycling

RF Coil Lead Stress 20-40 Open Shock, Vibration


Insulation 40-65 Drift Moisture, Temp.

Transformer Insulation 40-80 Short Moisture, Temp.

Relay Contact Resistance 30-40 Open Moisture, Temp.


Contact Corrosion 40-65 Drift Moisture

DESIGN - TOPIC D7

Topic D7: Surface Mount Technology (SMT) Design


SMT involves placing a component directly onto the surface of a printed circuit
board (PCB) and soldering its connections in place. SMT components can be
active (integrated circuits) or passive devices (resistors), and can have different
lead designs as presented below. In either case, the solder connection is both an
electrical and mechanical connection, thus replacing the mechanical connection
associated with plated through holes (PTH). Maximizing the integrity of SMT
designs centers around minimizing the thermal and mechanical fatigue of both the
component's solder connection and the board's PTHs.

Common Lead Designs

Leadless Chip Carriers (LCCs): The component is attached directly to the board with
solder alone.

Leaded Chip Carrier: A leaded component is attached to the board with solder.

CTE: Coefficient of Thermal Expansion is the change in length per unit length when
heated through one degree. It directly affects the thermal strain and thus the
stress in the solder joint.
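
As a rough illustration of why CTE matching matters, the first-order mismatch
strain seen by an LCC solder joint is the part-to-board CTE difference times the
temperature swing. The sketch below uses assumed, typical CTE values (not values
taken from this Toolkit) purely to show the relationship.

```python
# Illustrative sketch only: first-order CTE mismatch strain for a leadless part.
# The CTE values and temperature swing are assumed example numbers.

def mismatch_strain_ppm(cte_part_ppm_per_c, cte_board_ppm_per_c, delta_t_c):
    """Approximate strain (ppm) imposed on the solder joint by a temperature
    swing of delta_t_c acting on the part/board CTE difference."""
    return abs(cte_part_ppm_per_c - cte_board_ppm_per_c) * delta_t_c

# Example: ceramic chip carrier (~6.5 ppm/degC) on an epoxy-glass board
# (~15 ppm/degC) over an 80 degC temperature swing.
print(mismatch_strain_ppm(6.5, 15.0, 80.0))  # 680.0 ppm of mismatch strain
```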

Design Guidelines
• Use the largest allowable standard size for passive components to minimize
manufacturing flaws.

• Carefully consider the application for active devices when electing to use
leadless versus leaded components.

• Use special CTE matching to preclude stress cracking in LCC solder joints.

• Limit PCB size to 13" x 13" or less to avoid warp and twist problems.

• Provide an adequate clearance at the board's edge in order to provide space for
  the board mounting and wave solder conveyor fingers.

• Locate components on a grid to ease programming of automatic dispensing or
  placement equipment.

• Allow proper spacing between components to account for visual inspection, rework,
  and engineering changes to assembly.

DESIGN - TOPIC D8

Topic D8: Power Supply Design Checklist


For many years power supply reliability has fallen short of expectations especially
when used in adverse environments. Today the situation is even worse as power
supplies are being designed to exceed three watts per cubic inch - a challenge to
construction and packaging techniques and part technology. And, since high
density means more concentrated heat - the enemy of all components - power
supply reliability problems will prevail. Following are design considerations and
possible solutions to review:

Table D8-1: Design Checklist (New Designs)


Items to be Addressed Solutions/Recommendations

• Transient effects
  - In-rush current • Apply resistor-triac technique, thermistor technique
  - High-voltage spikes • Apply metal oxide varistor (MOV) transient voltage
    suppressor
  - Short circuits • Apply constant current and current foldback protection
  - Switching voltage transients • Apply snubber circuits

• Effects of AC ripple current • Consider use of MIL-C-39006/22 capacitors

• Corrosion due to leakage • Avoid wet slug tantalum capacitors and use plating
  and protective finishes

• Aluminum electrolytic capacitors • Epoxy end-seals minimize external
  contamination

• Temperature stability • Use low temperature coefficient capacitors (mica or
  ceramic)

• Packaging techniques • Enhance heat transfer, control electromagnetic
  interference, decrease parasitic capacitance

• Saturation • Use antisaturation diodes (Baker Clamps) in conjunction with a
  switching transistor

• Potentiometers • Replace with precision fixed resistor

• Short mounting leads • Derate the operating voltage below 50% to prevent hot
  spots


Items to be Addressed Solutions/Recommendations

• Static discharge damage • Use antistatic grounds for manufacturing and
  maintenance

• Field effect transistor (FET) versus bipolar device • FETs increase switching
  speeds but reduce drive capability

• Junction temperatures • Do not exceed 110°C

• Mechanical stresses • Use of vibration isolators/shock mountings, parts spaced
  to prevent contact during shock & vibration

• Solder joint process • 95% (goal) of solder joints should be made via automated
  process

• Cooling • Conductive cooling to a heat exchanger is preferred

Table D8-2: Design Checklist (Commercial Designs)

Items to be Addressed Solutions/Recommendations

• Part quality • Vendor selects military equivalent parts
               • Vendor selects prescreened parts
               • Vendor screens/tests in-house

• Unit quality • Vendor burns-in all units at higher temps

• Part derating • Vendor has in-house standards

• Electrical parameters • Vendor values exceed needs at temp extremes

• Failure analysis • Vendor has failure tracking program

• Protection circuits • Vendor has built-in voltage and current sensors

• Fault flags • Vendor has built-in failure indicators

• Reliability experience • Successful operation in similar environments

DESIGN - TOPIC D9

Topic D9: Part Failure Modes and Mechanisms


To properly apply electronic parts in complex and high density equipment designs,
the engineer needs to know what factors are significant. With knowledge about the
failure modes, mechanisms, and frequency of occurrence, design changes can be
instituted to eliminate or reduce the accelerating factors, thereby increasing the
equipment reliability. Table D9-1 presents these factors for a representative group
of electronic components. For further information on part construction and
operation, consult MIL-HDBK-978B, "NASA Parts Application Handbook," or
MIL-HDBK-338, "Electronic Reliability Design Handbook."

Table D9-1: Part Failure Modes and Mechanisms


Type Failure Mechanisms % Failure Modes Accelerating Factors

Microcircuits
Digital Oxide Defect 9 Short/Stuck High Electric Field, Temp.
Electromigration 6 Open/Stuck Low Power, Temp.
Overstress 18 Short then Open Power
Contamination 16 Short/Stuck High Vibration, Shock,
Moisture, Temp.
Mechanical 17 Stuck Low Shock, Vibration
Elec. Parameters 33 Degraded Temp., Power

Memory Oxide Defect 17 Short/Stuck High Electric Field, Temp.


Overstress 22 Short then Open or Power, Temp.
Stuck Low
Contamination 25 Short/Stuck High Vibration, Shock
Moisture, Temp.
Mechanical 9 Stuck Low Shock, Vibration
Elec. Parameters 26 Degraded Temp., Power

Linear Overstress 21 Short then Open or Power, Temp.


Stuck Low
Contamination 12 Short/Stuck High Vibration, Shock
Mechanical 2 Stuck Low Shock, Vibration
Elec. Parameters 48 Degraded Temp., Power
Unknown 16 Stuck High or Low

Hybrid Overstress 17 Short then Open Power, Temp


Contamination 8 Short Vibration, Shock
Mechanical 13 Open Shock, Vibration
Elec. Parameters 20 Degraded Temp., Power
Metallization 10 Open Temp., Power
Substrate Fracture 8 Open Vibration
Miscellaneous 23 Open


Type Failure Mechanisms % Failure Modes Accelerating Factors

Diodes
Signal Elec. Parameter 48 Degraded Temp., Power
Die Fracture 10 Open Vibration
Seal Leak 3 Open Moisture, Temp.
Overstress 17 Short then Open Power, Temp.
Unknown 21 Open

Zener Elec. Parameter 32 Degraded Temp., Power


Leakage Current 7 Degraded Power
Mechanical 1 Open Shock, Vibration
Overstress 33 Short then Open Voltage, Temp.
Unknown 26 Open

Transistors
Bipolar Overstress 54 Short then Open Power, Temp.
Elec. Parameters 25 Degraded Temp., Power
Leakage Current 10 Degraded Power
Miscellaneous 10 Open

Field Effect Overstress 51 Short then Open Power, Temp.


Elec. Parameters 17 Degraded Temp., Power
Contamination 15 Short Vibration, Shock
Miscellaneous 16 Open

Resistors
Composition Moisture Intrusion 45 Resistance (R) Change Moisture, Temp.
Non-uniform Material 15 R Change, Open Voltage/Current, Temp.
Contamination 14 R Change Voltage/Current, Temp.
Lead Defects 25 Open Moisture, Temp., Voltage/Current

Film Moisture Intrusion 31 R Change Moisture, Temp., Contamination
Substrate Defects 25 R Change Temp., Voltage/Current
Film Imperfections 25 R Change, Open Temp., Voltage/Current
Lead Termination 9 Open Shock, Vibration, Temp., Voltage/Current
Film Material Damage 9 R Change, Open Temp., Voltage/Current


Type Failure Mechanisms % Failure Modes Accelerating Factors

Resistor (cont'd)
Wirewound Wire Imperfection 32 Open Voltage/Current, Temp.
Wire Insulation Flaw 20 R Change, Short Voltage/Current, Temp.
Corrosion 31 R Change, Short Temp., Moisture
Lead Defects 10 Open Shock, Vibration, Voltage/Current
Intrawinding Insulation Breakdown 6 R Change, Short Temp., Voltage/Current

Capacitors
Ceramic Dielectric Breakdown 49 Short Voltage, Temp.
Connection Failure 18 Open Temp., Cycling
Surface Contamination 3 Capacitance Drift Temp., Voltage
Low Insulation Resistance 29 Short Temp., Voltage

Plastic/Paper Connection Failure 46 Open Temp., Cycling


Cracked Dielectric 11 Short Temp., Voltage
Capacitance Change 42 Degraded Temp., Voltage

Tantalum (Nonsolid) Loss of Electrolyte 17 Capacitance Drift Temp., Voltage
Leakage Current 46 Short Voltage, Temp.
Intermittent High Impedance 36 Open Temp., Cycling

Inductive Devices
Transformer Wire Overstress 25 Open Voltage, Current
Faulty Leads 5 Open Vibration, Shock
Corroded Windings 24 Short Moisture, Temp.
Insulation Breakdown 25 Short Voltage, Moisture, Temp.
Insulation Deterioration 20 Short Moisture, Temp.

RF Coil Wire Overstress 37 Open Voltage, Current
Faulty Leads 16 Open Vibration, Shock
Insulation Breakdown 14 Short Voltage, Moisture, Temp.
Insulation Deterioration 32 Short Moisture, Temp.


Type Failure Mechanisms % Failure Modes Accelerating Factors

Switch
General Contact Resistance 30 Open Temp., Moisture,
Current
Mechanical 23 Open Vibration, Shock
Overstress 18 Short Power, Temp.
Elec. Parameters 13 Degraded Temp., Power
Intermittent 15 Degraded Temp., Vibration

Relay
General Contact Resistance 53 Open Temp., Moisture
Contact Contamination 18 Open Moisture, Temp.
Overstress 11 Short Current
Intermittent 12 Degraded Temp., Vibration
Mechanical 5 Open Vibration

Connector
General Contact Resistance 9 Resistance Change Temp., Moisture
Intermittent 22 Open Vibration, Shock
Mechanical 24 Open Vibration, Shock
Overstress 9 Short Power
Miscellaneous 35 Open Contamination, Temp., Vibration, Wear

DESIGN - TOPIC D10

Topic D10: Fiber Optic Design Criteria


Fiber optics are relatively new when compared with most electronic devices. With
the increased use of fiber optics comes the need to address fiber optic reliability so
that preventive design measures can be instituted. This section will present
specific failure modes/mechanisms and their causes and prevention to aid
designers/planners in establishing a reliable system. Tables D10-1 thru D10-3
depict those failure modes/mechanisms associated with Transmitters, Receivers
and Fiber & Cable. Table D10-4 presents reliability figures of merit with an 80%
confidence bound, except for connectors.

Table D10-1: Common Failure Mechanisms (Transmitters)


Mode               Causes                            Prevention

Facet Damage       Pulse width & optical power       Apply anti-reflection coat to
                   density                           facets

Laser Wear-Out     Photo-Oxidation, contact          Coat facets, reduce temperature
                   degradation & crystal growth      & current density & use high
                   defects                           quality materials

Laser Instability  Reflection of laser output        Apply antireflection coat,
                   power                             defocus the graded index
                                                     coupling element

Shorted Outputs    Whisker formation                 Anticipate system lifetime &
                                                     temperature solder tolerances

Dark Line Defects  Non-Radiating centers             Material selection & quality
                                                     control

Table D10-2: Common Failure Mechanisms (Receivers)


Mode                          Causes                          Prevention

Open Circuit                  Fracture of lead-bond plated    Use evaporated contacts
                              contacts

Short or Open Circuit         Electro-Chemical oxidation,     Use hermetically sealed
                              humidity                        package

Positive Intrinsic Negative   Accumulation of mobile ions     InGaAs or In layer grown on
(PIN) Dark Current                                            active region & reduce the
                                                              temperature

Avalanche Photo Diode (APD)   Thermal deterioration of the    Select an APD at 1.3µm &
Dark Current                  metal contact                   reduce the temperature


Table D10-3: Common Failure Mechanisms (Fiber & Cable)


Mode                        Causes                          Prevention

Cable Open Circuit          Stress corrosion or fatigue     Residual or threshold tension
Fracture                    due to microcracks              less than 33% of the rated
                                                            proof tested tensile strength

Cable Intermittent          Hydrogen migrates into the      Design cables with materials
                            core of the fiber               that do not generate hydrogen

Cable Open Circuit          Temperature cycling,            Design a jacket that can
Breakage                    ultraviolet exposure, water     prevent shrinking, cracking,
                            & fluid immersion               swelling or splitting

Cable Opaque Circuit        Radiation                       Design to be nuclear radiation
Inoperative                                                 hardened

Table D10-4: Fiber Optic Component Failure Rates


Component Type Failure Rate (10^-6 Hrs.) MTBF (Hrs.)

Fiber 4.35 - 5.26 210,000

Cable 1.15 - 1.81 750,000

Splices .022 - .64 27,000,000

Connectors # of Matings
MIL-T-29504 1000
MIL-C-28876 500 N/A
MIL-C-38999 500
MIL-C-83522 500
MIL-C-83526 1000
FC-Style 1000
Light Emitting Diodes (LEDS)
AlGaAs/GaAs .13 - .88 4,000,000
InGaAsP/InP .78 - 1.92 850,000
AlGaAs/Si 2.08 - 8.33 320,000
Laser Diodes
AIGaAs/GaAs 1.27 - 9.1 410,000
- 1.3µm wavelength .79 - 9.1 620,000
InGaAsP/InP .13 - 2.4 3,700,000
Photodetectors
APD .12 - 1.54 4,000,000
PIN .57 - 3.58 1,000,000

Section A
Analysis
Contents

A1 Reliability and Maintainability Analyses.......... 77

A2 Reliability Prediction Methods ......................... 80

A3 Maintainability Prediction Methods.................. 81

A4 Testability Analysis Methods ........................... 84


A5 Reliability Analysis Checklist ........................... 85

A6 Use of Existing Reliability Data ........................ 86


A7 Maintainability/Testability Analysis Checklist. 87
A8 FMECA Analysis Checklist ............................... 88

A9 Redundancy Equations..................................... 89

A10 Parts Count Reliability Prediction................... 92


A11 Reliability Adjustment Factors ........................ 105

A12 SMT Assessment Model .................................. 108


A13 Finite Element Analysis ................................... 113

A14 Common Thermal Analysis Procedures......... 115


A15 Sneak Circuit Analysis..................................... 119

A16 Dormant Analysis............................................. 122


A17 Software Reliability Prediction and Growth ... 124

Insight
Reliability and maintainability analyses are a necessary part of most development
programs. They provide a means of determining how well the design is
progressing towards meeting the program's goals and requirements. They also
provide means of evaluating the impact of important design decisions such as
cooling approaches, classes of part quality being used, and areas of fault tolerance.
In order for the government to receive the outputs of contractor performed
analyses, appropriate contract deliverable data items must be required.

For More Information

MIL-STD-756 "Reliability Modeling and Prediction"


MIL-STD-1629 "Procedures for Performing a Failure Mode, Effects and
Criticality Analysis"
MIL-HDBK-217 "Reliability Prediction of Electronic Equipment"
MIL-HDBK-472 "Maintainability Prediction"
RADC-TR-87-55 "Predictors of Organizational-Level Testability Analysis"
RADC-TR-77-287 "A Redundancy Notebook"
RADC-TR-89-223 "Sneak Circuit Analysis for the Common Man"
RADC-TR-89-276 "Dormant Missile Test Effectiveness"
RADC-TR-89-281 "Reliability Assessment Using Finite Element Techniques"
RADC-TR-90-109 "Integration of Sneak Analysis with Design"
RL-TR-91-29 "A Rome Laboratory Guide to Basic Training in TQM
Analysis Techniques"
RL-TR-91-87 " A Survey of Reliability, Maintainability, Supportability and
Testability Software Tools"
RL-TR-91-155 "Computer Aided Assessment of Reliability Using Finite
Element Methods"
RL-TR-92-197 "Reliability Assessment of Critical Electronic Components"

ANALYSIS - TOPIC A1

Topic A1: Reliability and Maintainability Analyses


Table A1-2: Summary of Failure Effects Analysis Characteristics

ANALYSIS - TOPIC A2

Topic A2: Reliability Prediction Methods

ANALYSIS - TOPIC A3

Topic A3: Maintainability Prediction Methods

ANALYSIS - TOPIC A4

Topic A4: Testability Analysis Methods

ANALYSIS - TOPIC A5

Topic A5: Reliability Analysis Checklist


Major Concerns and Comments

Models

• Are all functional elements included in the reliability block diagrams/model?
  Comment: System design drawings/diagrams must be reviewed to be sure that the
  reliability model/diagram agrees with the hardware.

• Are all modes of operation considered in the math model?
  Comment: Duty cycles, alternate paths, degraded conditions and redundant units
  must be defined and modeled.

• Do the math model results show that the design achieves the reliability
  requirement?
  Comment: Unit failure rates and redundancy equations are used from the detailed
  part predictions in the system math model.

Allocation

• Are system reliability requirements allocated (subdivided) to useful levels?
  Comment: Useful levels are defined as: equipment for subcontractors, assemblies
  for subcontractors, circuit boards for designers.

• Does the allocation process consider complexity, design flexibility and safety
  margins?
  Comment: Conservative values are needed to prevent reallocation at every design
  change.

Prediction

• Does the sum of the parts equal the value of the module or unit?
  Comment: Many predictions conveniently neglect to include all the parts,
  producing optimistic results (check for solder connections, connectors, circuit
  boards).

• Are the environmental conditions and part quality representative of the
  requirements?
  Comment: Optimistic quality levels and favorable environmental conditions are
  often assumed, causing optimistic results.

• Are the circuit and part temperatures defined and do they represent the design?
  Comment: Temperature is the biggest driver of part failure rates; low temperature
  assumptions will cause optimistic results.

• Are equipment, assembly, subassembly and part reliability drivers identified?
  Comment: Identification is needed so that corrective actions for reliability
  improvement can be considered.

• Are part failure rates from acceptable sources (i.e., MIL-HDBK-217)?
  Comment: Use of generic failure rates requires submission of backup data to
  provide credence in the values.

• Is the level of detail for the part failure rate models sufficient to reconstruct
  the result?
  Comment: Each component type should be sampled and failure rates completely
  reconstructed for accuracy.

• Are critical components such as VHSIC, Monolithic Microwave Integrated Circuits
  (MMIC), Application Specific Integrated Circuits (ASIC) or Hybrids highlighted?
  Comment: Prediction methods for advanced parts should be carefully evaluated for
  impact on the module and system.

ANALYSIS - TOPIC A6

Topic A6: Use of Existing Reliability Data


System development programs often make use of existing equipment (or
assembly) designs, or designs adapted to a particular application. Sometimes, lack
of detailed design information prevents direct prediction of the reliability of these
items making use of available field and/or test failure data the only practical way to
estimate their reliability. If this situation exists, the following table summarizes the
information that is desired.

Table A6-1: Use of Existing Reliability Data

Information Required                          Equipment     Equipment    Piece Part
                                              Field Data    Test Data    Data

Data collection time period X X X


Number of operating hours per equipment X X
Total number of part hours X
Total number of observed maintenance X
actions
Number of "no defect found" maintenance X
actions
Number of induced maintenance actions X
Number of "hard failure" maintenance X
actions
Number of observed failures X X
Number of relevant failures X X
Number of nonrelevant failures X X
Failure definition X X
Number of equipment or parts to which X X X
data pertains
Similarity of equipment of interest to X X
equipment for which data is available
Environmental stress associated with data X X X
Type of testing X
Field data source X
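
A minimal sketch (not from the Toolkit) of how the tabulated field-data inputs
combine into a point estimate of failure rate and MTBF; relevancy screening of the
maintenance actions and any confidence-interval treatment are left to the analyst.

```python
# Point estimate of reliability from existing field data (Table A6-1 inputs).

def field_failure_rate(relevant_failures, operating_hours_per_unit, num_units):
    """Point-estimate failure rate in failures per 10^6 hours."""
    total_hours = operating_hours_per_unit * num_units
    return relevant_failures / total_hours * 1e6

# Illustrative numbers only: 12 relevant failures over 25 units x 4,000 hours each.
lam = field_failure_rate(relevant_failures=12,
                         operating_hours_per_unit=4_000,
                         num_units=25)
print(f"lambda = {lam:.1f} f/10^6 hr, MTBF = {1e6 / lam:,.0f} hr")
# lambda = 120.0 f/10^6 hr, MTBF = 8,333 hr
```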

ANALYSIS - TOPIC A7

Topic A7: Maintainability/Testability Analysis Checklist

Major Concerns and Comments

• Are the maintainability/testability prediction techniques and data used clearly
  described?

• Is there a clear description of the maintenance concept and all ground rule
  assumptions?
  Comment: Repair level, LRU/module definition, spares availability assumptions,
  test equipment availability assumptions, tools availability assumptions,
  personnel assumptions, environmental conditions.

• Are worksheets provided which show how LRU repair times were arrived at?
  Comment: The breakout of repair time should include: fault isolation,
  disassembly, interchange, reassembly and checkout.

• Are step-by-step repair descriptions provided to back up repair time estimates?

• Are fault isolation time estimates realistic?
  Comment: Overestimating BIT/FIT capability is the primary cause of optimistic
  repair time estimates.

• Are fault isolation ambiguity levels considered in the analysis?

• Can repair times be reconstructed from the worksheets and is addition,
  subtraction, multiplication and division correct?
  Comment: Checking is mundane but often results in errors and inconsistencies
  being found.

• Are preventive maintenance tasks described?
  Comment: This includes frequency, maintenance time and detailed task description.

• Is all the equipment included in the prediction?

• Has the best procedure been selected to provide estimates for the testability
  attributes?
  Comment: Because of the number of variables which affect testability and the
  number of different procedures available to perform the analyses, there must be
  rationale and logic provided to explain why the particular approach was taken.

• Are the numerical values of the testability attributes within specified
  tolerances?

• Does the test equipment, both hardware and software, meet all design
  requirements?
  Comment: All test points should be accessible.

• Are the simulation and emulation procedures to be used to simulate/emulate units
  of the system, for diagnostics development, reasonable and practical?

ANALYSIS - TOPIC A8

Topic A8: FMECA Analysis Checklist


Major Concerns and Comments

• Is a system definition/description provided compatible with the system
  specification?

• Are ground rules clearly stated?
  Comment: These include approach, failure definition, acceptable degradation
  limits, level of analysis, clear description of failure causes, etc.

• Are block diagrams provided showing functional dependencies at all equipment
  indenture levels?
  Comment: This diagram should graphically show what items (parts, circuit cards,
  subsystems, etc.) are required for the successful operation of the next higher
  assembly.

• Does the failure effect analysis start at the lowest hardware level and
  systematically work to higher indenture levels?
  Comment: The analysis should start at the lowest level specified in the SOW
  (e.g., part, circuit card, subsystem, etc.).

• Are failure mode data sources fully described?
  Comment: Specifically identify data sources per MIL-HDBK-338, Para 7.3.2, and
  include relevant data from similar systems.

• Are detailed FMECA worksheets provided? Do the worksheets clearly track from
  lower to higher hardware levels? Do the worksheets clearly correspond to the
  block diagrams? Do the worksheets provide an adequate scope of analysis?
  Comment: Worksheets should provide an item name, indenture code, item function,
  list of item failure modes, effect on next higher assembly and system for each
  failure mode, and a criticality ranking. In addition, worksheets should account
  for multiple failure indenture levels for Class I and Class II failures.

• Are failure severity classes provided? Are specific failure definitions
  established?
  Comment: Typical classes are:
  - Catastrophic (life/death)
  - Critical (mission loss)
  - Marginal (mission degradation)
  - Minor (maintenance/repair)

• Are results timely?
  Comment: Analysis must be performed "during" the design phase, not after the
  fact.

• Are results clearly summarized and are clear, comprehensive recommendations
  provided?
  Comment: Actions for risk reduction of single point failures, critical items,
  areas needing BIT/FIT, etc.

• Are the results being submitted (shared) to enhance other program decisions?
  Comment: BIT design, critical parts, reliability prediction, derating, fault
  tolerance.

ANALYSIS - TOPIC A9

Topic A9: Redundancy Equations


Many military electronic systems readiness and availability requirements exceed
the level of reliability to which a serial chain system can be practically designed.
Use of high quality parts, a sound thermal design and extensive stress derating
may not be enough. Fault tolerance, or the ability of a system design to tolerate a
failure or degradation without system failure, is required. The most common form
of fault tolerance is redundancy where additional, usually identical, units are added
to a system in parallel with the other units. Because this situation is very common,
the reliability equations for common redundancy situations are included below.

The following represents a sample list of specific redundancy relationships which
define failure rate as a function of the specific type of redundancy employed. For
a comprehensive treatment of redundancy concepts and the reliability improvements
achievable through their applications see RADC-TR-77-287, "A Redundancy Notebook."


Table A9-1: Redundancy Equation Approximations Summary

Redundancy Equations (With Repair and Without Repair)

All units are active on-line with equal unit failure rates; (n-q) out of n required
for success.

  With repair (Equation 1):
      λ(n-q)/n = [n! (λ)^(q+1)] / [(n-q-1)! (µ)^q]

  Without repair (Equation 4):
      λ(n-q)/n = λ / [ Σ (1/i) ],  summed from i = n-q to n

Two active on-line units with different failure and repair rates. One of two
required for success.

  With repair (Equation 2):
      λ1/2 = λA λB [(µA + µB) + (λA + λB)] / [µA µB + (µA + µB)(λA + λB)]

  Without repair (Equation 5):
      λ1/2 = (λA²λB + λAλB²) / (λA² + λB² + λAλB)

One standby off-line unit with n active on-line units required for success.
Off-line spare assumed to have a failure rate of zero. On-line units have equal
failure rates.

  With repair (Equation 3):
      λn/n+1 = n[nλ + (1-P)µ]λ / [µ + n(P+1)λ]

  Without repair (Equation 6):
      λn/n+1 = nλ / (P+1)

Key:
λx/y is the effective failure rate of the redundant configuration where x of y units are
required for success
n = number of active on-line units. n! is n factorial (e.g., 5!=5x4x3x2x1=120,
1!=1,0!=1)
λ = failure rate of an individual on-line unit (failures/hour)
q = number of on-line active units which are allowed to fail without system failure
µ = repair rate (µ=1/Mct, where Mct is the mean corrective maintenance time in
hours)
P = probability switching mechanism will operate properly when needed (P=1 with
perfect switching)
Notes:
1. Assumes all units are functional at the start
2. The approximations represent time to first failure
3. CAUTION: Redundancy equations for repairable systems should not be applied if
delayed maintenance is used.


Example 1: A system has five active units, each with a failure rate of 220 f/10^6
hours, and only three are required for successful operation. If one unit fails, it
takes an average of three hours to repair it to an active state. What is the
effective failure rate of this configuration?

Solution: Substituting the following values into Equation 1:

    n = 5
    q = 2
    µ = 1/3
    λ(5-2)/5 = λ3/5

    λ3/5 = [5! (220 x 10^-6)^3] / [(5-2-1)! (1/3)^2] = 5.75 x 10^-9 f/hour

    λ3/5 = .00575 f/10^6 hours

Example 2: A ground radar system has a 2 level weather channel with a failure rate
of 50 f/10^6 hours and a 6 level weather channel with a failure rate of 180 f/10^6
hours. Although the 6 level channel provides more comprehensive coverage, the
operation of either channel will result in acceptable system operation. What is the
effective failure rate of the two channels if one of two are required and the Mct
is 1 hour?

Solution: Substituting the following values into Equation 2:

    λA = 50 x 10^-6
    λB = 180 x 10^-6
    µA = µB = 1/Mct = 1

    λ1/2 = (50 x 10^-6)(180 x 10^-6)[(1 + 1) + (50 x 10^-6 + 180 x 10^-6)] /
           [(1)(1) + (1 + 1)(50 x 10^-6 + 180 x 10^-6)] = 1.8 x 10^-8 f/hour

    λ1/2 = .018 f/10^6 hours
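
The two worked examples can be checked mechanically. The following Python sketch
(not part of the original Toolkit) implements Equations 1 and 2 as written above
and reproduces the example results; units are failures per hour and repairs per
hour.

```python
from math import factorial

def eq1_active_with_repair(n, q, lam, mu):
    """Equation 1: n equal active on-line units, n-q required, with repair."""
    return factorial(n) * lam ** (q + 1) / (factorial(n - q - 1) * mu ** q)

def eq2_two_units_with_repair(lam_a, lam_b, mu_a, mu_b):
    """Equation 2: two unequal active units, one of two required, with repair."""
    num = lam_a * lam_b * ((mu_a + mu_b) + (lam_a + lam_b))
    den = mu_a * mu_b + (mu_a + mu_b) * (lam_a + lam_b)
    return num / den

# Example 1: five units at 220 f/10^6 hr, three required, Mct = 3 hr (mu = 1/3)
print(eq1_active_with_repair(n=5, q=2, lam=220e-6, mu=1 / 3))   # ~5.75e-9 f/hr

# Example 2: 50 and 180 f/10^6 hr channels, one of two required, Mct = 1 hr
print(eq2_two_units_with_repair(50e-6, 180e-6, 1.0, 1.0))        # ~1.8e-8 f/hr
```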

ANALYSIS - TOPIC A10

Topic A10: Parts Count Reliability Prediction


A standard technique for predicting reliability when detailed design data such as
part stress levels is not yet available is the parts count reliability prediction
technique. The technique has a "built-in" assumption of average stress levels
which allows prediction in the conceptual stage or source selection stage by
estimation of the part types and quantities. This section contains a summary of the
MIL-HDBK-217F, Notice 1 technique for eleven of the most common operational
environments:

GB Ground Benign
GF Ground Fixed
GM Ground Mobile
NS Naval Sheltered
NU Naval Unsheltered
AIC Airborne Inhabited Cargo
AIF Airborne Inhabited Fighter
AUC Airborne Uninhabited Cargo
AUF Airborne Uninhabited Fighter
ARW Helicopter (Both Internal and External Equipment)
SF Space Flight

Assuming a series reliability model, the equipment failure rate can be expressed
as:

λEQUIP = Σ (i = 1 to n) of (Ni)(λgi)(πQi)

where
    λEQUIP = total equipment failure rate (failures/10^6 hrs)
    λgi = generic failure rate for the ith generic part type (failures/10^6 hrs)
    πQi = quality factor for the ith generic part type
    Ni = quantity of the ith generic part type
    n = number of different generic part types
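
As a quick illustration of the bookkeeping, the sketch below (not from the Toolkit)
sums the parts count contributions for a small, made-up parts list. The generic
failure rates and quality factors shown are placeholders; actual values must be
taken from the MIL-HDBK-217F parts count tables for the selected environment.

```python
# Parts count reliability prediction: lambda_equip = sum(Ni * lambda_gi * piQi).
# The numbers below are illustrative placeholders, not handbook values.

parts_list = [
    # (quantity Ni, generic failure rate lambda_gi in f/10^6 hr, quality factor piQi)
    (24, 0.075, 1.0),    # e.g., digital microcircuits
    (150, 0.0040, 3.0),  # e.g., film resistors
    (80, 0.0036, 10.0),  # e.g., ceramic capacitors
]

lambda_equip = sum(n_i * lam_g * pi_q for n_i, lam_g, pi_q in parts_list)
print(f"Equipment failure rate: {lambda_equip:.2f} f/10^6 hr")
print(f"Predicted MTBF: {1e6 / lambda_equip:,.0f} hr")
```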


Microcircuit Quality Factors - πQ

Description and πQ

Class S Categories (πQ = .25):

1. Procured in full accordance with MIL-M-38510, Class S requirements.

2. Procured in full accordance with MIL-I-38535 and Appendix B thereto (Class V).

3. Hybrids: Procured to Class S requirements (Quality Level K) of MIL-H-38534.

Class B Categories (πQ = 1.0):

1. Procured in full accordance with MIL-M-38510, Class B requirements.

2. Procured in full accordance with MIL-I-38535 (Class Q).

3. Hybrids: Procured to Class B requirements (Quality Level H) of MIL-H-38534.

Class B-1 Category (πQ = 2.0):

Fully compliant with all requirements of paragraph 1.2.1 of MIL-STD-883 and
procured to a MIL drawing, DESC drawing or other government approved documentation.
(Does not include hybrids.) For hybrids use the custom screening section below.

Microcircuit Learning Factor - πL


Years in Production, Y      πL
≤ .1                        2.0
.5                          1.8
1.0                         1.5
1.5                         1.2
≥ 2.0                       1.0

πL = .01 exp(5.35 - .35Y)

Y = Years generic device type has been in production


Microcircuit Quality Factors (cont'd): πQ Calculation for Custom Screening Programs

Group MIL-STD-883 Screen/Test (Note 3) Point


Valuation
TM 1010 (Temperature Cycle, Cond B Minimum) and TM
2001 (Constant Acceleration, Cond B Minimum) and TM
1* 5004 (or 5008 for Hybrids) (Final Electricals @ Temp 50
Extremes) and TM 1014 (Seal Test, Cond A, B, or C) and
TM 2009 (External Visual)
TM 1010 (Temperature Cycle, Cond B Minimum) or TM 2001
(Constant Acceleration, Cond B Minimum)
2* TM 5004 (or 5008 for Hybrids) (Final Electricals @ Temp 37
Extremes) and TM 1014 (Seal Test, Cond A, B, or C) and
TM 2009 (External Visual)
Pre-Burn in Electricals
3 TM 1015 (Burn-in B-Level/S-Level) and TM 5004 (or 5008 for 30 (B Level)
Hybrids) (Post Burn-in Electricals @ Temp Extremes) 36 (S Level)

4* TM 2020 Pind (Particle Impact Noise Detection) 11

5 TM 5004 (or 5008 for Hybrids) (Final Electricals @ 11 (Note 1)


Temperature Extremes)

6 TM 2010/17 (Internal Visual) 7

7* TM 1014 (Seal Test, Cond A, B, or C) 7 (Note 2)

8 TM 2012 (Radiography) 7

9 TM 2009 (External Visual) 7 (Note 2)

10 TM 5007/5013 (GaAs) (Wafer Acceptance) 1

11 TM 2023 (Non-Destructive Bond Pull) 1

πQ = 2 + 87 / (Σ Point Valuations)

* Not appropriate for plastic parts

NOTES:
1. Point valuation only assigned if used independent of Groups 1, 2 or 3.
2. Point valuation only assigned if used independent of Groups 1 or 2.
3. Sequencing of tests within groups 1, 2 and 3 must be followed.
4. TM refers to the MIL-STD-883 Test Method.
5. Nonhermetic parts should be used only in controlled environments (i.e., GB and other
temperature/humidity controlled environments).

EXAMPLES:
1. Mfg. performs Group 1 test and Class B burn-in: πQ = 2 + 87/(50+30) = 3.1
2. Mfg. performs internal visual test, seal test and final electrical test:
   πQ = 2 + 87/(7+7+11) = 5.5

Other Commercial or Unknown Screening Levels πQ = 10
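
A small sketch (not from the Toolkit) of the same arithmetic, reproducing the two
examples above from the group point valuations:

```python
# Custom screening quality factor: pi_Q = 2 + 87 / (sum of point valuations
# for the MIL-STD-883 screen groups actually performed).

def pi_q_custom(point_valuations):
    return 2 + 87 / sum(point_valuations)

# Example 1: Group 1 (50 points) plus Class B burn-in (30 points)
print(round(pi_q_custom([50, 30]), 1))    # 3.1

# Example 2: internal visual (7), seal test (7), final electricals (11)
print(round(pi_q_custom([7, 7, 11]), 1))  # 5.5
```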


Table A10-7: πQ Factor for Use with Inductive,


Electromechanical and Miscellaneous Parts

Part Type                          Established Reliability    MIL-SPEC    Non-MIL

Inductive Devices .25 1.0 10

Rotating Devices N/A N/A N/A

Relays, Mechanical .60 3.0 9.0

Relays, Solid State and Time N/A 1.0 4


Delay (Hybrid & Solid State)

Switches, Toggle, Pushbutton, N/A 1.0 20


Sensitive

Switches, Rotary Wafer N/A 1.0 50

Switches, Thumbwheel N/A 1.0 10

Circuit Breakers, Thermal N/A 1.0 8.4

Connectors N/A 1.0 2.0

Interconnection Assemblies N/A 1.0 2.0

Connections N/A N/A N/A

Meters, Panel N/A 1.0 3.4

Quartz Crystals N/A 1.0 2.1

Lamps, Incandescent N/A N/A N/A

Electronic Filters N/A 1.0 2.9

Fuses N/A N/A N/A

ANALYSIS - TOPIC A11

Topic A11: Reliability Adjustment Factors


"What if" questions are often asked regarding reliability figures of merit. For a rapid
translation, tables of multipliers for different part quality levels, environments, and
temperatures are presented to estimate the effect of such changes. The database for
these tables is a group of approximately 18,000 parts drawn from a number of equipment
reliability predictions performed in-house on military contracts. The ratios were
developed from this database using MIL-HDBK-217F algorithms.

Table A11-1: Part Quality Factors (Multiply MTBF by)

                                          To Quality Class
                            Space    Full Military    Ruggedized    Commercial

From      Space               X           0.8             0.5           0.2
Quality   Full Military      1.3            X             0.6           0.2
Class     Ruggedized         2.1           1.6              X           0.4
          Commercial         5.3           4.1             2.5            X

          IC                Class S      Class B        Class B-1     Class D
          Semiconductor     JANTXV       JANTX          JAN           NONMIL
          Passive Part      ER(S)        ER(R)          ER(M)         NONMIL

CAUTION: Do not apply to Mean-Time-Between-Critical-Failure (MTBCF).
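
For quick "what if" estimates, these multipliers can be applied directly to a predicted MTBF. The following Python sketch is a minimal illustration, assuming the matrix below is a transcription of Table A11-1 (with the diagonal taken as 1.0); the function name is illustrative, and, per the caution above, it should not be applied to MTBCF.

    # Sketch only: MTBF multipliers transcribed from Table A11-1.
    CLASSES = ["Space", "Full Military", "Ruggedized", "Commercial"]
    MTBF_FACTOR = [            # rows = "from" class, columns = "to" class
        [1.0, 0.8, 0.5, 0.2],  # from Space
        [1.3, 1.0, 0.6, 0.2],  # from Full Military
        [2.1, 1.6, 1.0, 0.4],  # from Ruggedized
        [5.3, 4.1, 2.5, 1.0],  # from Commercial
    ]

    def convert_mtbf(mtbf_hours, from_class, to_class):
        """Translate an MTBF estimate between part quality classes (not MTBCF)."""
        factor = MTBF_FACTOR[CLASSES.index(from_class)][CLASSES.index(to_class)]
        return mtbf_hours * factor

    # Example: a 500-hour ruggedized-quality MTBF re-estimated at space quality
    print(convert_mtbf(500, "Ruggedized", "Space"))  # 1050.0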


Table A11-2: Environmental Conversion Factors (Multiply MTBF by)


Table A11-3: Temperature Conversion Factors (Multiply MTBF by)


Topic A12: Surface Mount Technology (SMT) Assessment Model

The SMT Model was developed to assess the life integrity of leadless and leaded
devices. It provides a relative measure of circuit card wearout due to thermal
cycling fatigue failure of the "weakest link" SMT device. An analysis should be
performed on all circuit board SMT components. The component with the largest
failure rate value (weakest link) is assessed as the overall board failure rate due to
SMT. The model assumes the board is completely renewed upon failure of the
weakest link and the results do not consider solder or lead manufacturing defects.
This model is based on the techniques developed in the Rome Laboratory technical
report RL-TR-92-197, "Reliability Assessment of Critical Electronic Components."

λSMT  =  Average failure rate over the expected equipment life cycle due to
         surface mount device wearout. This failure rate contribution to the
         system is computed for the surface mount device on each board
         exhibiting the highest absolute value of the strain range:

         [ (αS ∆T - αCC (∆T + TRISE)) x 10^-6 ]

         λSMT = ECF / αSMT

ECF   =  Effective cumulative number of failures over the Weibull
         characteristic life.

Table A12-1: Effective Cumulative Failures - ECF

         LC/αSMT          ECF

         0 - .1           .13
         .11 - .20        .15
         .21 - .30        .23
         .31 - .40        .31
         .41 - .50        .41
         .51 - .60        .51
         .61 - .70        .61
         .71 - .80        .68
         .81 - .90        .76
         > .9             1.0

LC    =  Design life cycle of the equipment in which the circuit board is
         operating.


αSMT  =  The Weibull characteristic life. αSMT is a function of device and
         substrate material, the manufacturing methods, and the application
         environment used.

         αSMT = Nf / CR

         where:

         CR    = Temperature cycling rate in cycles per calendar hour

         Nf    = Average number of thermal cycles to failure

                 Nf = 3.5 [ (d / (.65 h)) (αS ∆T - αCC (∆T + TRISE)) x 10^-6 ]^-2.26 (πLC)

         where:

         d     = Distance from center of device to the furthest solder joint in
                 mils (thousandths of an inch)

         h     = Solder joint height in mils for leadless devices. Use h = 8
                 for all leaded configurations.

         αS    = Circuit board substrate thermal coefficient of expansion (TCE)

         ∆T    = Use environment temperature difference

         αCC   = Package material thermal coefficient of expansion (TCE)

         TRISE = Temperature rise due to power dissipation:
                 TRISE = θJC x P, where θJC is the thermal resistance in °C/Watt
                 and P is the power dissipation in Watts

         πLC   = Lead configuration factor

Table A12-2: CR - Cycling Rate Values

Equipment Type                              Number of Cycles/Hour

Consumer (television, radio, recorder)              .0042
Computer                                             .17
Telecommunications                                   .0042
Commercial Aircraft                                  .34
Industrial                                           .021
Military Ground Applications                         .03
Military Aircraft                                    .12


Table A12-3: πLC - Lead Configuration Factor

Lead Configuration πLC

Leadless 1
J or S Lead 150
Gull Wing 5,000

Table A12-4: αCC - TCE Package Values

Package Material           αCC Average Value

Plastic                            7
Ceramic                            6

Table A12-5: ∆T - Use Environment Temperature Difference

Environment ∆T

GB 7
GF 21
GM 26
AIC 31
AUC 57
AIF 31
AUF 57
ARW 31
NU 61
NS 26


Table A12-6: αS - TCE Substrate Values

Substrate Material αS

FR-4 Laminate 18
FR-4 Multilayer Board 20
FR-4 Multilayer Board w/Copper Clad Invar 11
Ceramic Multilayer Board 7
Copper Clad Invar 5
Copper Clad Molybdenum 5
Carbon-Fiber/Epoxy Composite 1
Kevlar Fiber 3
Quartz Fiber 1
Glass Fiber 5
Epoxy/Glass Laminate 15
Polyimide/Glass Laminate 13
Polyimide/Kevlar Laminate 6
Polyimide/Quartz Laminate 8
Epoxy/Kevlar Laminate 7
Aluminum (Ceramic) 7
Epoxy Aramid Fiber 7
Polyimide Aramid Fiber 6
Epoxy-Quartz 9
Fiberglass Teflon Laminates 20
Porcelainized Copper Clad Invar 7
Fiberglass Ceramic Fiber 7

Example: A large plastic encapsulated leadless chip carrier is mounted on an
epoxy-glass printed wiring assembly. The design considerations are: a square
package 1480 mils on a side, solder height of 5 mils, power dissipation of .5
watts, thermal resistance of 20°C/watt, a design life of 20 years, and a military
ground application environment. The failure rate developed is the impact of SMT
for a single circuit board and accounts for all SMT devices on this board. This
failure rate is added to the sum of all of the component failure rates on the
circuit board.

λSMT = ECF / αSMT


αSMT = Nf / CR

Nf = 3.5 [ (d / ((.65)(h))) (αS ∆T - αCC (∆T + TRISE)) x 10^-6 ]^-2.26 (πLC)

For d:     d = (1/2)(1480) = 740 mils

For h:     h = 5 mils

For αS:    αS = 15 (Table A12-6 - Epoxy/Glass Laminate)

For ∆T:    ∆T = 21 (Table A12-5 - GF)

For αCC:   αCC = 7 (Table A12-4 - Plastic)

For TRISE: TRISE = θJC P = 20(.5) = 10°C

For πLC:   πLC = 1 (Table A12-3 - Leadless)

For CR:    CR = .03 cycles/hour (Table A12-2 - Military Ground)

Nf = 3.5 [ (740 / ((.65)(5))) (15(21) - 7(21 + 10)) x 10^-6 ]^-2.26 (1)

Nf = 18,893 thermal cycles to failure

αSMT = 18,893 cycles / .03 cycles/hour = 629,767 hours

LC/αSMT = (20 yrs.)(8760 hr/yr) / 629,767 hrs. = .28

ECF = .23 failures (Table A12-1)

λSMT = ECF/αSMT = .23 failures / 629,767 hours = .0000004 failures/hour

λSMT = .4 failures/10^6 hours
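
Because the model is a short chain of closed-form equations, it lends itself to a quick script. The following Python sketch is a minimal illustration of the Topic A12 equations (function and variable names are assumptions, not part of RL-TR-92-197); it reproduces the example above to within rounding.

    # Sketch only: SMT wearout model of Topic A12 (names are illustrative).
    ECF_TABLE = [(0.10, 0.13), (0.20, 0.15), (0.30, 0.23), (0.40, 0.31), (0.50, 0.41),
                 (0.60, 0.51), (0.70, 0.61), (0.80, 0.68), (0.90, 0.76), (float("inf"), 1.00)]

    def effective_cumulative_failures(lc_over_alpha):
        """Look up ECF from Table A12-1 for a given LC/alpha_SMT ratio."""
        for upper, ecf in ECF_TABLE:
            if lc_over_alpha <= upper:
                return ecf

    def lambda_smt(d, h, alpha_s, alpha_cc, delta_t, t_rise, pi_lc, cr, life_years):
        """Average SMT wearout failure rate in failures/hour per the Topic A12 model."""
        strain = (alpha_s * delta_t - alpha_cc * (delta_t + t_rise)) * 1e-6
        nf = 3.5 * ((d / (0.65 * h)) * strain) ** -2.26 * pi_lc  # cycles to failure
        alpha = nf / cr                                          # characteristic life, hours
        ecf = effective_cumulative_failures(life_years * 8760.0 / alpha)
        return ecf / alpha

    # Example from the text: 1480-mil leadless plastic package on epoxy/glass, GF environment
    rate = lambda_smt(d=740, h=5, alpha_s=15, alpha_cc=7, delta_t=21,
                      t_rise=10, pi_lc=1, cr=0.03, life_years=20)
    print(rate * 1e6)  # ~0.37 failures per million hours (the text rounds to 0.4)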


Topic A13: Finite Element Analysis

Background
Finite Element Analysis (FEA) is a computer simulation technique that can predict
the material response or behavior of a modeled device. These analyses can
provide material stresses and temperatures throughout modeled devices by
simulating thermal or dynamic loading situations. FEA can be used to assess
mechanical failure mechanisms such as fatigue, rupture, creep, and buckling.

When to Apply
FEA of electronic devices can be time consuming, so analysis candidates must be
carefully selected. Selection criteria include devices, components, or design
concepts which: (a) are unproven and for which little or no prior experience or test
information is available; (b) utilize advanced or unique packaging or design
concepts; (c) will encounter severe environmental loads; or (d) have critical thermal
or mechanical performance and behavior constraints.

Typical Application
A typical finite element reliability analysis of an electronic device would be an
assessment of the life (i.e. number of thermal or vibration cycles to failure or hours
of operation in a given environment) or perhaps the probability of a fatigue failure
after a required time of operation of a critical region or location within the device.
Examples are surface mount attachments of a chip carrier to a circuit board, a
critical location in a multichip module, or a source via in a transistor microcircuit.
First, the entire device (or a symmetrical part of the entire device) is modeled with a
coarse mesh of relatively large sized elements such as 3-dimensional brick elements.
For example, as shown in Figure A13-1, an entire circuit board is analyzed
(Step 1). The loading, material property, heat sink temperature, and structural
support data are entered into the data file in the proper format and sequence as
required by the FEA solver. Output deflections and material stresses for all node
point locations on the model are then acquired. For microelectronic devices,
second or third follow-on models of refined regions of interest may be required
because of the geometrically small feature sizes involved. The boundary nodes for
the follow-on model are given initial temperatures and displacements that were
acquired from the circuit board model. The figure shows a refined region
containing a single chip carrier and its leads (Step 2). The more refined models
provide accurate temperature, deflection, and stress information for reliability
analyses. For example, the results of Step 2 could be a maximum stress value in a
corner lead of a chip carrier caused by temperature or vibration cycling. A
deterministic life analysis is made by locating the stress value on a graph of stress
versus cycles to failure for the appropriate material and reading cycles to failures
on the abscissa (Step 3). Cycles to failure and time to failure are related by the
temperature cycling rate or the natural frequency for thermal or dynamic
environments, respectively. A distribution of stress coupled with a distribution of
strength (i.e. scatter in fatigue data) will result in a probability distribution function
and a cumulative distribution function of time to failure (Step 4).


● Step 1 - FEM Results
● Step 2 - Interpretation of Local Displacements/Stresses
           (Vibration and Thermal Displacements of Component Relative to Board)
● Step 3 - Life Analysis
● Step 4 - Probabilistic Reliability Analysis

Figure A13-1


Topic A14: Common Thermal Analysis Procedures


The following graphs and associated examples provide a guide for performing
basic integrated circuit junction temperature calculations for three of the most
common types of cooling designs: impingement, cold wall, and flow through
modules. This procedure is intended to provide the Reliability Engineer with a
simple means of calculating approximate junction temperatures and for performing
a quick check of more detailed thermal analysis calculations.

Card-Mounted, Flow-through Modules

Notes:
1. Module dissipation uniformly distributed and applied on both sides.
2. The part junction temperature is obtained as follows:

        TJ = TA + ∆TBA + (θJC + θCB) QP

   where
        TJ   is the junction temperature
        TA   is the cooling air inlet temperature
        ∆TBA is the weighted average heat-exchanger-to-cooling-air inlet temperature
             difference (See Note 4)
        θJC  is the junction-to-case thermal resistance in °C/W
        θCB  is the thermal resistance between the case and the heat exchanger in °C/W
        QP   is the part power dissipation in watts
3. All temperatures are in °C.
4. Weighted average temperature difference is the value at a location two thirds of the
   distance from the inlet to the outlet, as shown in the sketch. Experience has shown
   that the temperature at this location approximates the average board temperature.

Figure A14-1: Estimated Temperature of Card-mounted Parts Using Forced-air-cooled
Flow-through Modules


Card-Mounted, Air-Cooled Coldwalls

Notes:
1. ∆TCE from curve is for L/W = 2; for other L/W ratios, multiply ∆TCE from curve by 0.5 L/W.
2. The junction temperature is obtained as follows:

        TJ = TA + (0.03 QT)/ma + ∆TCE + QT (0.0761/W + 0.25) + QP (θJC + θCB)

   where
        TJ   is the junction temperature
        TA   is the air inlet temperature
        QT   is the total card power dissipation in watts
        QP   is the part power dissipation in watts
        ma   is the airflow rate in kg/min
        ∆TCE is the temperature difference between the center of the card and the card edge
        W    is the card width in meters
        θJC  is the junction-to-case thermal resistance in °C/W
        θCB  is the case-to-mounting-surface thermal resistance in °C/W
3. All temperatures are in °C.
4. The card edge to card guide interface thermal resistance is 0.0761 °C/W per meter of
   card width.
5. The coldwall convective thermal resistance is 0.25°C/W.

Figure A14-2: Estimated Temperature of Card-mounted Parts Using Forced-air Cooled Coldwalls


Air Impingement, Card-Mounted

Notes:
1. The part junction temperature is obtained as follows:
TJ = TA + ∆TBA + (θJC + θCB) QP
where
TJ is the junction temperature
TA is the local cooling air temperature
∆TBA is the local card-to-air temperature difference
θJC is the junction-to-case thermal resistance in °C/W
θCB is the case-to-mounting-surface thermal resistance in °C/W
QP is the part power dissipation in watts

2. All temperatures are in °C

3. Assumes all the heat is uniformly distributed over both sides of the board

4. Assumes no air temperature rise (add any rise in air temperature to the result)

Figure A14-3: Estimated Temperature of Card-mounted Parts Using Forced-air Impingement
Cooling at Sea Level


Example 1: Card Mounted, Air Cooled Coldwalls


Estimate the junction temperature of a 0.25-W microcircuit mounted at the center of
a coldwall-cooled circuit board, 0.152 x 0.102 m, with a total power dissipation of
20 W. The part, which has a mounting base of 0.00635 x 0.00953 m, is attached
to the board with a 7.6 x 10^-5 m (3 mil) thick bonding compound whose thermal
conductivity (k) is 0.25 W/m-°C. The forced airflow rate is 1.8 kg/min with an inlet
temperature of 45°C. The board contains a 5.08 x 10^-4 m (0.020 inch) thick copper
thermal plane. The θJC of the part is 50°C/W.

1.  From Figure A14-2, ∆TCE = 57°C for L/W = 2

    Actual L/W = 0.152 m / 0.102 m = 1.49, so

    Corrected ∆TCE = (0.5)(1.49)(57°C) = 42.5°C

2.  θCB = (7.6 x 10^-5 m) / [(0.25 W/m-°C)(0.00635 m)(0.00953 m)] = 5.03°C/W

3.  From Note 2 in Figure A14-2:

    TJ = TA + (0.03 QT)/ma + ∆TCE + QT (0.0761/W + 0.25) + QP (θJC + θCB)

       = 45 + (0.03)(20)/1.8 + 42.5 + 20 (0.0761/0.102 + 0.25) + 0.25 (50 + 5.03)

    TJ = 122°C
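
The Note 2 relationship from Figure A14-2 can be captured in a few lines of code for repeated what-if checks. The Python sketch below is an illustration only (the function and argument names are assumptions); it reproduces Example 1 when fed the values above.

    # Sketch only: coldwall junction-temperature estimate per Figure A14-2, Note 2.
    def coldwall_junction_temp(t_air_in, q_total, q_part, airflow_kg_min,
                               delta_t_ce, card_width_m, theta_jc, theta_cb):
        """TJ = TA + 0.03*QT/ma + dTCE + QT*(0.0761/W + 0.25) + QP*(thetaJC + thetaCB)"""
        return (t_air_in
                + 0.03 * q_total / airflow_kg_min
                + delta_t_ce
                + q_total * (0.0761 / card_width_m + 0.25)
                + q_part * (theta_jc + theta_cb))

    # Example 1 values (delta_t_ce already corrected for the actual L/W of 1.49)
    tj = coldwall_junction_temp(t_air_in=45, q_total=20, q_part=0.25, airflow_kg_min=1.8,
                                delta_t_ce=42.5, card_width_m=0.102, theta_jc=50, theta_cb=5.03)
    print(tj)  # ~121.5, which the example rounds to 122°C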

Example 2: Air Impingement, Card Mounted Cooling


Estimate the junction temperature of a part dissipating 0.25 W and mounted on a
circuit board cooled by impingement with ambient air at 40°C and a velocity of 15
m/s. The circuit board, whose dimensions are 0.102 x 0.152 m, has a total power
dissipation of 20 W. The part, whose mounting base is 0.00635 x 0.00953 m, is
attached to the board with a 7.61 x 10^-5 m (3 mil) thick bonding compound whose
thermal conductivity (k) is 0.25 W/m-°C. The junction-to-case thermal resistance
(θJC) of the part is 50°C/W.

1.  Compute the card heat flux density (see Note 3 in Figure A14-3):

    20 W / [2 (0.102 m)(0.152 m)] = 645 W/m^2

2.  From Figure A14-3: ∆TBA = 17°C

3.  θCB = (7.61 x 10^-5 m) / [(0.25 W/m-°C)(0.00635 m)(0.00953 m)] = 5.03°C/W

4.  From Note 1 in Figure A14-3:

    TJ = TA + ∆TBA + (θJC + θCB) QP = 40 + 17 + (50 + 5.03) 0.25

    TJ = 71°C
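
Similarly, the impingement relationship from Figure A14-3, Note 1, and the bond-resistance calculation used in both examples can be scripted. This Python sketch is illustrative only (names are assumptions); it reproduces Example 2.

    # Sketch only: impingement-cooled junction temperature per Figure A14-3, Note 1.
    def impingement_junction_temp(t_air, delta_t_ba, q_part, theta_jc, theta_cb):
        """TJ = TA + dTBA + (thetaJC + thetaCB) * QP"""
        return t_air + delta_t_ba + (theta_jc + theta_cb) * q_part

    def bond_resistance(thickness_m, k_w_per_m_c, length_m, width_m):
        """Case-to-board bond resistance: thickness / (k x footprint area), in °C/W."""
        return thickness_m / (k_w_per_m_c * length_m * width_m)

    theta_cb = bond_resistance(7.61e-5, 0.25, 0.00635, 0.00953)    # ~5.03 °C/W
    print(impingement_junction_temp(40, 17, 0.25, 50, theta_cb))   # ~71°C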


Topic A15: Sneak Circuit Analysis


Electronics that operate within their specifications are still vulnerable to critical
failures. Hidden within the complexity of electronic designs are conditions that slip
past standard stress tests. These conditions are known as sneak circuits.

Definitions

● Sneak Circuit: A condition which causes the occurrence of an unwanted function or
  inhibits a desired function even though all components function properly.

● Sneak Paths: Unintended electrical paths within a circuit and its external interfaces.

● Sneak Timing: Unexpected interruption or enabling of a signal due to switch circuit
  timing problems.

● Sneak Indications: Undesired activation or de-activation of an indicator.

● Sneak Labels: Incorrect or ambiguous labeling of a switch.

● Sneak Clue: Design rule applied to a circuit pattern to identify design inconsistencies.

Cause of Sneaks

● Complex designs with many interfaces


● Flaws unknowingly designed into equipment
● Switching and timing requirements
● Incomplete analyses and test

Why Do Sneak Analysis?

● Method for detecting hidden failures


● Verification of interface switching and timing requirements
● Improves system/unit reliability

Where are Sneak Circuits?

● Electrical power systems


● Switching circuits
● Distribution and control systems
● Software control functions
● Interface configurations


Table A15-1: Typical Clue Statements

Clue                                 Sneak             Impact

Fanout Exceeded                      Design Concern    Unpredictable Outputs
Unterminated CMOS Input              Design Concern    Device Damage
Large Time Constant                  Sneak Timing      Unpredictable Switching Times
Uncommitted Open Collector Output    Design Concern    False Unstable Logic

Performing Sneak Analysis


● Time to complete analysis: An average Sneak Circuit Analysis (SCA) is a
lengthy process that requires several months to complete. Redrawing the
electronics of a system into hundreds of topographical patterns and checking
each one against a multitude of sneak clues is a time consuming task.

● Cost of analysis: SCA specialists will be required due to the need for
proprietary sneak clues. Their cost of analysis is based on part count and
design complexity. Outside specialists, not familiar with the design, will
require extra time and money to complete a detailed analysis of the functions
and operation of a design. This learning curve cost is in addition to the cost
of analysis.

● Availability of results: A manual SCA requires preproduction-level drawings to
  prevent late design changes from inserting new sneaks into the system after the
  analysis is performed. Extra time must be available to review the results and take
  corrective action; otherwise, fixes will require hardware rework, recall, or redesign
  rather than drawing changes.

For More Information


To perform a manual analysis, many independent contractors are available for
contracts. If in-house work is contemplated, RADC-TR-89-223, "Sneak Circuit
Analysis for the Common Man," is recommended as a guide. Automated tools are
available including the Rome Laboratory prototype called SCAT (Sneak Circuit
Analysis Tool). A new Rome Laboratory tool, Sneak Circuit Analysis Rome
Laboratory Engineering Tool (SCARLET), is in development for future use.


Example: Subsystem Sneak Circuit Reverse Current Operation


Figure A15-1a shows the original circuit which was designed to prevent routine
opening of the cargo door unless the aircraft was on the ground with the gear down
and locked. The secondary switch permits emergency operation of the door when
the gear is not down. Figure A15-1b shows the network tree diagram which
indicates the existence of a sneak path. If the emergency and normal door open
switches are both closed, the gear will be inadvertently lowered. The solution to
the problem is the addition of a diode to prevent reverse current flow as shown in
Figure A15-1c.

Figure A15-1: Sneak Circuit Example


Topic A16: Dormant Analysis


In the past, analysis techniques for determining reliability estimates for dormant or
storage conditions relied on rules of thumb such as "the failure rate will be reduced
by a ten to one factor" or "the failure rate expected is zero." A more realistic
estimate, based on part count failure results, can be calculated by applying the
conversion factors shown in Table A16-1. The factors convert active failure rates
by part type to passive or dormant conditions for seven scenarios. For example, to
convert the reliability of an active airborne receiver to a captive carry dormant
condition, determine the number of components by type, then multiply each by the
respective active failure rate obtained from handbook data, field data, or vendor
estimates. The total active failure rate for each type is converted using the
conversion factors of Table A16-1. The dormant estimate of reliability for the
receiver is determined by summing the part results.

Example: Aircraft Receiver Airborne Active Failure Rate to Captive Carry
Passive Failure Rate

Device          Qty.      λA       λT      Conversion Factor      λP

IC               25      0.06     1.50           .06             .090
Diode            50      0.001    0.05           .05             .003
Transistor       25      0.002    0.05           .06             .003
Resistor        100      0.002    0.20           .06             .012
Capacitor       100      0.008    0.80           .10             .080
Switch           25      0.02     0.50           .20             .100
Relay            10      0.40     4.00           .20             .800
Transformer       2      0.05     0.10           .20             .020
Connector         3      1.00     3.00           .005            .015
PCB               1      0.70     0.70           .02             .014
TOTALS          ---      ---     10.9            ---            1.137

λA = Part (Active) Failure Rate (Failures per Million Hours)


λT = Total Part (Active) Failure Rate (Failures per Million Hours)
λP = Part (Passive) (Dormant) Failure Rate (Failures per Million Hours)

Mean-Time-Between-Failure (Active) = 92,000 hours

Mean-Time-Between-Failure (Passive) = 880,000 hours
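
The table arithmetic above is easy to automate when many equipment types or dormancy scenarios must be evaluated. The Python sketch below is illustrative only: the conversion factors are the ones used in this example (airborne active to captive carry passive), the remaining scenarios would come from Table A16-1, and the names are assumptions.

    # Sketch only: active-to-dormant failure rate conversion per Topic A16.
    EXAMPLE_FACTORS = {"IC": 0.06, "Diode": 0.05, "Transistor": 0.06, "Resistor": 0.06,
                       "Capacitor": 0.10, "Switch": 0.20, "Relay": 0.20,
                       "Transformer": 0.20, "Connector": 0.005, "PCB": 0.02}

    def dormant_failure_rate(parts):
        """parts: list of (device type, quantity, active failure rate per 10^6 hr)."""
        total = 0.0
        for device, qty, lam_active in parts:
            total += qty * lam_active * EXAMPLE_FACTORS[device]
        return total   # failures per 10^6 hours

    receiver = [("IC", 25, 0.06), ("Diode", 50, 0.001), ("Transistor", 25, 0.002),
                ("Resistor", 100, 0.002), ("Capacitor", 100, 0.008), ("Switch", 25, 0.02),
                ("Relay", 10, 0.40), ("Transformer", 2, 0.05), ("Connector", 3, 1.00),
                ("PCB", 1, 0.70)]
    lam_p = dormant_failure_rate(receiver)
    print(lam_p, 1e6 / lam_p)  # ~1.14 failures/10^6 hr and an MTBF of ~880,000 hours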


Table A16-1: Dormant Conversion Factors (Multiply Active Failure Rate by)


Topic A17: Software Reliability Prediction and Growth


Software failures arise from a population of software faults. A software fault (often
called a "bug") is missing, extra, or defective code that has caused or can
potentially cause a failure. Every time a fault is traversed during execution, a
failure does not necessarily ensue; it depends on the machine state (values of
intermediate variables). The failure rate of a piece of software is a function of the
number and location of faults in the code, how fast the program is being executed,
and the operational profile. While most repair activity is imperfect, the hoped-for
and generally observed result is that the times between failures tend to grow longer
and longer as the process of testing and fault correction goes on. A software
reliability growth model mathematically summarizes a set of assumptions about the
phenomenon of software failure. The model provides a general form for the failure
rate as a function of time and contains parameters that are determined either by
prediction or estimation.

The following software reliability prediction and growth models are extracted from
Rome Laboratory Technical Report RL-TR-92-15, "Reliability Techniques For
Combined Hardware and Software Systems." These models can be used to
estimate the reliability of initially released software along with the reliability
improvement which can be expected during debugging.

Initial Software Failure Rate

        λo = (ri K Wo) / I   failures per CPU second

where
        ri = host processor speed (instructions/sec)
        K  = fault exposure ratio, a function of program data dependency and
             structure (default = 4.2 x 10^-7)
        Wo = estimate of the total number of faults in the initial program
             (default = 6 faults/1000 lines of code)
        I  = number of object instructions, determined by the number of source
             lines of code times the expansion ratio

Programming Language Expansion Ratio


Assembler 1
Macro Assembler 1.5
C 2.5
COBOL 3
FORTRAN 3
JOVIAL 3
Ada 4.5


Software Reliability Growth

        λ(t) = λo e^(-βt)

where
        λ(t) = software failure rate at time t (in CPU time)
        λo   = initial software failure rate
        t    = CPU execution time (seconds)
        β    = decrease in failure rate per failure occurrence

               β = B (λo / Wo)

        B    = fault reduction factor (default = .955)
        Wo   = initial number of faults in the software program
               (default = 6 faults per 1,000 lines of code)

Example 1: Estimate the initial software failure rate and the failure rate after
40,000 seconds of CPU execution time for a 20,000 line Ada program:

        ri = 2 MIPS = 2,000,000 instructions/sec

        K = 4.2 x 10^-7

        Wo = (6 faults/1000 lines of code)(20,000 lines of code) = 120 faults

        I = (20,000 source lines of code)(4.5) = 90,000 instructions

        λo = (2,000,000 inst./sec)(4.2 x 10^-7)(120 faults) / 90,000 inst.

        λo = .00112 failures/CPU second

        β = B (λo / Wo) = (.955)(.00112 failures/sec / 120 faults)

        β = 8.91 x 10^-6 failures/sec

        λ(40,000) = .00112 e^-[(8.91 x 10^-6 failures/sec)(40,000 sec)]

        λ(40,000) = .000784 failures/CPU second
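
These two relationships are simple enough to script for sensitivity studies. The following Python sketch is a minimal illustration of the prediction and growth equations as summarized above (function names and structure are assumptions, not part of RL-TR-92-15); it reproduces Example 1.

    import math

    # Sketch only: initial failure rate and reliability growth per Topic A17.
    EXPANSION_RATIO = {"Assembler": 1, "Macro Assembler": 1.5, "C": 2.5,
                       "COBOL": 3, "FORTRAN": 3, "JOVIAL": 3, "Ada": 4.5}

    def initial_failure_rate(sloc, language, mips, k=4.2e-7, faults_per_ksloc=6.0):
        """lambda0 = ri*K*W0 / I, in failures per CPU second."""
        w0 = faults_per_ksloc * sloc / 1000.0            # initial fault estimate W0
        instructions = sloc * EXPANSION_RATIO[language]  # object instructions I
        return (mips * 1e6) * k * w0 / instructions, w0

    def failure_rate_after(t_cpu_sec, lam0, w0, b=0.955):
        """lambda(t) = lambda0 * exp(-beta*t), with beta = B*lambda0/W0."""
        beta = b * lam0 / w0
        return lam0 * math.exp(-beta * t_cpu_sec)

    lam0, w0 = initial_failure_rate(sloc=20000, language="Ada", mips=2)
    print(lam0)                                 # ~0.00112 failures/CPU second
    print(failure_rate_after(40000, lam0, w0))  # ~0.00078 failures/CPU second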


Section T
Testing
Contents
T1   ESS Process ......................................................... 129
T2   ESS Placement ....................................................... 130
T3   Typical ESS Profile ................................................. 131
T4   RGT and RQT Application ............................................. 133
T5   Reliability Demonstration Plan Selection ............................ 134
T6   Maintainability Demonstration Plan Selection ........................ 136
T7   Testability Demonstration Plan Selection ............................ 137
T8   FRACAS (Failure Reporting and Corrective Action System) ............. 138
T9   Reliability Demonstration Test Plan Checklist ....................... 140
T10  Reliability Test Procedure Checklist ................................ 144
T11  Maintainability Demonstration Plan and Procedure Checklist .......... 145
T12  R&M Test Participation Criteria ..................................... 146
T13  R&M Demonstration Checklist ......................................... 147
T14  Design of Experiments ............................................... 148
T15  Accelerated Life Testing ............................................ 153
T16  Time Stress Measurement ............................................. 159


Insight
A well tailored reliability and maintainability program contains several forms of
testing. Depending on the program constraints, a program should be invoked to
mature the designed-in reliability as well as to determine whether the contract
quantitative reliability and maintainability requirements have been achieved prior to
a commitment to production. All forms of testing (Environmental Stress Screening
(ESS), Reliability Growth, Reliability Demonstration) must be tailored to fit specific
program constraints. Test plans and procedures must be evaluated to ensure
proper test implementation. Test participation depends on the program situation
but test reports must be carefully evaluated by the government.

For More Information

MIL-STD-471       "Maintainability Verification/Demonstration/Evaluation"
MIL-STD-781       "Reliability Testing for Engineering Development,
                   Qualification and Production"
MIL-HDBK-781      "Reliability Test Methods, Plans, and Environments for
                   Engineering Development, Qualification, and Production"
DoD-HDBK-344      "Environmental Stress Screening of Electronic Equipment"
MIL-HDBK-189      "Reliability Growth Management"
RADC-TR-86-241    "Built-In-Test Verification Techniques"
RADC-TR-89-160    "Environmental Extreme Recorder"
RADC-TR-89-299    "Reliability & Maintainability Operational Parameter
                   Translation II"
RADC-TR-90-269    "Quantitative Reliability Growth Factors for ESS"
RL-TR-91-300      "Evaluation of Quantitative Environmental Stress
                   Screening (ESS) Methods"


Topic T1: ESS Process


Environmental Stress Screening (ESS) has been the subject of many recent
studies. Determination of the optimum screens for a particular product, built by a
particular manufacturer, at a given time is an iterative process. Procedures for
planning for and controlling the screening process are contained in DOD-HDBK-
344 (USAF), "Environmental Stress Screening of Electronic Equipment." The
process can be depicted as shown below:

Figure T1-1: ESS Process


Topic T2: ESS Placement


Assembly Level

  Advantages:
  ● Cost per flaw precipitated is lowest (unpowered screens)
  ● Small size permits batch screening
  ● Low thermal mass allows high rates of temperature change
  ● Temperature range greater than operating range allowable

  Disadvantages:
  ● Test detection efficiency is relatively low
  ● Test equipment cost for powered screens is high

Unit Level

  Advantages:
  ● Relatively easy to power and monitor performance during screen
  ● Higher test detection efficiency than assembly level
  ● Assembly interconnections (e.g., wiring backplane) are screened

  Disadvantages:
  ● Thermal mass precludes high rates of change or requires costly facilities
  ● Cost per flaw significantly higher than assembly level
  ● Temperature range reduced from assembly level

System Level

  Advantages:
  ● All potential sources of flaws are screened
  ● Unit interoperability flaws detected
  ● High test detection efficiency

  Disadvantages:
  ● Difficult and costly to test at temperature extremes
  ● Mass precludes use of effective vibration screens or makes use costly
  ● Cost per flaw is highest

Topic T3: Typical ESS Profile


Screen Type, Parameter                     Assemblies (Printed Wiring     Equipment or Unit
and Conditions                             Assemblies) (SRU)*             (LRU/LRM)*

Thermal Cycling Screen
  Temperature Range (Minimum)              From -50°C to +75°C            From -40°C to +71°C
  (See Note 1)
  Temperature Rate of Change (Minimum)     20°C/Minute                    15°C/Minute
  (See Notes 1 & 2)
  Temperature Dwell Duration (See Note 3)  Until Stabilization            Until Stabilization
  Temperature Cycles                       20 to 40                       12 to 20
  Power On/Equipment Operating             No                             (See Note 5)
  Equipment Monitoring                     No                             (See Note 6)
  Electrical Testing After Screen          Yes (At Ambient Temperature)   Yes (At Ambient Temperature)

Random Vibration (See Notes 7 and 8)
  Acceleration Level                       6 Grms                         6 Grms
  Frequency Limits                         20 - 2000 Hz                   20 - 2000 Hz
  Axes Stimulated Serially or
  Concurrently (See Note 9)                3                              3
  Duration of Vibration (Minimum)
    Axes stimulated serially               10 Minutes/Axis                10 Minutes/Axis
    Axes stimulated concurrently           10 Minutes                     10 Minutes
  Power On/Off                             Off                            On (See Note 5)
  Equipment Monitoring                     No                             Yes (See Note 6)

Piece Parts: Begin the manufacturing and repair process with 100 defects per million or
less (See Note 10).

*SRU - Shop Replaceable Unit   *LRM - Line Replaceable Module   *LRU - Line Replaceable Unit


Notes:
1. All temperature parameters pertain to agreed upon selected sample points inside the
unit being screened, not chamber air temperature.

2. Rapid transfers of the equipment between one chamber at maximum temperature and
another chamber at minimum temperature are acceptable. SRU temperature rates of
change may be reduced if equipment damage will occur at 20°C/minute.

3. The temperature has stabilized when the temperature of the part of the test item
considered to have the longest thermal lag is changing no more than 2°C per hour.

4. A minimum of 5 thermal cycles must be completed after the random vibration screen.
Random vibration frequently induces incipient failures.

5. Shall occur during the low to high temperature excursion of the chamber and during
vibration. When operating, equipment shall be at maximum power loading. Power will be
OFF on the high to low temperature excursion until stabilized at the low temperature.
Power will be turned ON and OFF a minimum of three times at temperature extremes on
each cycle.

6. Instantaneous go/no-go performance monitoring during the stress screen is essential to


identify intermittent failures when power is on.

7. Specific level may be tailored to individual hardware specimen based on vibration


response survey and operational requirements.

8. When random vibration is applied at the equipment level, random vibration is not
required at the subassembly level. However, subassemblies purchased as spares are
required to undergo the same random vibration required for the equipment level. An
"LRU mock-up" or equivalent approach is acceptable.

9. One axis will be perpendicular to plane of the circuit board(s)/LRM(s).

10. The Air Force or its designated contractor may audit part defective rates at its discretion.
The test procedure will include thermal cycling as outlined below. Sample sizes and test
requirements are included in the "Stress Screening Military Handbook," DOD-HDBK-
344.

Minimum Temperature Range From - 54°C to + 100°C


Minimum Temperature Rate of Change The total transfer time from hot-to-cold or cold-
to-hot shall not exceed one minute. The
working zone recovery time shall be five
minutes maximum after introduction of the load
from either extreme in accordance with MIL-
STD-883D.
Temperature Dwell Until Stabilization (See Note 3)
Minimum Temperature Cycles 25
Power On/Equipment Monitoring No
Electrical Testing After Screen Yes (At high and low temperatures)


Topic T4: RGT and RQT Application


The Reliability Qualification Test (RQT) is an "accounting task" used to measure
the reliability of a fixed design configuration. It has the benefit of holding the
contractor accountable, some time down the road from the initial design process,
for the delivered reliability. As such, the contractor is encouraged to seriously
carry out the other design related reliability tasks. The Reliability Growth Test
(RGT) is an "engineering task" designed to improve the design reliability. It
recognizes that the drawing board design of a complex system cannot be perfect from
a reliability point of view and allocates the necessary time to fine tune the design
by finding problems and designing them out. Monitoring, tracking and assessing the
resulting data gives insight into the efficiency of the process and provides
nonreliability persons with a tool for evaluating the development's reliability
status and for reallocating resources when necessary. The two forms of testing serve
very different purposes and complement each other in the development of systems and
equipment. An RGT is not a substitute for an RQT or other reliability design tasks.

Table T4-1: RGT and RQT Applicability as a Function of


System/Program Constraints

Reliability Qualification Test


System/Program Reliability Growth Test
Parameter Apply Consider Don't Apply Apply Consider Don't Apply

Challenge to state-of- X X
the-art
Severe use environment X X
One-of-a-kind system X X
High quantities to be X X
produced
Benign use environment X X
Critical mission X X
Design flexibility exists X X
No design flexibility X X
Time limitations X X
Funding limitations X X
Very high MTBF system X X


Topic T5: Reliability Demonstration Plan Selection


Topic T6: Maintainability Demonstration Plan Selection

                                        Program Constraints
Test              Calendar Time    Number of      Test Facility   Level of          Desired
Characteristic    Required         Equipments     Limitations     Maintainability   Confidence in
                                   Available                      Required          Results

Fixed sample Much less No effect on No effect on Fixed sample


size or than that sample size sample size size test gives
sequential required for number. number. demonstrated
type tests reliability maintainability
demo. Time to desired
required is confidence.
proportional to Sequential is
sample size test of
number. hypothesis.
Sample size
may vary
depending on
program.

Test plan risks Lower Must have No effect on Higher


(consumer and producer and ability to sample size confidence
producer) (1 - consumer simulate number. levels require
consumer risk risks require operational more samples
= confidence) larger sample maintenance than lower
Risks can be sizes than environment, confidence
tailored to higher risks. scenario, levels.
program skills, levels
available.

Note: Demonstration facility must have capacity for insertion of simulated faults.


Topic T7: Testability Demonstration Plan Selection


                                  Program Constraints
Test              Calendar Time     Number of       Test Facility    Desired
Characteristic    Required          Equipments      Limitations      Confidence in
                                    Available                        Results

Fixed sample Calendar time No effect on Same as that Provides for


size type tests much less than that sample size required for producer's risks of
required for number. maintainability 10%. Provides
reliability demonstration. consumer
demonstration. assurance that
Time required is designs with
proportional to significant
sample size. May deviations from
vary depending on specified values
program. will be rejected.

Preset Risks Risks inversely


(consumer and proportional to
producer) (1 - sample size used.
consumer risk
= confidence)

Notes:

1. Sample size dependent on total number of sample maintenance tasks selected as per
paragraph A.10.4 of MIL-STD-471A.

2. Demonstration facility must have capability for insertion of simulated faults.


Topic T8: FRACAS (Failure Reporting and Corrective Action System)

Early elimination of failure trends is a major contributor to reliability growth and
attaining the needed operational reliability. To be effective, a closed loop
coordinated process must be implemented by the system/equipment contractor. A
description of the major events and the participant's actions is shown below.

Event                     Functions              Actions

Failure or Malfunction    Operators:             ● Identify a problem, call for maintenance,
                                                   annotate the incident.
                          Maintenance:           ● Corrects the problem, logs the failure.
                          Quality:               ● Inspects the correction.

Failure Report            Maintenance:           ● Generates the failure report with supporting
                                                   data (time, place, equipment, item, etc.).
                          Quality:               ● Insures completeness and assigns a travel
                                                   tag for the failed item for audit control.

Data Logged               R&M:                   ● Log all the failure reports, validate the
                                                   failures and forms, classify the failures
                                                   (inherent, induced, false alarm).

Failure Review            R&M:                   ● Determine failure trends (i.e., several
                                                   failures of the same or similar part).
                          Design:                ● Review operating procedures for error.

Failure Analysis          R&M:                   ● Decide which parts will be destructively
                                                   analyzed.
                          Physics of Failure:    ● Perform failure analysis to determine the
                                                   cause of failure (i.e., part or external).

Failure Correction        Quality:               ● Inspect incoming test data for the part.
                          Design:                ● Redesign hardware, if necessary.
                          Vendor:                ● New part or new test procedure.

Post Data Review          Quality:               ● Evaluate incoming test procedures, inspect
                                                   redesigned hardware.
                          R&M:                   ● Close the loop by collecting and evaluating
                                                   post test data for reoccurrence of the
                                                   failure.

Figure T8-1: Failure Reporting System Flow Diagram


Table T8-1: FRACAS Evaluation Checklist

Topic Items to Be Addressed

General ● Closed loop (i.e., reported, analyzed, corrected and


verified)

● Responsibility assigned for each step

● Overall control by one group or function

● Audit trail capability

● Travel tags for all failed items

● Fast turn-around for analysis

Failure Report ● Clear description of each event

● Surrounding conditions noted

● Operating time indicated

● Maintenance repair times calculated

● Built-in-test indications stated

Failure Analysis ● Perform if three or more identical or similar parts fail

● Perform if unit reliability is less than half of predicted

● Results should indicate: overstress condition,


manufacturing defect, adverse environmental condition,
maintenance induced or wearout failure mode

Failure Data ● Collated by week and month by unit

● Compared to allocated values

● Reliability growth tracked

● Problems indicated and tracked

● Correction data collected for verification


Topic T9: Reliability Demonstration Test Plan Checklist*

Topic Items to Be Addressed

Purpose and Scope ● Statement of overall test objectives


● General description of all tests to be performed
Reference Documents ● List all applicable reference documents
Test Facilities ● Description of test item configuration
● Sketches of system layout during testing
● Serial numbers of units to be tested
● General description of test facility
● Identification of test location
● General description of failure analysis facility
● Security of test area
● Security of test equipment and records
● Test safety provisions
Test Requirements ● Pre-reliability environmental stress screening (ESS)
● Test length
● Number of units to be tested
● Number of allowable failures
● Description of MIL-HDBK-781 test plan showing accept, reject
and continue test requirements
● List of government furnished equipment
● List and schedule of test reports to be issued
Test Schedule ● Start date (approximate)
● Finish date (approximate)
● Test program review schedule
● Number of test hours per day
● Number of test days per week
Test Conditions ● Description of thermal cycle
● Description of thermal survey
● Description of vibration survey
● Description of unit under test mounting method
● Description of test chamber capabilities
● List of all limited life items and their expected life


Topic Items to Be Addressed

Test Conditions ● Description of all preventive maintenance tasks and


(cont'd) their frequency
● Description of unit under test calibration requirements
● Description of unit under test duty cycle
● General description of unit under test operating modes and
exercising method
Test Monitoring ● Description of test software and software verification method
● List of all units under test functions to be monitored and
monitoring method
● List of all test equipment parameters to be monitored and
monitoring method
● Method and frequency of recording all monitored parameters
Test Participation ● Description of all contractor functions
● Description of all contractor responsibilities
● Description of all government responsibilities
● Description of test management structure
Failure Definitions The following types of failures should be defined as relevant in
the test plan:
● Design defects
● Manufacturing defects
● Physical or functional degradation below specification limits
● Intermittent or transient failures
● Failures of limited life parts which occur before the specified
life of the part
● Failures which cannot be attributed to a specific cause
● Failure of built-in-test (BIT)

The following types of failures should be defined as nonrelevant


in the test plan:
● Failures resulting from improper installation or handling
● Failure of instrumentation or monitoring equipment which is
external to equipment under test
● Failures resulting from overstress beyond specification limits
due to a test facility fault
● Failures resulting from procedural error by technicians
● Failures induced by repair actions
● A secondary failure which is the direct result of a failure of
another part within the system.


Topic Items to Be Addressed

Test Ground Rules The following test ground rules should be stated in the test plan:
● Transient Failures - Each transient or intermittent failure is to
be counted as relevant. If several intermittent or transient
failures can be directly attributed to a single hardware or
software malfunction which is corrected and verified during the
test, then only a single failure will be counted as relevant.
● Classification of Failures - All failures occurring during
reliability testing, after contractor failure analysis, shall be
classified as either relevant or nonrelevant. Based on the
failure analysis, the contractor shall justify the failure as
relevant or nonrelevant to the satisfaction of the procuring
activity.
● Pattern Failure - A pattern failure is defined as three or more
relevant failures of the same part in identical or equivalent
applications whose 95th percentile lower confidence limit
failure rate exceeds that predicted.
● Malfunctions Observed During Test Set Up, Troubleshooting
or Repair Verification - Malfunctions occurring during test set
up, troubleshooting or repair verification tests shall not be
considered as reliability test failures; however, such
malfunctions shall be recorded and analyzed by the contractor
to determine the cause of malfunctions and to identify possible
design or part deficiencies.
● Test Time Accumulation - Only the time accumulated during
the equipment power "on" portion of the test cycle shall be
considered as test time, provided that all functions are
operating as required. Operating time accumulated outside
the operational cycles such as during tests performed to
check out the setup or to verify repairs shall not be counted.
Also, time accumulated during degraded modes of operation
shall not be counted.
● Design Changes to the Equipment:
- After test reject decision—With procuring activity approval,
the equipment may be redesigned and retested from time
zero.

- Major design change prior to test reject—The contractor


may stop the test for purposes of correcting a major
problem. The test will restart from time zero after the
design change has been made.

- Minor design change prior to test reject—With procuring


activity approval, the test may be halted for the purpose of
making a minor design change. Test time will resume from
the point at which it was stopped and the design change
shall have no effect on the classification of previous
failures. Minor changes made as a result of other testing
may be incorporated, with procuring activity approval,
without declaring a failure of the equipment under test.


Topic Items to Be Addressed

Test Ground Rules ● Failure Categorization - In order to clearly evaluate test results
(cont'd) and identify problem areas, failure causes will be categorized
as: (1) deficient system design, (2) deficient system quality
control, and (3) deficient part design or quality.

Test Logs The following types of test logs should be described in the test
plan:

● Equipment Data Sheets - used to record the exact values of


all parameters measured during functional testing of the
equipment.

● Test Log - a comprehensive narrative record of the required


test events. All names and serial numbers of the equipments
to be tested shall be listed before start of the test. An entry
shall be made in the test log each time a check is made on the
equipment under test, including date, time, elapsed time, and
result (e.g., pass/malfunction indication/failure, etc.). An entry
entry shall be made in the log whenever a check is made of
the test facilities or equipments (such as accelerometers,
thermocouples, input power, self-test, etc.). In the event of a
failure or malfunction indication, all pertinent data, such as test
conditions, facility conditions, test parameters and failure
indicators, will be recorded. The actions taken to isolate and
correct the failure shall also be recorded. Whenever
engineering changes, or equipment changes are
implemented, an entry shall be made in the log.

● Failure Summary Record - the failure summary record must


chronologically list all failures that occur during the test. This
record must contain all the information needed to reach an
accept or reject decision for the test. Each failure must be
described and all failure analysis data must be provided.

● Failure Report - for each failure that occurs, a failure report


must be initiated. The report should contain the unit that failed,
serial number, time, date, symptoms of failure, and part or
parts that failed.

*Most of these contents also apply to reliability growth testing.


Topic T10: Reliability Test Procedure Checklist


Topic    Items to Be Addressed

Equipment Operation - A general description of the equipment under test and its
operation must be provided.

On/Off Cycle - Specific on/off times for each subsystem must be described.

Operation Modes - Specific times of operation for each system/subsystem mode must
be described.

Exercising Methods - Methods of exercising all system/subsystem operation modes
must be described. (Note: The system should be exercised continuously, not just
powered on.)

Performance Verification Procedure - Step by step test procedures must be provided
which fully describe how and when each performance parameter will be measured.
Acceptable and unacceptable limits of each measured parameter should also be
specified. All failure and out-of-tolerance indicators must be described and their
location defined. Programmable alarm thresholds must be specified.

Failure Event Procedure - Step by step procedures must describe specific actions to
be taken in the event of a trouble indication.

Adjustments and Preventive Maintenance - Step by step procedures must be provided
which fully describe how and when all adjustments and preventive maintenance
actions will be performed.


Topic T11: Maintainability Demonstration Plan and Procedure Checklist
Topic Items to Be Addressed
Purpose and Scope ● Statement of general test objectives
● General description of test to be performed

Reference Documents ● List of all applicable reference documents

Test Facilities ● Description of test item configuration


● Sketches of system layout during testing
● Serial numbers of units to be tested
● General description of site and test facility
● Description of all software and test equipment

Test Requirements ● Description of MIL-STD-471 test plan requirements


● Method of generating candidate fault list
● Method of selecting and injecting faults from candidate list
● List of government furnished equipment
● List and schedule of test reports to be issued
● Levels of maintenance to be demonstrated
● Spares and other support material requirements

Test Schedule ● Start and finish dates (approximate)


● Test program review schedule

Test Conditions ● Description of environmental conditions under which test


will be performed
● Modes of equipment operation during testing

Test Monitoring ● Method of monitoring and recording test results

Test Participation ● Test team members and assignments


● Test decision making authority

Test Ground Rules with Respect to:
● Instrumentation failures
● Maintenance due to secondary failures
● Technical manual usage and adequacy
● Maintenance inspection, time limits and skill level

Testability Demonstration:
● Repair levels for which requirements will be demonstrated
● Built-in-test requirements to be demonstrated
● External tester requirements to be demonstrated
● Evaluation method for making pass/fail decision
● Performance of FMEA prior to test start
● Method of selecting and simulating candidate faults
● Acceptable levels of ambiguity at each repair level


Topic T12: Reliability and Maintainability Test Participation Criteria
Degree of Participation Depends On:
● Availability of program resources to support on-site personnel

● How important R&M are to program success

● Availability and capability of other government on-site personnel

Test Preliminaries
● All test plans and procedures must be approved

● Agreements must be made among government personnel with respect to


covering the test and incident reporting procedures

● Units under test and test equipment including serial numbers should be
documented

● Working fire alarms, heat sensors and overvoltage alarms should be used

● Trial survey runs should be made per the approved test plan

Test Conduct
● Approved test plans and procedures must be available and strictly adhered to

● Equipment must not be tampered with

● Test logs must be accurately and comprehensively maintained

● Appropriate government personnel must be kept informed

● Only authorized personnel should be allowed in area (a list should be posted)

● Test logs, data sheets, and failure reports should be readily available for
government review

● Units under test should be sealed to prevent tampering or unauthorized


repair

● A schedule of inspections and visits should be maintained

● No repairs or replacements should be made without a government witness

● Government representatives must take part in failure review process

● Failed items should have "travel tags" on them

● Technical orders should be used for repair if available


Topic T13: Reliability and Maintainability Demonstration Reports Checklist
● Identification and description of equipment/system tested

● Demonstration objectives and requirements


● Test Plans, Risks and Times
● Test Conditions
● Test Deviations and Risk Assessment
● Test Facilities

● Data Analysis Techniques
● Statistical Equations
● Accept/Reject Criteria

● Test Results (Summarized)


Reliability:
● Test Hours
● Number of Failures/Incidents
● Classification of Failures
● Data Analysis Calculations
● Application of Accept/Reject Criteria
● Failure Trends/Design and Process Deficiencies
● Status of Problem Corrections

Maintainability:
● Maintenance Tasks Planned and Selected
● Task Selection Method
● Personnel Qualifications Performing Tasks
● Documentation Used During Maintenance
● Measured Repair Times
● Data Analysis Calculations
● Application of Accept/Reject Criteria
● Discussion of Deficiencies Identified

Testability
● Summary data for each item involved in testability demonstration
including original plans, summarized results and any corrective action
taken.
● Recommended action to be taken to remedy testability deficiencies or
improve the level of testability achievable through prime equipment
engineering changes, ATE improvements and/or test program set
improvements.

● Data
● Test Logs and Failure Reports
● Failure Analysis Results


Topic T14: Design of Experiments


Design of Experiments is a very efficient, statistically based method of
systematically studying the effects of experimental factors on response variables of
interest. The efficiency is achieved through greatly reduced test time because the
effects of varying multiple input factors at once can be systematically studied. The
technique can be applied to a wide variety of product design, process design, and
test and evaluation situations. Many books have been written on various
experimental design strategies which cannot possibly be addressed in these few
pages. It is the intent of this section only to give the reader a brief introduction to
Design of Experiments by providing a single numerical example of what is called a
fractional factorial design. Some other competing design strategies, each with
its own strengths and weaknesses, include Full Factorial, Plackett-Burman,
Box-Behnken, and Taguchi.

Improved levels of reliability can be achieved through the use of Design of


Experiments. Design of Experiments allows the experimenter to examine and
quantify the main effects and interactions of factors acting on reliability. Once
identified, the main factors affecting reliability (some of which may be
uncontrollable, such as weather) can be dealt with systematically and scientifically.
Their adverse effects on the system design can be minimized, thereby meeting
performance specifications while remaining insensitive to uncontrollable factors.
The following example illustrates the general procedure and usefulness of Design
of Experiments. The example is broken down into a series of steps which illustrate
the general procedure of designing experiments.

Example: Fractional Factorial Design


An integrated circuit manufacturer desired to maximize the bond strength of a die
mounted on an insulated substrate since it was determined that bonding strength
problems were resulting in many field failures. A designed experiment was
conducted to maximize bonding strength.

Step 1 - Determine Factors: It isn't always obvious which factors are important.
A good way to select factors is through organized "brainstorming". Ishikawa charts
(see Introduction) are helpful in organizing cause and effect related data. For our
example, a brainstorming session was conducted and four factors were identified
as affecting bonding strength: (1) epoxy type, (2) substrate material, (3) bake time,
and (4) substrate thickness.

Step 2 - Select Test Settings: Often, as with this example, high and low settings
are selected. This is referred to as a two-level experiment. (Design of Experiments
techniques are often used for more than two-level experiments.) The four factors
and their associated high and low settings for the example are shown in Table
T14-1. The selection of high and low settings is arbitrary (e.g. Au Eutectic could be
"+" and Silver could be "-").


Table T14-1: Factors and Settings

Factor                         Low (-)         High (+)
A. Epoxy Type                  Au Eutectic     Silver
B. Substrate Material          Alumina         Beryllium Oxide
C. Bake Time (at 90°C)         90 Min          120 Min
D. Substrate Thickness         .025 in         .05 in

Step 3 - Set Up An Appropriate Design Matrix: For our example, to investigate


all possible combinations of four factors at two levels (high and low) each would
require 16 (i.e., 2^4) experimental runs. This type of experiment is referred to as a
full factorial. The integrated circuit manufacturer decided to use a one half replicate
fractional factorial with eight runs. This decision was made in order to conserve
time and resources. The resulting design matrix is shown in Table T14-2. The
Table T14-2 "+, -" matrix pattern is developed utilizing a commonly known Design
of Experiments method called Yates algorithm. The test runs are randomized to
minimize the possibility of outside effects contaminating the data. For example, if
the tests were conducted over several days in a room where the temperature
changed slightly, randomizing the various test trials would tend to minimize the
effects of room temperature on the experimental results. The matrix is orthogonal
which means that it has the correct balancing properties necessary for each factor's
effect to be studied statistically independent from the rest. Procedures for setting
up orthogonal matrices can be found in any of the references cited.
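
The mechanics of setting up such a matrix are easy to sketch in software. The short
Python fragment below is an illustration added for this discussion (it is not part of
the original text): it builds the eight-run half fraction by writing the full factorial
in A, B and C and generating the D column from the ABC interaction, one common way to
obtain an orthogonal 2^(4-1) design matching the "+, -" pattern of Table T14-2.

# Sketch: construct a 2^(4-1) fractional factorial (8 runs) using the
# defining relation D = ABC, then verify that the columns are orthogonal.
from itertools import product

def build_half_fraction():
    runs = []
    for a, b, c in product((-1, 1), repeat=3):   # full factorial in A, B, C
        runs.append((a, b, c, a * b * c))        # generator: D = ABC
    return runs

def as_sign(v):
    return "+" if v > 0 else "-"

runs = build_half_fraction()
print("Run  A  B  C  D")
for i, row in enumerate(runs, start=1):
    print(f"{i:>3}  " + "  ".join(as_sign(v) for v in row))

# Orthogonality check: every pair of columns has a zero dot product,
# so each factor's effect can be estimated independently of the others.
cols = list(zip(*runs))
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        assert sum(x * y for x, y in zip(cols[i], cols[j])) == 0

Randomizing the order in which these eight treatment combinations are actually run
(Step 4) is still done separately, for the reasons given above.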

Step 4 - Run The Tests: The tests are run randomly at each setting shown in the
rows of the array. The trial run order is determined by a random number table or
any other type of random number generator. Resultant bonding strengths from
testing are shown in Table T14-2 .

Table T14-2: Orthogonal Design Matrix With Test Results

Treatment      Random Trial        Factors          Bonding Strength (psi)
Combination    Run Order        A   B   C   D                 y
     1              6           -   -   -   -                73
     2              5           -   -   +   +                88
     3              3           -   +   -   +                81
     4              8           -   +   +   -                77
     5              4           +   -   -   +                83
     6              2           +   -   +   -                81
     7              7           +   +   -   -                74
     8              1           +   +   +   +                90
                                                     Σyi  =  647

Mean y = Σyi / 8 = 647/8 = 80.875


Step 5 - Analyze The Results: This step involves performing statistical analysis to
determine which factors and/or interactions have a significant effect on the
response variable of interest. As was done in Table T14-3, interactions and
aliasing (aliasing is defined as two or more effects that have the same numerical
value) patterns must be identified. The impact on the response variable caused by
"A or BCD" cannot be differentiated between factor A or the interaction of BCD.
This is the penalty which is paid for not performing a full factorial experiment (i.e.,
checking every possible combination). The determination of aliasing patterns are
unique to each experiment and are described in many Design of Experiments
textbooks. The assumption is usually made that 3-way interactions such as BCD
are negligible. An Analysis of Variance is then performed as shown in Table T14-4
to determine which factors have a significant effect on bonding strength. The steps
involved in performing an Analysis of Variance for this example are:

5A. Calculate Sum of Squares: From Table T14-3 the Sum-of-Squares


for a two level, single replicate experiment is computed for all factors and
interactions as illustrated below for the A factor (Epoxy Type).

Sum of Sq. (Factor A) = (# of treatment combinations / 4) (Avg(+) - Avg(-))²

Sum of Sq. (Factor A) = (8/4)(2.25)² = 10.125

5B. Calculate Error: The Sum of Squares for the error in this case is set
equal to the sum of the Sum of Squares values for the three two-way
interactions (i.e., AB or CD, AC or BD, BC or AD). This is known as
pooling the error. This error is calculated as follows: Error = 1.125 +
1.125 + .125 = 2.375.

5C. Determine Degrees of Freedom. Degrees of Freedom is the


number of levels of each factor minus one. Degrees of Freedom (df) is
always 1 for factors and interactions for a two level experiment as shown
in this simplified example. Degrees of Freedom for the error (dferr) in this
case is equal to 3 since there are 3 interaction Degrees of Freedom. dfF
denotes degrees of freedom for a factor.

5D. Calculate Mean Square. Mean Square equals the sum of squares
divided by the associated degrees of freedom. Mean Square for a two
level, single replicate experiment is always equal to the sum of squares
for all factors. Mean Square for the error in this case is equal to the Sum
of Squares error term divided by 3 (3 is the df of the error).

5E. Perform F Ratio Test for Significance. To determine the F ratio the
mean square of the factor is divided by the mean square error (.792) from
Table T14-4. F (α, dfF, dferr) represents the critical value of the statistical
F-distribution and is found in look-up tables in most any statistics book.
Alpha (α) represents the level at which you are willing to risk in concluding
that a significant effect is not present when in actuality it is. If the F ratio is
greater than the looked up value of F (α, dfF, dferr) then the factor


does have a significant effect on the response variable. (F (.1,1,3) = 5.54


in this case).

As a word of caution, the above formulations are not intended for use in a
cookbook fashion. Proper methods for computing Sum of Squares, Mean Square,
Degrees of Freedom, etc. depend on the experiment type being run and can be
found in appropriate Design of Experiments reference books.

Table T14-3: Interactions, Aliasing Patterns and Average "+" and "-" Values

Treatment      A or    B or    AB or   C or    AC or   BC or   D or    Bonding
Combination    BCD     ACD     CD      ABD     BD      AD      ABC     Strength* y
    1           -       -       +       -       +       +       -        73
    2           -       -       +       +       -       -       +        88
    3           -       +       -       -       +       -       +        81
    4           -       +       -       +       -       +       -        77
    5           +       -       -       -       -       +       +        83
    6           +       -       -       +       +       -       -        81
    7           +       +       +       -       -       -       -        74
    8           +       +       +       +       +       +       +        90
Avg (+)        82      80.5    81.25   84      81.25   80.75   85.5
Avg (-)        79.75   81.25   80.5    77.75   80.5    81      76.25
Δ = Avg(+) - Avg(-)
               2.25    -.75    .75     6.25    .75     -.25    9.25

*The mean bonding strength calculated from this column is 80.875.

Table T14-4: Results of Analysis of Variance

Source                     Sum of     Degrees of   Mean       F ratio*   Significant
                           Squares    Freedom      Square                Effect
Epoxy Type (A)              10.125        1         10.125     12.789    Yes
Substrate Material (B)       1.125        1          1.125      1.421    No
Bake Time (C)               78.125        1         78.125     98.684    Yes
Substrate Thickness (D)    171.125        1        171.125    216.158    Yes
A x B or C x D               1.125        1           --         --      --
A x C or B x D               1.125        1           --         --      --
B x C or A x D               0.125        1           --         --      --
Error                        2.375        3          .792        --

*Example Calculation: F = Mean Square/Error = 10.125/.792 = 12.789
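
The arithmetic in Tables T14-3 and T14-4 can be cross-checked with a few lines of
code. The following Python sketch (added here as an illustration, not taken from the
original text) recomputes the average "+" and "-" values, the sums of squares, the
pooled interaction error, and the F ratios directly from the Table T14-2 responses.

# Sketch: re-derive the Table T14-3 effects and Table T14-4 ANOVA from the
# bonding strength data of Table T14-2 (two-level, single replicate design).
y = [73, 88, 81, 77, 83, 81, 74, 90]              # bonding strength (psi)
A = [-1, -1, -1, -1, +1, +1, +1, +1]
B = [-1, -1, +1, +1, -1, -1, +1, +1]
C = [-1, +1, -1, +1, -1, +1, -1, +1]
D = [-1, +1, +1, -1, +1, -1, -1, +1]
AB = [a * b for a, b in zip(A, B)]                # aliased with CD
AC = [a * c for a, c in zip(A, C)]                # aliased with BD
BC = [b * c for b, c in zip(B, C)]                # aliased with AD

def effect(col):
    """Avg(+) minus Avg(-) for a column of +/-1 settings."""
    plus = [v for s, v in zip(col, y) if s > 0]
    minus = [v for s, v in zip(col, y) if s < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

def sum_of_squares(col):
    # (# of treatment combinations / 4) * (Avg(+) - Avg(-))^2, as in Step 5A
    return (len(y) / 4) * effect(col) ** 2

print(f"mean bonding strength = {sum(y) / len(y):.3f}")        # 80.875

ss_error = sum(sum_of_squares(c) for c in (AB, AC, BC))        # pooled error, df = 3
ms_error = ss_error / 3                                        # 0.792

for name, col in (("A", A), ("B", B), ("C", C), ("D", D)):
    ss = sum_of_squares(col)       # df = 1 for each factor, so mean square = SS
    print(f"{name}: effect {effect(col):+6.2f}  SS {ss:8.3f}  F {ss / ms_error:8.3f}")
print(f"pooled error: SS {ss_error:.3f}  MS {ms_error:.3f}")
# Compare each F ratio against F(0.10; 1, 3) = 5.54: A, C and D are significant.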


Step 6 - Calculate Optimum Settings: From the Analysis of Variance, the factors
A, C, and D were found to be significant at the 10% level. In order to maximize the
response, i.e. bonding strength, we can determine optimum settings by inspecting
the following prediction equation:

y = (mean bonding strength) + 2.25A + 6.25C + 9.25D

Since A, C, and D are the only significant terms they are then the only ones found
in the prediction equation. Since A, C, and D all have positive coefficients they
must be set at high to maximize bonding strength. Factor B, substrate material,
which was found to be nonsignificant should be chosen based on its cost since it
does not affect bonding strength. A cost analysis should always be accomplished
to assure that all decisions resulting from designed experiments are cost-effective.

Step 7 - Do Confirmation Run Test: Since there may be important factors not
considered or nonlinear effects, the optimum settings must be verified by test. If
they check out, the job is done. If not, some new tests must be planned.

Design of Experiments References:

Barker, T. B., "Quality By Experimental Design," Marcel Dekker Inc., 1985.

Box, G.E.P., Hunter, W. G., and Hunter, J. S., "Statistics for Experimenters," John
Wiley & Sons, New York, 1978.

Davies, O. L., "The Design and Analysis of Industrial Experiments," Hafner


Publishing Co.

Hicks, C.R., "Fundamental Concepts in the Design of Experiments," Holt, Rinehart


and Winston, Inc, New York, 1982

Schmidt, S. R. and Launsby, R. G., "Understanding Industrial Designed


Experiments," Air Academy Press, Colorado Springs CO, 1989

Taguchi, G., "Introduction to Quality Engineering," American Supplier Institute, Inc,


Dearborn MI, 1986


Topic T15: Accelerated Life Testing


Accelerated life testing employs a variety of high stress test methods that shorten
the life of a product or quicken the degradation of the product's performance. The
goal of such testing is to efficiently obtain performance data that, when properly
analyzed, yields reasonable estimates of the product's life or performance under
normal conditions.

Why Use It?


● Considerable savings of time and money

● Quantify the relationship between stress and performance

● Identify design and manufacturing deficiencies

Why Not?
● Difficulty in translating the stress data to normal use levels

● High stress testing may damage systems

● Precipitated failures may not represent use level failures

Test Methods
Most accelerated test methods involving electronics are limited to temperature or
voltage. However, other methods have included: acceleration, shock, humidity,
fungus, corrosion, and vibration.

Graphical Analysis
The advantages are:

● Requires no statistics

● Easily translates the high stress data to normal levels

● Very convincing and easy to interpret

● Provides visual estimates over any range of stress

● Verifies stress/performance relations


The disadvantages are:

● Lacks objectivity

● Has statistical uncertainty

● Relies on an assumed relationship which may not fit the data

Test Design
All test conditions should be limited to three elevated stress levels (considering
budget, schedule, and chamber capabilities) with the following conditions:

● Test stress should exceed maximum operating limits

● Test stress should not exceed maximum design limits

● Stress levels should precipitate only failure modes that occur at normal use levels

Test Units
The units shall be allocated to the particular stress levels so that most of the units
are at the lower stress levels and fewer units at the higher. If 20 test units are
available, a reasonable allocation would be 9 units at the lowest level and 7 and 4
at the higher levels. This allocation scheme is employed so that the majority of the
test data is collected nearest to the operating levels of stress. Three units should be
considered a minimum for the higher levels of stress; if fewer than 10 units are
available for test, design for only two levels.

Data Analysis: Probability Plot


The operational performance (time before failure in most cases) of nearly all
electronic and electromechanical systems can be described by either the
Lognormal or Weibull probability density functions (pdf). The pdf describes how
the percentage of failures is distributed as a function of operating time. The
probability plot of test data is generated as follows:

● Rank the failure times from first to last for each level of test stress (nonfailed
units close out the list).

● For each failure time, rank i, calculate its plotting position as:

      P = 100 (i - .5) / n

  where n is the total number of units on test at that level.

● Plot P versus the failure time for each failure at each stress level on
appropriately scaled graph paper (either Logarithmic or Weibull).


● Visually plot lines through each set (level of stress) of points. The lines
should plot parallel, weighting the tendency of the set with the most data
heaviest. If the lines do not plot reasonably parallel, investigate failure
modes.

Data Analysis: Relationship Plot


The relationship plot is constructed on an axis that describes unit performance as a
function of stress. Two of the most commonly assumed relations are the Inverse
Power and the Arrhenius Relationship. The relationship plot is done as follows:

● On a scaled graph, plot the 50% points determined from the probability plot
for each test stress.

● Through these 50% points, plot a single line, projecting beyond the upper and
lower points.

● From this plot locate the intersection of the plotted line and the normal stress
value. This point, read from the time axis, represents the time at which 50%
of the units will fail while operating under normal conditions.

● Plot the time determined in step three on the probability plot. Draw a line
through this point parallel to those previously drawn. This resulting line
represents the distribution of failures as they occur at normal levels of stress.

Example: Probability and Relationship Plots


Consider an electronic device life test that demonstrates an Arrhenius
performance/stress relationship that fails lognormally at any given level of stress.
Engineers wish to determine the unit's reliability (MTBF) at 90°C (maximum
operating temperature). There are 20 units available for test.

After reviewing the design and considering the potential failure modes, the
engineers concluded that the units could survive at temperatures in excess of
230°C without damage. The engineers did, however, estimate that non-regular
failure modes would be precipitated above this temperature; therefore, 230°C was
established as the maximum test level with 150°C and 180°C as interim stress
levels. The test units were allocated to three test levels and run for 1000 hours.
The resulting failure times are shown in Table T15-1.


Table T15-1: Test Results

9 Units @ 150°C                    7 Units @ 180°C                    4 Units @ 230°C
Time to Failure (Hrs.)  Rank  P    Time to Failure (Hrs.)  Rank  P    Time to Failure (Hrs.)  Rank  P
567 1 5.5 417 1 7.1 230 1 12.5
688 2 16.6 498 2 21.4 290 2 37.5
750 3 27.7 568 3 35.7 350 3 62.5
840 4 38.8 620 4 50.0 410 4 87.5
910 5 50.0 700 5 64.3
999 6 61.1 770 6 78.6
--- 7 --- 863 7 92.9
--- 8 ---
*--- 9 ---
* Unit still operating at 1000 hours
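
The plotting positions in Table T15-1, and a rough analytical counterpart of the
graphical procedure, can be checked numerically. The Python sketch below is an
illustration only and is not part of the original text: it reproduces the P values,
estimates the 50% point at each stress level by a least-squares fit of ln(time)
against the standard normal quantile of P, and then extrapolates those medians to
90°C with an assumed Arrhenius relationship (ln t50 linear in 1/T). A least-squares
fit will not exactly match values read from hand-drawn plots, so its 90°C result
should only be broadly comparable to the graphical estimates discussed next.

# Sketch: plotting positions and a simple lognormal/Arrhenius fit for the
# Table T15-1 data (requires Python 3.8+ for statistics.NormalDist).
import math
from statistics import NormalDist

tests = {            # stress (deg C): (units on test, observed failure times, hours)
    150: (9, [567, 688, 750, 840, 910, 999]),
    180: (7, [417, 498, 568, 620, 700, 770, 863]),
    230: (4, [230, 290, 350, 410]),
}

def least_squares(x, y):
    """Return (slope, intercept) of an ordinary least-squares line."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) /
             sum((xi - xb) ** 2 for xi in x))
    return slope, yb - slope * xb

t50 = {}
for temp_c, (n_units, times) in tests.items():
    p = [100 * (i - 0.5) / n_units for i in range(1, len(times) + 1)]
    for rank, (t, pi) in enumerate(zip(times, p), start=1):
        print(f"{temp_c} C  rank {rank:>2}  t = {t:>4} h  P = {pi:5.1f}%")
    # Fit ln(t) versus the standard normal quantile of P; the intercept (z = 0)
    # is the estimated median life at this stress level.
    z = [NormalDist().inv_cdf(pi / 100) for pi in p]
    slope, intercept = least_squares(z, [math.log(t) for t in times])
    t50[temp_c] = math.exp(intercept)
    print(f"estimated t50 at {temp_c} C ~ {t50[temp_c]:.0f} h")

# Assumed Arrhenius relationship: ln(t50) linear in 1/T (T in kelvin).
inv_T = [1 / (temp_c + 273.15) for temp_c in t50]
slope, intercept = least_squares(inv_T, [math.log(v) for v in t50.values()])
t50_90 = math.exp(intercept + slope / (90 + 273.15))
print(f"extrapolated t50 at 90 C ~ {t50_90:.0f} h "
      "(compare with the value read graphically from Figure T15-2)")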

The probability and relationship plots are shown in Figures T15-1 & T15-2. From
Figure T15-2 it is estimated that 50% of the units will fail by 3500 hours while
operating at 90°C. Further, from Figure T15-1, it can be estimated that at 90°C,
10% of the units will fail by 2200 hours and 10% will remain (90% failed) at 5000
hours.

This type of testing is not limited to device or component levels of assembly.


Circuit card and box level assemblies can be tested in a similar manner. Generally,
for more complex test units, the probability plot will be developed on Weibull paper,
while the relationship plot will likely require a trial and error development utilizing
several inverse power plots to find an adequate fit.


Figure T15-1: Lognormal Plot


Figure T15-2: Arrhenius Plot


Topic T16: Time Stress Measurement


Environmental factors, such as temperature, humidity, vibration, shock, power
quality, and corrosion impact the useful lifetime of electronic equipment. Knowing
the environmental conditions under which the equipment is operated provides
insight into equipment failure mechanisms. The capability to measure
environmental parameters will help reduce and control the incidence of Retest OK
(RTOK) and Cannot Duplicate (CND) maintenance events which account for 35%
to 65% of the indicated faults in Air Force avionics systems. Many of these RTOK
and CND events are environmentally related and a record of the environmental
conditions at the time of occurrence should greatly aid in the resolution of these
events.

Active Time Stress Measurement Devices (TSMD)


● Module TSMD: The module developed by the Rome Laboratory is physically
6" x 4" x 1.25" and measures and records temperature, vibration, humidity,
shock, corrosion and power transients. This module operates
independently of the host equipment and records and stores data for later
retrieval.

● Micro TSMD: The micro version of the TSMD is a small hybrid circuit that is
suitable for mounting on a circuit card in a Line Replaceable Unit (LRU). All
the parameters measured by the module TSMD are recorded in the micro
version.

● Fault Logging TSMD: A new advanced device has been developed that is
suitable for circuit board mounting and measures environmental parameters
prior to, during, and after a Built-In-Test (BIT) detected fault
or event. The environment data will be used to correlate faults with
environmental conditions such as temperature, vibration, shock, cooling air
supply pressure, and power supply condition to better determine what impact
environment has on system failure.

● Quick Reliability Assessment Tool (QRAT): The objective of the effort is


to build a stand-alone, compact, portable, easily attachable system for quick
reaction measurement and recording of environmental stresses. The
parameters it measures include voltage, temperature, vibration and shock.
The system includes a debrief laptop computer, an electronics module with
internal sensors, a battery pack, remote sensors, and various attachment
plates, and will fit in a ruggedized suitcase. The electronics module is 3" x
2" x 0.5" and contains the sensors, digital signal processor, and 512K bytes
of EEPROM for storage of data. Three axis continuous vibration data will be
recorded and stored in a power spectral density format. The user could
choose to use either the sensors internal to the electronics module or the
remote sensors. The debrief computer is used to tailor the electronics
module to the specific needs of the user and to graphically display the
collected data. Some potential uses for the collected data are: identification
of environmental design envelopes, determination of loads and boundary

conditions for input into simulation techniques, and characterization of
failures in specific systems.

Passive Environmental Recorders

● High and Low Temperature Strip Recorders: Strip recorders offer a


sequence of chemical mixtures deposited as small spots on a paper. Each
spot changes color at a predetermined temperature showing that a given
value has been exceeded.

● Temperature Markers: Markers are available to measure temperature


extremes. The marking material either melts or changes color at
predetermined temperatures.

● Humidity Strip Recorders: Using crystals that dissolve at different humidity


levels, a strip recorder is available that indicates if a humidity level has been
surpassed.

● Shock Indicators: Single value indicators that tell when an impact


acceleration exceeds the set point along a single axis.

Application, Active Devices

● Avionic Environmental Stress Recording

● Transportation Stress Recording

● Flight Development Testing

● Warranty Verification

● Aircraft: A-10, A-7, B-1, and EF-111

For More Information:

For more information on the active TSMD devices under development at Rome
Laboratory, write:

Rome Laboratory/ERS
Attn: TSMD
525 Brooks Rd.
Griffiss AFB, NY 13441-4505

Appendix 1
Operational Parameter Translation


Because field operation introduces factors which are uncontrollable by contractors


(e.g. maintenance policy), "contract" reliability is not the same as "operational"
reliability. For that reason, it is often necessary to convert, or translate, from
"contract" to "operational" terms and vice versa. This appendix is based on RADC-
TR-89-299 (Vol I & II), "Reliability and Maintainability Operational Parameter
Translation II" which developed models for the two most common environments,
ground and airborne. The translation models are summarized in Table 1-1.

Definitions
● Mean-Time-Between-Failure-Field (MTBFF) includes inherent maintenance
events which are caused by design or manufacturing defects.

MTBFF = (Total Operating Hours or Flight Hours) / (Inherent Maintenance Events)

● Mean-Time-Between-Maintenance-Field (MTBMF) consists of inherent,


induced and no defect found maintenance actions.

MTBMF = (Total Operating Hours or Flight Hours) / (Total Maintenance Events)

● Mean-Time-Between Removals-Field (MTBRF) includes all removals of the


equipment from the system.

MTBRF = (Total Operating Hours or Flight Hours) / (Total Equipment Removals)

● θP is the predicted MTBF (i.e., MIL-HDBK-217).
● θD is the demonstrated MTBF (i.e., MIL-HDBK-781).
● RF is the equipment type or application constant.
● C is the number of power on-off cycles per mission.
● D is the mission duration.

Equipment Operating Hour to Flight Hour Conversion


For Airborne Categories - MTBFF represents the Mean-Time-Between-Failure in
Equipment Operating Hours. To obtain MTBFF in terms of flight hours (for both
fighter and transport models), divide MTBFF by 1.2 for all categories except
counter measures. Divide by .8 for counter measure equipment.


Example
Estimate the MTBM of a fighter radar given a mission length of 1.5 hours, two radar
shutdowns per mission and a predicted radar MTBF of 420 hours. Using Model 1B
in Table 1-1,

MTBMF = θP^.64 RF (C/D)^-.57 = (420 hr.)^.64 (1.7) (2 cyc./1.5 hr.)^-.57

MTBMF = 69 equipment operating hours between maintenance.

Since this is below the dependent variable lower bound of (.24)(420) = 101 hours,
the calculated MTBMF is correct. Since this equipment is often turned on for pre
and post flight checkout, the number of flight hours between maintenance is
somewhat less than the actual equipment operating hours. The number of flight
hours between maintenance is approximately 69/1.2 = 58 hours.
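
The same arithmetic is easily scripted. The Python fragment below is a sketch of the
Model 1B calculation exactly as it is written out in the example above (the model
form and constants are taken from that worked example):

# Sketch: fighter radar MTBM translation using the Model 1B form shown above.
theta_p = 420.0       # predicted MTBF, hours (MIL-HDBK-217 prediction)
r_f = 1.7             # equipment type / application constant used in the example
cycles = 2            # power on-off cycles per mission (C)
duration = 1.5        # mission duration, hours (D)

mtbm_f = theta_p ** 0.64 * r_f * (cycles / duration) ** -0.57
print(f"MTBM_F ~ {mtbm_f:.0f} equipment operating hours")          # ~69 h

# Dependent variable lower bound quoted in the text: 0.24 * theta_p
print(f"lower bound check: {0.24 * theta_p:.0f} h")                 # ~101 h

# Equipment operating hours -> flight hours: divide by 1.2 (non-ECM equipment)
flight_hours = mtbm_f / 1.2        # the text rounds 69/1.2 to roughly 58 flight hours
print(f"~{flight_hours:.0f} flight hours between maintenance")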


Table 1-1: Reliability Translation Models

Appendix 2
Example R&M Requirement Paragraphs


Example Reliability Requirements for the System Specification

R.1 Reliability Requirements

Guidance: The use of the latest versions and notices of all military specifications,
standards and handbooks should be specified. See Toolkit Section R,
"Requirements" for task tailoring guidance. When specifying an MTBF, it should be
the "upper test MTBF (θ0)" as defined in MIL-STD-781. When specifying MTBCF,
the maintenance concept needs to be clearly defined for purposes of calculating
reliability of redundant configurations with periodic maintenance. If immediate
maintenance will be performed upon failure of a redundant element then specifying
the system MTTR is sufficient. If maintenance is deferred when a redundant
element fails, then the length of this deferral period should be specified.

R.1.1 Mission Reliability: The (system name) shall achieve a mean-time-


between-critical-failure (MTBCF) of _____ hours under the worst case
environmental conditions specified herein. MTBCF is defined as the total uptime
divided by the number of critical failures that degrade full mission capability (FMC).
FMC is that level of performance which allows the system to perform its primary
mission without degradation below minimum levels stated herein. For purposes of
analyzing redundant configurations, calculation of MTBCF shall reflect the expected
field maintenance concept.

R.1.2 Basic Reliability: The (system name) shall achieve a series configuration
mean-time-between-failure (MTBF) of _____ hours under the worst case
environmental conditions specified herein. The series configuration MTBF is
defined as the total system uptime divided by the total number of part failures.

R.1.3 Reliability Configuration: The reliability requirements apply for the


delivered configuration of the system. Should differences exist between this
configuration and a potential production configuration, all analyses shall address
the reliability effects of the differences.

Guidance: If equipment or system performance criteria are not stated elsewhere


in the statement of work or specification, the following paragraph must be included.

R.1.4 Reliability Performance Criteria: The minimum performance criteria that


shall be met for full mission capability of the (system name) system is defined as
(specify full mission capability).

R.1.5 Reliability Design Requirements: Design criteria and guidelines shall be


developed by the contractor for use by system designers as a means of achieving
the required levels of reliability.

Guidance: For more critical applications, Level II or I, derating should be specified.


See Topic D1 for derating level determination. Baseline thermal requirements such
as ambient and extreme temperatures, pressure extremes, mission profile and


duration, temperature/pressure rates of change and maximum allowable


temperature rise should be specified.

R.1.5.1 Thermal Management and Derating: Thermal management (design,


analysis and verification) shall be performed by the contractor such that the
reliability quantitative requirements are assured. RADC-TR-82-172, "RADC
Thermal Guide for Reliability Engineers," shall be used as a guide. Derating criteria
shall be established for each design such that all parts used in the system are
derated to achieve reliability requirements. As a minimum, Level 3 of AFSC
Pamphlet 800-27 "Part Derating Guidelines" shall be used for this design.

Guidance: If the system is for airborne use, MIL-E-5400 must be referenced in


place of MIL-E-4158 (ground equipment).

R.1.5.2 Parts Selection: All parts employed in the manufacture of the system shall
be selected from the government generated and maintained Program Parts
Selection List (PPSL), Electrical/Electronic Parts and the PPSL for Mechanical
Parts. Parts not covered by the above referenced PPSLs shall be selected in
accordance with MIL-E-4158 and MIL-STD-454 and require approval by the
procuring activity.

a. Microcircuits. Military standard microcircuits must be selected in accordance


with Requirement 64 of MIL-STD-454. All non-JAN devices shall be tested in
accordance with the Class B screening requirements of MIL-STD-883,
Method 5004 and 5008, as applicable. All device types shall be tested to the
quality conformance requirements of MIL-STD-883, Method 5005 and 5008
Class B.

b. Semiconductors. Military standard semiconductors must be selected in


accordance with Requirement 30 of MIL-STD-454. All non-JANTX devices
shall be screened in accordance with Table II of MIL-S-19500. All device
types shall be tested to the Group A, Table III and Group B, Table IV quality
conformance requirements of MIL-S-19500, as a minimum. The following
device restrictions apply:

(1 ) Only solid glass metallurgically bonded axial lead diodes and rectifiers
shall be used.
(2) TO-5 packages shall be limited to the solid metal header type.
(3) All semiconductor device junctions must be protected and no organic
or desiccant materials shall be included in the package.
(4) Devices using aluminum wire shall not use thermocompression wedge
bonding.
(5) Aluminum TO-3 packages shall not be used.

(6) Germanium devices shall not be used.


c. Electrostatic Sensitive Parts. Certain types of integrated circuits are


susceptible to electrostatic discharge damage. Appropriate discharge
procedures are necessary when handling, storing or testing these parts and
design selections of desired devices should include a consideration of the
effectiveness of the input or other protective elements included in the device
design.

R.1.6 Reliability Test and Evaluation: The quantitative reliability levels required
by paragraph (R.1) shall be verified by the following:

R.1.6.1 The final approved reliability analyses for the various configurations and
worst case environments shall demonstrate compliance with the quantitative
requirements cited in paragraph (R.1).

R.1.6.2 The contractor shall demonstrate that the reliability (mission and/or basic)
requirements have been achieved by conducting a controlled reliability test in
accordance with MIL-HDBK-781 Test Plan (specify MIL-HDBK-781 Test Plan).
(See Topic T5 and Appendix 5 for Plan Selection). The lower test (MTBCF and/or
MTBF) to be demonstrated shall be ____ hours tested in a ____ environment.
Relevant failures are defined as any malfunction which causes loss or degradation
below the performance level specified for the (equipment/system) and can be
attributed to design defect, manufacturing defect, workmanship defect, adjustment,
deterioration or unknown causes. Nonrelevant failures are failures caused by
installation damage, external test equipment failures, mishandling, procedural
errors, dependent failures and external prime power failures.

Guidance: A growth test may apply if the next phase is production. If one is
required, it's appropriate to require a higher risk (e.g., 30 percent) demonstration
test. See RADC-TR-84-20 "Reliability Growth Testing Effectiveness," Topic T4 and
Appendix 6 for further guidance.

R.1.6.3 The contractor shall conduct a controlled fixed length dedicated reliability
growth test of ____ hours using MIL-HDBK-189 as a guide. The test shall be at the
same environmental conditions as the RQT. Although there is no pass/fail criteria,
the contractor shall track the reliability growth process to ensure improvement is
taking place by effective implementation of corrective action.

Guidance: See Electronic Systems Center Report TR-85-148, "Derated


Application of Parts for ESC Systems Development" (Attachment 2) for a
recommended derating verification procedure.

R.1.6.4 The contractor shall verify the thermal and electrical stresses on ____
percent (3 to 5 percent sample is typical) of the semiconductor and microcircuit
parts by measurement while the equipment is operated at the worst case
environment, duty cycle and load. The results of the measurements shall be
compared to the derating requirements and the verification shall be considered
successful if measured values are less than specified derated levels.


Example Reliability Requirements for the Statement of Work

R.2 Reliability Program Tasks


R.2.1 Reliability Program: The contractor shall conduct a reliability program in
accordance with MIL-STD-785 including the following tasks as a minimum to
assure reliability consistent with state-of-the-art.

R.2.2 Subcontractor Control: The contractor shall establish management


procedures and design controls including allocation of requirements in accordance
with Task 102 of MIL-STD-785 which will insure that products obtained from
subcontractors will meet reliability requirements.

R.2.3 Reliability Design Reviews: The status of the reliability design shall be
addressed at all internal and external design reviews. Task 103 of MIL-STD-785
shall be used as a guide.

R.2.4 Failure Reporting, Analysis and Corrective Action System (FRACAS):


The contractor shall establish, conduct and document a closed loop failure
reporting, analysis and corrective action system for all failures occurring during
system debugging, checkout, engineering tests and contractor maintenance.
Failure reports shall be retained by the contractor and failure summaries provided
to the procuring activity thirty days after start of system engineering test and
evaluation, and updated monthly thereafter. Failure reporting shall be to the piece
part level.

R.2.5 Reliability Modeling: The contractor shall develop reliability models for all
system configurations in accordance with Task 201 of MIL-STD-785 and Task 101
and 201 of MIL-STD-756. The specific mission parameters and operational
constraints that must be considered are: ____ (or reference applicable SOW and
specification paragraphs).

R.2.6 Reliability Allocations: Reliability requirements shall be allocated to the


LRU level in accordance with Task 202 of MIL-STD-785.

R.2.7 Reliability Prediction: The contractor shall perform reliability predictions in


accordance with (Task 201 (basic reliability)) and/or (Task 202 (mission reliability))
of MIL-STD-756. The specific technique to be used shall be method 2005 parts
stress analysis of MIL-STD-756. Electronic part failure rates shall be used from
MIL-HDBK-217 and nonelectronic part failure rates from RADC-TR-85-194. All
other sources of part failure rate data shall require review and approval of the
procuring activity prior to use. A ____ environmental factor, worst case operating
conditions and duty cycles shall be used as a baseline for developing part failure
rates. The results of the thermal analysis shall be included and shall provide the
temperature basis for the predicted reliability. The part quality grade adjustment
factor used shall be representative of the quality of the parts selected and applied
for this system procurement.

R.2.8 Parts Program: The contractor shall establish and maintain a parts control
program in accordance with Task 207 of MIL-STD-785 and Procedure 1 of MIL-
STD-965. Requests for use of parts not on the government generated and


maintained PPSL shall be submitted in accordance with the CDRL. Amendments


to the PPSL as a result of such requests, after procuring activity approval, shall be
supplied to the contractor by the Program Contracting Officer not more often than
once every 30 days.

Guidance: The level of detail of the FMECA must be specified (e.g., part, circuit
card, etc.). The closer the program is to full scale engineering development, the
greater the level of detail needed.

R.2.9 Failure Modes, Effects and Criticality Analysis (FMECA): The contractor
shall perform a limited FMECA to the ____ level to identify design weaknesses and
deficiencies. Potential failure modes shall be identified and evaluated to determine
their effects on mission success. Critical failures shall be investigated to determine
possible design improvements and elimination means. MIL-STD-785, Task 204
shall be used as a guide.

Guidance: Reliability critical items should be required where it's anticipated that
the design will make use of custom VLSI, hybrids, microwave hybrids and other
high technology nonstandard devices. See Topic D5 for a critical item checklist.

R.2.10 Reliability Critical Items: Task number 208 of MIL-STD-785 applies. The
contractor shall prepare a list of critical items and present this list at all formal
reviews. Critical items shall include: items having limited operating life or shelf life,
items difficult to procure or manufacture, items with unsatisfactory operating history,
items of new technology with little reliability data, single source items, parts
exceeding derating limits, and items causing single points of failure.

R.2.11 Effects of Storage, Handling, Transportation: The contractor shall


analyze the effects of storage, handling and transportation on the system reliability.

R.2.12 Reliability Qualification Test: The contractor shall demonstrate


compliance with the quantitative reliability requirements in accordance with MIL-
STD-785 Task 302. Test plans and reports shall be developed and submitted.

R.2.13 Reliability Development/Growth Test: Test plans that show data tracking
growth, testing methods and data collection procedures shall be developed and
submitted for the Growth Test Program.

Guidance: When specifying ESS, the level (circuit card, module, assembly, etc.)
at which the screening is to be performed must be specified. Different levels of
screening should be performed at different hardware assembly levels. See R&M
2000 guidelines in Section T for recommended screening as a function of hardware
assembly level.

R.2.14 Environmental Stress Screening: Task number 301 of MIL-STD-785


applies. A burn-in test of ____ (specify the number of hours or temperature cycles)
at ____ temperature and ____ vibration level extremes shall be performed at the
____ level. At least ____ (hours/cycles) of failure free operation shall be
experienced before termination of the burn-in test for each unit. DOD-HDBK-344,
ESS of Electronic Equipment, shall be used as a guide.


Example Maintainability Requirements for the System Specification

M.1 Maintainability Requirements


M.1.1 Maintainability Quantitative Requirements: The (system name) shall be
designed to achieve a mean-corrective-maintenance-time (MCT) of no greater than
____ minutes and a maximum-corrective maintenance-time (MMAXCT) of no
greater than ____ minutes (95th percentile) at the (specify organization,
intermediate or depot level), when repaired by an Air Force maintenance technician
of skill level ____ or equivalent.

Guidance: Preventive maintenance requirements are considered an option to be


implemented when items are used in the design that are subject to wearout,
alignment, adjustment or have fault tolerance that must be renewed. If the option is
exercised, then attach the paragraph below to M.1.1.

M.1.2 Preventive maintenance shall not exceed ____ minutes for each period and
the period shall not be more frequent than every ____.

M.1.3 The mean time to restore system (MTTRS) following a system failure shall
not be greater than ____. MTTRS includes all corrective maintenance time and
logistics delay time.

M.1.4 The mean maintenance manhours (M-MMH) shall not be greater than ____
hours per year. M-MMH is defined as follows: (operating hours per year ÷ system
MTBF) x (system MTTR) x (number of maintenance personnel required for corrective
action).

Guidance: The above definition of M-MMH assumes that a repair is made when each
failure occurs. If a delayed maintenance concept is anticipated through the use of
fault tolerance, then MTBCF should be used (instead of MTBF) in the above
definition. If only a limited number of site visits is allowed, then this value should
be used in the above definition in place of "operating hours per year ÷ system
MTBF."
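
A minimal sketch of the M-MMH arithmetic, using purely illustrative numbers (none of
the values below come from this document), may be helpful when checking candidate
requirement values:

# Sketch: mean maintenance manhours (M-MMH) per the definition in M.1.4.
def mean_maint_manhours(op_hours_per_year, mtbf, mttr, maint_personnel):
    """(operating hours per year / MTBF) * MTTR * maintenance personnel."""
    return (op_hours_per_year / mtbf) * mttr * maint_personnel

# Illustrative values only (assumptions, not taken from the specification text).
print(mean_maint_manhours(op_hours_per_year=4000, mtbf=500,
                          mttr=0.75, maint_personnel=2))      # 12.0 manhours per year

# Delayed maintenance variant: substitute MTBCF for MTBF.
# Limited site visit variant: replace (op_hours_per_year / mtbf) with the
# allowed number of site visits per year.
def mean_maint_manhours_site_visits(site_visits_per_year, mttr, maint_personnel):
    return site_visits_per_year * mttr * maint_personnel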

M.1.5 Maintainability Design: The system design shall provide modularity,


accessibility, built-in-test (BIT) and other maintainability features to provide
installation simplicity, ease of maintenance and the attainment of the maintainability
requirements (both corrective and preventive). Line Replaceable Units (LRUs) such
as printed circuit boards or assemblies shall be replaceable without cutting or
unsoldering connections. All plug-in modules shall be mechanically keyed/coded to
prevent insertion of a wrong module.


M.1.5.1 Testability: The system design shall be partitioned based upon the ability
to isolate faults. Each item shall have sufficient test points for the measurement or
stimulus of internal circuit nodes to achieve the capability of detecting 100 percent
of all permanent failures using full resources. Automatic monitoring and diagnostic
capabilities shall be provided to show the system status (operable, inoperable,
degraded) and to detect 90 percent of all permanent failures. The false alarm rate
due to self-test circuitry shall be less than 1 percent of the series failure rate. Self-
test circuitry shall be designed to correctly isolate the fault to a group of four (4)
LRUs, or less, 95 percent of the time.

M.1.6 Maintainability Test and Evaluation: Maintainability requirements for the


(system name) shall be verified by the following:

M.1.6.1 Maintainability Analysis. The results of the final maintainability prediction


shall be compared to the quantitative requirements and achievement determined if
the predicted parameters are less than or equal to the required parameters.

M.1.6.2 Maintainability Demonstration. A maintainability demonstration shall be


performed in accordance with Test Method ____ (Test Method 9 is commonly
specified, see Appendix 7 for further guidance) of MIL-STD-471. A minimum
sample size of 50 tasks shall be demonstrated. The consumer's risk for the
maintainability demonstration shall be equal to 10 percent. Fault detection and
isolation requirements shall be demonstrated as part of the maintainability test.

M.1.6.3 Testability Demonstration. A testability demonstration shall be performed
on the (system name) in accordance with Notice 2 of MIL-STD-471A.

Example Maintainability Requirements for the Statement of Work

M.2 Maintainability Program Tasks


M.2.1 Maintainability Program: The contractor shall conduct a maintainability
program in accordance with MIL-STD-470 appropriately tailored for full scale
development including the following tasks as a minimum to assure maintainability
consistent with the requirements.

M.2.2 Testability Program: Testability characteristics and parameters are related
to, and shall be treated as part of the maintainability program. The contractor shall
conduct a testability program in accordance with MIL-STD-2165 appropriately
tailored for FSD including the following tasks as a minimum to assure testability
consistent with the requirements.

M.2.3 Maintainability Design Review: The status of the maintainability/
testability design shall be addressed at all internal and external design reviews.

M.2.4 Subcontractor Control: The contractor shall specify maintainability
requirements to all subcontractors to ensure that (equipment/system name)
requirements of this program are attained. Task 102 of MIL-STD-470 shall be used
as a guide.

M.2.5 Maintainability/Testability Modeling: The contractor shall establish a
maintainability model using MIL-STD-470, Task 201 which reflects the construction
and configuration of the FSD design. Linkages with MIL-STD-2165, Task 201 to
relate testability/diagnostic design characteristics to maintainability parameters
shall be provided.

M.2.6 Maintainability Prediction: The contractor shall predict maintainability
figures of merit using Procedure V of MIL-HDBK-472 (Notice 1) at the on-equipment
level. MIL-STD-470, Task 203 shall be used as a guide.

M.2.7 Maintainability/Testability Design Criteria: The contractor shall develop
design criteria to be used in the design process to achieve the specified
maintainability and testability requirements. In addition, a design analysis showing
failure modes, failure rates, ease of access, modularity and the capability to
achieve the fault detection/isolation requirement shall be provided. RADC-TR-74-
308 "Maintainability Engineering Design Handbook," RADC-TR-82-189 "RADC
Testability Notebook," Task 202 of MIL-STD-2165 and Task 206 of MIL-STD-470A
shall be used as a guide.

Guidance: Maintainability demonstration reports are only necessary if a
maintainability test is specified in the maintainability specification requirements.

M.2.8 Maintainability/Testability Demonstration: A test plan and test report
shall be submitted by the contractor. Task 301 of MIL-STD-470 and Task 301 of
MIL-STD-2165 shall be used as guides.

Appendix 3
R&M Software Tools

Several hundred R&M software tools exist throughout Government, industry and
academia. Table 3-1 lists software tool types with associated supplier reference
numbers. The numbered list of suppliers follows. The list includes addresses and
telephone numbers confirmed to be accurate as of Aug 92. Rome Laboratory does
not in any way endorse or encourage the use of any specific supplier's tools listed.
Potential software tool users should thoroughly research any claims made by
software suppliers and carefully study their own needs before obtaining any
software. Further information on R&M software tools can be obtained in the reports
referenced below. The reports contain data relative to software tool's hardware
requirements, claimed capabilities, interface capabilities, demonstration package
availability and price.

R&M Software Tool References


RL-TR-91-87 "A Survey of Reliability, Maintainability, Supportability and
Testability Software Tools"

RMST 91 "R&M Software Tools," Reliability Analysis Center

Table 3-1: Software Tool Type/Supplier Reference Number Listing

Software Tool Type                                      Supplier Reference Numbers
1.  Reliability Prediction
    1a. Component Prediction Tools                      1,5,9,10,15,16,17,19,20,21,27,28,32,34,36,38,39
        (e.g. MIL-HDBK-217, Bellcore, etc.)
    1b. System Modeling (e.g. Markov, Monte Carlo,      1,5,6,17,19,20,22,32,33,35,36
        Availability)
    1c. Mechanical Component Data                       15,27,31
2.  Failure Mode and Effects Analysis (FMEA)            1,5,19,20,21,27
3.  Fault Tree Analysis                                 1,5,14,16,17,18,21,22,32,33
4.  Reliability Testing (e.g. MIL-HDBK-781, ESS, etc.)  13,16,18,25,32
5.  Reliability Management                              32,35
6.  Maintainability Prediction                          5,10,17,19,21,27,32
7.  Testability Analysis                                2,3,4,5,7,19,21,23,24,30,32
8.  Thermal Analysis                                    26,32,38
9.  Finite Element Analysis                             8,26,32,37
10. Statistical Analysis (e.g. Weibull)                 11,12,16,25,29,40,41
11. Sneak Circuit Analysis                              32,35
12. Design of Experiments                               25
13. Logistics                                           1,5,17,20,21,38

R&M Software Tool Supplier Listing

1. Advanced Logistics Developments, PO Box 232, College Point NY 11356, (718)463-6939
2. ARINC Research Corp, 2551 Riva Road, Annapolis MD 21401, (301)266-4650
3. Automated Technology Systems Corp, 25 Davids Drive, Hauppauge NY 11788, (516)231-7777
4. CINA, Inc., PO Box 4872, Mountain View CA 94040, (415)940-1723
5. COSMIC, 382 East Broad St, Athens GA 30602, (404)542-3265
6. Decision Systems Assoc, 746 Crompton, Redwood City CA 94061, (415)369-0501
7. DETEX Systems, Inc., 1574 N. Batavia, Suite 4, Orange CA 92667, (714)637-9325
8. Engineering Mechanics Research Corp, PO Box 696, Troy MI 48099, (313)689-0077
9. Evaluation Associates Inc., GSB Building, 1 Belmont Ave, Bala Cynwyd PA 19004, (215)667-3761
10. Evaluation Software, 2310 Claassen Ranch Lane, Paso Robles CA 93446, (805)239-4516
11. Fulton Findings, 1251 W. Sepulveda Blvd #800, Torrance CA 90502, (310)548-6358
12. G.R. Technologies (Pister Grp), PO Box 38042, 550 Eglinton Ave, West, Toronto Ontario, M5N 3A8, (416)886-9470
13. H&H Servicco, PO Box 9340, North St. Paul MN 55109, (612)777-0152
14. Idaho National Engineering Lab, EG&G Idaho, Inc., Idaho Falls ID 83415, (208)526-9592
15. Innovative Software Designs, Inc., Two English Elm Court, Baltimore MD 21228, (410)788-9000
16. Innovative Timely Solutions, 6401 Lakerest Court, Raleigh NC 27612, (919)846-7705
17. Item Software Ltd, 3031 E. LaJolla St, Anaheim CA 92806, (714)666-8000
18. JBF Associates, 1000 Technology Park Ctr, Knoxville TN 37932, (615)966-5232
19. JORI Corp, 4619 Fontana St, Orlando FL 32807, (407)658-8337
20. Logistic Engineering Assoc, 2700 Navajo Rd, Suite A, El Cajon CA 92020, (619)697-1238

21. Management Sciences Inc., 6022 Constitution Ave, N.E., Albuquerque NM 87110, (505)255-8611
22. Energy Science & Technology Software Ctr, PO Box 1020, Oak Ridge TN 37831, (615)576-2606
23. Naval Air Warfare Ctr/AD, ATE Software Center, Code PD22, Lakehurst NJ 08733, (908)323-2414
24. NAVSEA, Code 04 D52, Washington DC 20362, (703)602-2765
25. Nutek, Inc., 30400 Telegraph Rd, Suite #380, Birmingham MI 48010, (313)642-4560
26. Pacific Numerix Corporation, 1200 Prospect St, Suite 300, La Jolla CA 92037, (619)587-0500
27. Powertronic Systems, Inc., 13700 Chef Menteur Hwy, New Orleans LA 70129, (504)254-0383
28. Prompt Software Co, 393 Englert Court, San Jose CA 95133, (408)258-8800
29. Pritsker Corporation, 8910 Perdue Rd, Suite 500, Indianapolis IN 46286, (317)879-1011
30. RACAL-REDAC, 1000 Wyckoff Ave, Mahwah NJ 07430, (201)848-8000
31. Reliability Analysis Center (RAC), PO Box 4700, 201 Mill St, Rome NY 13440, (315)337-0900
32. Rome Laboratory/ERS, 525 Brooks Rd, Griffiss AFB NY 13441-4505, (315)330-4205
33. SAIC, 5150 El Camino Real, Suite C-31, Los Altos CA 94022, (415)960-5946
34. Sendrian Resources Corp (SRC), 42 San Lucas Ave, Newbury Lake CA 91320, (805)499-7991
35. SoHaR Incorporated, 8421 Wilshire Blvd, Suite 201, Beverly Hills CA 90211, (213)653-4717
36. Spentech Company, 2627 Greyling Drive, San Diego CA 92123, (619)268-3742
37. Swanson Analysis Systems Inc., Johnson Rd, PO Box 65, Houston PA 15342, (412)746-3304
38. Systems Effectiveness Assoc, 20 Vernon Street, Norwood MA 02062, (617)762-9252
39. T-Cubed Systems, Inc., 31220 La Baya Dr, Suite 110, Westlake Village CA 91362, (818)991-0057
40. Team Graph Papers, Box 25, Tamworth NH 03886, (603)323-8843
41. Teque, Inc., 11686 N. Daniels Dr., Germantown WI 53022, (414)255-7210

Appendix 4
Example Design Guidelines

This Appendix contains an example set of design guidelines structured to include
verification methods. These guidelines are an example only and do not apply to all
situations.

a. Thermal Design
(1) Integrated Circuit Junction Temperatures

Design Guidelines: The design of the environmental cooling system (ECS) should
be capable of maintaining an average integrated circuit junction temperature of
55°C or less under typical operating conditions. Under worst case steady state
conditions, components should operate at least 50°C below their rated maximum
junction temperature.

Analysis Recommendation: Thermal finite element analysis should be performed
to project operating temperatures under specified environmental conditions. The
analysis should consider ECS performance, environmental impacts, and system
thermal design. Average junction temperatures should include all integrated
circuits within the system. Average temperature rise should include all components
on an individual module.

Test Recommendations: Thermally instrumented observations should be made
of components under specified environmental conditions. Instrumentation can be
by direct contact measurement or by infrared photography.
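
One way to screen a design against the two junction temperature criteria above is the
simple steady-state estimate Tj = Tambient + theta_ja x P sketched below; the parts
list, dissipations, thermal resistances and air temperatures are hypothetical, and a
thermal finite element model of the kind recommended above would supersede this check.

    # Junction temperature screen: Tj = Tambient + theta_ja * P (steady state).
    AVG_LIMIT_C = 55.0          # a(1): average Tj limit under typical conditions
    MARGIN_TO_RATED_C = 50.0    # a(1): required worst-case margin below rated Tj(max)

    parts = [
        # (reference designator, power W, theta_ja degC/W, rated maximum Tj degC)
        ("U1", 0.50, 40.0, 150.0),
        ("U2", 1.20, 30.0, 125.0),
        ("U3", 0.25, 60.0, 150.0),
    ]
    ambient_typ, ambient_worst = 40.0, 71.0   # assumed cooling-air temperatures, degC

    tj_typ = [ambient_typ + theta * p for _, p, theta, _ in parts]
    print("Average junction temperature: %.1f C (limit %.0f C)"
          % (sum(tj_typ) / len(tj_typ), AVG_LIMIT_C))

    for ref, p, theta, tj_rated in parts:
        tj_worst = ambient_worst + theta * p
        if tj_worst > tj_rated - MARGIN_TO_RATED_C:
            print("%s: worst-case Tj %.1f C violates the %.0f C margin to rated %.0f C"
                  % (ref, tj_worst, MARGIN_TO_RATED_C, tj_rated))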

(2) Thermal Gradients

Design Guideline: The maximum allowable temperature rise from any junction to
the nearest heat sink should be 25°C. The average temperature rise from
integrated circuit junctions to the heat sink should be no greater than 15°C. To
minimize gradients, more complex and power-intensive devices should be placed
to minimize their operating temperature.

Analysis Recommendation: Automated design tools that perform component
placement should be programmed to produce this result. A thermal finite element
analysis should be used to evaluate the projected thermal gradient under the
specified environmental conditions.

Test Recommendation: Thermally instrumented observation of components
under specified environmental conditions. Instrumentation can be by direct contact
measurement or by infrared photography.

(3) Thermal Expansion Characteristics

Design Guideline: Component and board materials should be selected with
compatible thermal coefficients of expansion (TCE). Additionally, coldplate
materials should be selected for TCE compatibility with the attached printed wiring
board. TCE mismatch results in warpage of the laminated assembly, which can
reduce module clearances and stress circuit board component leads and solder
joints.

Analysis Recommendation: A finite element analysis should be performed to
identify the stress patterns in the solder joints attaching the components to the
board. TCE compatibility should be evaluated for the components, circuit board,
and coldplate.

Test Recommendation: Environmental stress tests should be utilized in the
development phase to verify the design analysis and environmental stress
screening should be used in production to ensure consistency throughout the
production cycle.

(4) Heat Transport Media

Design Guideline: The design should use a thermal conduction medium that is
integral to the mechanical design of the board or module. Heat pipes, metal rails or
internal planes are examples of thermally conductive media. The unit should meet
temperature design requirements by cooling through the integral thermal
conduction medium without depending on any other heat loss.

Analysis Recommendation: Thermal finite element analysis should be used to
project heat flow under specified environmental conditions. Modules employing
heat pipes for cooling should meet operating temperature requirements when the
module heat sink is inclined at an angle of 90 degrees from the horizontal.

Test Recommendation: Thermally instrumented observation should be made of
components under specified environmental conditions. Instrumentation can be by
direct contact measurement or by infrared photography.

(5) Component Attachment

Design Guideline: Surface contact should be maximized between the component
and the heat transport media. This can be achieved by direct pressure thermal
compounds or solder. The technique used should be reversible for component
removal during board repairs such that damage is not induced to nearby devices. If
a thermal compound is used, it should not migrate or react with other components
during testing or service use.

Analysis Recommendation: Specialized stress analyses should be performed to
quantify thermal and mechanical stresses involved in removing the component from
the board after production installation.

Test Recommendation: Demonstration of repair techniques should be performed
early in the development phase.

(6) Thermal Cycling

Design Guideline: The unit should be designed to dampen its thermal response
to the thermal excursions required by the specification. This can be achieved by
using a large thermal mass or by using the cooling medium to insulate the unit from
its environment to the maximum extent possible.

Analysis Recommendation: Thermal finite element analysis to project heat flow
and temperature excursions under specified environmental conditions.

Test Recommendation: Thermally instrumented observation of components
under specified environmental excursions. Instrumentation can be by direct contact
measurement or by infrared photography.

b. Testability Design
(1) Bottom-up Fault Reporting

Design Guideline: Incorporate autonomous self-testing at the lowest levels that
are technically feasible. Utilize positive indication to report chip, module and
subsystem status. The design should not depend upon external stimuli to perform
fault detection or isolation to a replaceable element.

Analysis Recommendation: As soon as automated testability analysis tools
become available, they should be used for the applicable engineering design
workstations.

Test Recommendation: Hardware demonstration should be conducted early in
the development phase to verify simulation results through the insertion of faults
using the currently available version of the operational program, firmware and
microcode.

(2) Fault Logging

Design Guideline: Modules should contain a non-volatile fault log that can be
accessed by a system maintenance controller or by test equipment. The use of the
fault log will improve reliability by reducing depot "Cannot Duplicates." Failure of
the fault log should not cause a critical system failure, but should be observable to
the maintenance controller.

Analysis Recommendation: Compliance should be verified by inspection.
Operation should be verified by simulation.

Test Recommendation: Not applicable.

(3) Start-up Built-in-Test (BIT)

Design Guideline: The module should execute a BIT internal diagnostic routine
immediately after power-up or receipt of an "Execute BIT" command. BIT should
provide a complete functional test of the module to the maximum extent possible
without transmitting any signals on external interface media. BIT should provide a
complete functional test of the module and should include:

(1) Verification of internal data paths

(2) Verify station physical address

(3) Verify message identification process from system

(4) Verify proper functioning of all internal memory and other components

Any failure encountered during execution of BIT should be retried at least once to
confirm the response. Any confirmed failures should prevent the module from
becoming enabled. A failed module should respond only to "RESET," "Execute
BIT," and "Report Status" commands.

Analysis Recommendation: System design simulation tools should be used to
verify operation of the BIT. These tools should include fault simulations as well as
operational simulation.

Test Recommendation: Hardware demonstration should be conducted early in
the development phase to verify simulation results through insertion of faults using
currently available versions of the operational program, firmware and microcode.

(4) Background Diagnostics

Design Guideline: During normal operation, the module should continuously
monitor itself through a background diagnostic test. The background diagnostic
should provide coverage to the maximum extent possible without interfering with
normal station operation. Failure of any test in the background diagnostic should
cause the module to re-execute the failed test to screen out transient anomalous
responses. If the failure is confirmed, the module should become immediately
disabled.

Analysis Recommendation: System design simulation tools should be used to
verify operation of the BIT. These tools should include fault simulations as well as
operational simulation.

Test Recommendation: Hardware demonstration should be conducted early in
the development phase to verify simulation results through insertion of faults using
currently available versions of the operational program, firmware and microcode.
Hardware demonstration may be performed by physically inserting faults in a
module or by instrumenting a module to allow insertion of faults through external
methods.

c. Mechanical Packaging Design


(1) Mechanical Insertion/Extraction-Induced Stresses

Design Guideline: Each module should withstand, without damage or separation,
a force of at least 100 pounds on insertion and four ounces per
contact on extraction. Additionally, the backplane for the assembly should
withstand the same forces at all module positions applied repeatedly in any
sequence with any combination of modules present or missing.

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: The total computed force should be applied to simulate
module insertion and extraction. The force should be applied in 2 seconds and
maintained for 15 seconds.

(2) Insertion/Extraction Durability

Design Guideline: Modules should be capable of withstanding 500 cycles of
mating and unmating with no degradation of module performance. The module
should also be capable of withstanding 500 cycles of lateral displacement to
simulate the use of thermal clamping devices. The backplane of the module's host
assembly should be capable of withstanding 500 of the same cycles on each of its
module positions.

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: Each module/backplane position should be subjected to
500 cycles of insertion/extraction. The maximum specified insertion and extraction
forces should be applied in 2 seconds and maintained for 15 seconds. Five
hundred lateral displacement cycles should be applied to the module.

(3) Mechanical Vibration-Induced Stresses

Design Guideline: The larger components are more susceptible to mechanical
stresses because they have a larger mass and because they are more constrained
by the high number of pin-outs that act as attachment points. Module stiffness
should be maximized to prevent board flexing resulting in stress fractures at the
solder joints or component leadframe.

Analysis Recommendation: Mechanical finite element analysis should be
performed to identify module characteristics throughout the specified vibrational
environment.

Test Recommendation: Developmental units should be specially instrumented
with accelerometers early in the development program. These units could use
dummy masses attached using the intended production technique. Standard
endurance and qualification tests should be performed in accordance with MIL-
STD-810, "Environmental Test Methods and Engineering Guidelines."

(4) Module Torque Stresses

Design Guidelines: The module should be capable of withstanding a 6 inch-
pound torque applied in 2 seconds and maintained for 15 seconds in both
directions along the header in a direction perpendicular to the plane of the header
without detrimental effect to the mechanical or electrical properties of the module.

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: The required torque should be applied in 2 seconds and
maintained for 15 seconds. During the time the torque is applied, the module
should be rigidly supported within a zone between the interface plane and 0.5 inch
above the interface panel.

(5) Module Cantilever Load

Design Guideline: The module should be capable of withstanding a force of 2
pounds applied perpendicular to the header height along the center line midway
between the two extractor holes.

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: The required force should be applied in two directions
and should be applied in 2 to 10 seconds and maintained for 10 to 15 seconds
without detrimental effect to the header structure.

(6) Module Retention

Design Guideline: Module retention techniques must be carefully designed to
integrate the insertion mechanism, required connector insertion force, thermal
contact area, and extraction mechanism. Conventional electronics have required
the same considerations, but to a lesser degree because of their more conventional
housings.

Analysis Recommendation: Specialized analyses should be used to quantify
torque requirements and limitations of the wedge-clamping device, lever moments
of insertion or extraction devices, tolerance buildups of the module slot and
connector placement and mechanical deflections of the backplane.

Test Recommendations: Standard endurance and qualification tests in
accordance with MIL-STD-810, "Environmental Test Methods and Engineering
Guidelines."

(7) Connector Contact Integrity

Design Guideline: Each contact pin, as mounted in the connector, should
withstand a minimum axial force of 20 ounces.

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: The required force should be applied in 2 seconds along
the length of the contact in either direction and maintained for 15 seconds.

(8) Connector Float

Design Guideline: The connector-to-module interface should be sufficiently
flexible to compensate for specified misalignments or tolerance buildup between
the module and the backplane connector shells.

Analysis Recommendation: Tolerance review should be performed early in the
design process.

Test Recommendation: Demonstration testing can be performed easily during
the initial mechanical design phase.

(9) Keying Pin Integrity

Design Guideline: When installed in the module, the keying pins should meet the
following integrity requirements. Each keying pin should withstand a:

● Torque of 20 inch-ounces

● Pullout force of 9 pounds

● Pushout force of 40 pounds

● Cantilever load of 10 pounds

Analysis Recommendation: A mechanical loads analysis should be performed to
verify compliance with the mechanical requirements.

Test Recommendation: The required forces should be applied to the keying pin
in 2 seconds and maintained for 15 seconds.

d. Power Supply Design


(1) Overcurrent Protection

Design Guideline: The power supply should supply 125 percent of its rated output
for 2 ± 0.25 seconds, after which the power supply will shut down (shut down is
defined as all outputs at less than 1 mV and 1 mA current, but all status and control
lines still operating). Operation should not resume until the power supply is reset.
In addition, the power supply outputs should be short circuit protected.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device should be
induced by application of the anomalous condition protected against. Correct
operation of the protective device should be observed. Normal specified power
supply operation should be verified after removal of the anomalous condition.

(2) Overvoltage Protection

Design Guideline: The output should be sensed for overvoltage. An overvoltage
on the output should immediately shut down the power supply. Operation should
not resume until the power supply is reset. The overvoltage limits should be
compatible with device logic absolute maximum limits. The overvoltage protection
and sense circuits should be constructed such that an overvoltage on a failed
power supply will not cause any other paralleled power supply to also shut down.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device should be
induced by application of the anomalous condition protected against. Correct
operation of the protective device should be observed. Normal specified power
supply operation should be verified after removal of the anomalous condition.

(3) Abnormal Thermal Operation

Design Guideline: In the event of an above-normal internal temperature, the
power supply should be capable of continued operation at a reduced power output.
Thermal sense circuits should regulate the output to the extent necessary to keep
semiconductor junctions at or below specified levels. The power supply should
resume operation at rated output if internal temperatures return to normal.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device should be
induced by application of the anomalous condition protected against. Correct
operation of the protective device should be observed. Normal specified power
supply operation should be verified after removal of the anomalous condition.

(4) Thermal Shutdown

Design Guideline: When thermal limiting is no longer capable of maintaining
internal temperature at an acceptable level, the power supply should automatically
shut down. Operation should not resume until the power supply is reset.
Temperature sense circuits should remain active during shut down.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device should be
induced by application of the anomalous condition protected against. Correct
operation of the protective device should be observed. Normal specified power
supply operation should be verified after removal of the anomalous condition.

(5) Power Supply Status Reporting

Design Guideline: There should be an interface on each power supply module
that will allow data communication between the power supply and a CPU located
on a separate module. Each power supply module will be addressed individually.
The data and control lines should interface to the power supply module through the
backplane connector. The following power supply parameters should be read by
the CPU:

● Overcurrent status

● Overvoltage status

● Thermal limiting mode status

● Thermal shutdown status

● Percentage of full output power available

The following commands should be issued by the CPU to the power supply
module:

● Reset

● Percentage of full output power required

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device (i.e.,
monitoring mechanism and control) should be induced by application of the
anomalous condition protected against. Correct operation of the protective device
should be observed. Normal specified power supply operation should be verified
after removal of the anomalous condition.

(6) Power Supply Input Protection

Design Guideline: The power supply should automatically shut down if the input
voltage is not within the specified allowable range, and at any time when the control
circuits in the power supply do not have adequate voltage to regulate the outputs.
This should include the time during normal start-up when generators are not
producing their normal output voltage.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Specified operation of the protective device should be
induced by application of the anomalous condition protected against. Correct
operation of the protective device should be observed. Normal specified power
supply operation should be verified after removal of the anomalous condition.

(7) Backplane Conditions

Design Guideline: A sufficient number of connector pins should be paralleled so
that no backplane connector pin carries more than 5 amps of current.

Analysis Recommendation: Compliance with the specified operation should be
verified throughout the design process.

Test Recommendation: Not applicable.

(8) M-of-N Power Supply Redundancy

Design Guideline: The quantity of power supplies for a system of functional
elements should be determined to allow uninterrupted operation if one of the power
supplies fails. When all power supplies are functional, they should share the
system load equally by operating at reduced output. If the system power
requirement is less than that available from one power supply, redundancy should
not be used unless a critical function is involved.

Analysis Recommendation: Compliance should be verified by electrical loads
analysis.

Test Recommendation: Not applicable.
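
The sizing called for in this guideline is the usual k-of-n redundancy calculation; the
minimal sketch below compares a three-supply load with and without a fourth (redundant)
unit, using hypothetical mission time and unit MTBF values and an assumed exponential
unit reliability.

    # k-of-n redundancy check for the power supply complement (illustrative values).
    import math

    def k_of_n_reliability(k, n, unit_reliability):
        """Probability that at least k of n identical, independent units survive."""
        return sum(math.comb(n, i) * unit_reliability**i * (1 - unit_reliability)**(n - i)
                   for i in range(k, n + 1))

    mission_hours = 1000.0
    unit_mtbf = 20000.0
    r_unit = math.exp(-mission_hours / unit_mtbf)   # exponential unit reliability

    # The load requires 3 supplies; a 4th is added so any single failure is tolerated.
    print("3-of-3 (no spare):  R = %.4f" % k_of_n_reliability(3, 3, r_unit))
    print("3-of-4 (one spare): R = %.4f" % k_of_n_reliability(3, 4, r_unit))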

(9) Current Sharing

Design Guideline: The power supplies should be constructed so that units which
have the same output voltage may operate in parallel. The design should be such
that power supply failures will not cause degradation of parallel power supplies.
Each power supply should provide its proportional share (±10%) of the total electric
load required at the configured output voltage.

Analysis Recommendation: Compliance with the specified operation should be
verified as a part of the design process.

Test Recommendation: A demonstration should be conducted under load to
verify that the parallel power supplies power up and power down in unison. Failure
and reset of one of the power supplies should be simulated or induced to
demonstrate proper operation of the remaining units through the transition.

(10) Protective Device Operation

Design Guideline: During parallel operation, each power supply protective device
should be capable of sensing and operating independently of the other power
supplies. Master-slave type operation should not be permitted under any
circumstances.

Analysis Recommendation: Compliance with the specified operation should be
verified as a part of the design process.

Test Recommendation: A demonstration should be conducted under load to
verify proper operation of each protective device during parallel operation.

e. Memory Fault Tolerance


(1) Block Masking

Design Guideline: Known locations of defective memory should be mapped out of
the memory directories. In this manner, permanently failed cells can be prevented
from contributing to double error occurrences in combination with soft errors. At
power-up or reinitialization, BIT should perform a memory test routine and leave a
memory map of all good blocks. At the conclusion of the memory test routine, all
words contained in the memory blocks marked good should have been initialized in
an error free data pattern. Program loader software should make use of the good
memory block map, the process memory mapping registers, and information stored
in program file headers to load distributed operating systems and application
programs into the remaining good areas of main memory. Repair or replacement
of the module should not be required until the number of remaining good blocks of
memory are insufficient to meet operational requirements.

Analysis Recommendation: An analysis should be performed to identify the
optimum combination of component/bit mapping, hardware control and software
control.

Test Recommendation: Not applicable.

(2) Error Detection/Correction

Design Guideline: As a minimum, single error correct/double error detect code
should be used in large bulk semiconductor memories. It should be considered in
any application involving large amounts of semiconductor memory, but may impose
unacceptable speed and complexity penalties in some applications (e.g., CPU).

Analysis Recommendation: A detailed timing analysis should be conducted to
determine the impact of this technique on the specific application.

Test Recommendation: System bench testing should be used to insert faults and
confirm expected system operation.
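
To illustrate the single error correct/double error detect behavior described above,
the sketch below implements a small Hamming code with an added overall parity bit (an
(8,4) code); production memories use wider codes such as (72,64), but the correction
and detection logic is the same in principle.

    # Minimal SEC/DED illustration: Hamming(7,4) plus an overall parity bit.
    def encode(data_bits):                       # data_bits: 4 bits, e.g. [1, 0, 1, 1]
        d3, d5, d6, d7 = data_bits
        p1 = d3 ^ d5 ^ d7
        p2 = d3 ^ d6 ^ d7
        p4 = d5 ^ d6 ^ d7
        word = [p1, p2, d3, p4, d5, d6, d7]      # code bits at positions 1..7
        p0 = 0
        for b in word:                           # overall parity over positions 1..7
            p0 ^= b
        return [p0] + word                       # position 0 carries overall parity

    def decode(word):                            # word: 8 bits as produced by encode()
        s1 = word[1] ^ word[3] ^ word[5] ^ word[7]
        s2 = word[2] ^ word[3] ^ word[6] ^ word[7]
        s4 = word[4] ^ word[5] ^ word[6] ^ word[7]
        syndrome = s1 + 2 * s2 + 4 * s4          # position of a single-bit error
        overall = 0
        for b in word:
            overall ^= b                         # 0 if an even number of bits flipped
        corrected = word[:]
        if syndrome == 0 and overall == 0:
            status = "no error"
        elif overall == 1:                       # odd error count: correct one bit
            corrected[syndrome] ^= 1             # syndrome 0 points at the parity bit
            status = "single error corrected"
        else:                                    # even error count, nonzero syndrome
            status = "double error detected"
        data = [corrected[3], corrected[5], corrected[6], corrected[7]]
        return data, status

    codeword = encode([1, 0, 1, 1])
    upset = codeword[:]
    upset[5] ^= 1                                # inject a single-bit (soft error) upset
    print(decode(upset))                         # ([1, 0, 1, 1], 'single error corrected')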

Appendix 5
Reliability Demonstration Testing

1.0 Reliability Demonstration Testing: This appendix presents tables and
examples which summarize the following:

● MIL-HDBK-781 "Reliability Test Methods, Plans and Environments for
Engineering Development, Qualification and Production"

● Confidence Interval Calculations

● Poisson's Exponential Binomial Limits

2.0 MIL-HDBK-781 Test Plans: Tables 5-1 and 5-2 summarize standard test
plans as defined in MIL-HDBK-781. These plans assume an exponential failure
distribution. For nonexponential situations the risks are different.

The fixed length test plans (Table 5-1) must be used when the exact length and
cost of the test must be known beforehand and when it is necessary to
demonstrate a specific MTBF to a predetermined confidence level by the test as
well as reach an accept/reject decision.

The probability ratio sequential test (PRST) plans (Table 5-2) will accept material
with a high MTBF or reject material with a very low MTBF more quickly than fixed
length test plans having similar risks and discrimination ratios. However, different
MTBF's may be demonstrated by different accept decision points for the same test
plan and the total test time may vary significantly.

Additional guidance on test plan selection is provided in Section T, Topic T5.

2.1 Fixed Length Test Plan Example: If the design goal MTBF (θ0) for a
system is specified as 750 hours and Test Plan XID is chosen, the following
statements can be made:

a. There is a 20 percent probability of rejecting a system whose true MTBF is
750 hours (producers risk).

b. There is a 20 percent probability of accepting a system whose true MTBF is
500 hours (consumers risk).

c. The lower test MTBF (θ1) is 500 hours (750/1.5).

d. The duration of the test is 10,750 hours (21.5 x 500).

e. The test will reject any system which experiences 18 or more failures.

f. The test will accept any system which experiences 17 or less failures.
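
Because the accept/reject decision in a fixed length test depends only on the number
of failures observed in a known test time, the risks quoted above can be checked
directly with the Poisson distribution (assuming the exponential failure distribution
noted in 2.0). The sketch below does this for the Test Plan XID values as a
consistency check; scipy is assumed to be available for the Poisson tail probabilities.

    # Approximate risks of a fixed length test plan (Test Plan XID example values).
    from scipy.stats import poisson

    theta_0 = 750.0              # design goal MTBF (hours)
    theta_1 = 500.0              # lower test MTBF (hours)
    test_time = 21.5 * theta_1   # 10,750 hours
    accept_max = 17              # accept with 17 or fewer failures

    producers_risk = poisson.sf(accept_max, test_time / theta_0)   # P(reject | theta_0)
    consumers_risk = poisson.cdf(accept_max, test_time / theta_1)  # P(accept | theta_1)
    print("Producer's risk: %.2f" % producers_risk)   # roughly 0.2
    print("Consumer's risk: %.2f" % consumers_risk)   # roughly 0.2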

Table 5-1: Fixed Length MIL-HDBK-781 Reliability Demonstration Test Plans

Table 5-2: MIL-HDBK-781 PRST Reliability Demonstration Test Plan Summary

2.2 PRST Test Plan Example: If the design goal MTBF (θ0) for a system is
specified as 750 hours and Test Plan IID is chosen, the following statements can
be made:

a. There is a 20 percent probability of rejecting a system whose true MTBF is
750 hours (producers risk).
b. There is a 20 percent probability of accepting a system whose true MTBF is
500 hours (consumers risk).
c. The lower test MTBF (θ1) is 500 hours (750/1.5).
d. The minimum time to an accept decision is 2095 hours (4.19 x 500).
e. The expected time to an accept decision is 5700 hours (11.4 x 500).
(Expected time to decision based on assumption of a true MTBF equal to θ0).
f. The maximum time to reach an accept decision is 10950 hours (21.9 x 500).

3.0 Confidence Level Calculation (Exponential Distribution): There are two
ways to end a reliability test, either on a specified number of failures occurring
(failure truncated), or on a set period of time (time truncated). There are usually two
types of confidence calculations of interest, either one sided (giving the confidence
that an MTBF is above a certain value) or two sided (giving the confidence that an
MTBF is between an upper and lower limit). Table 5-4 provides a simple means to
estimate one or two sided confidence limits. Multiply the appropriate factor by the
observed total life (T) to obtain the desired confidence interval.

Example 1 - Failure Truncated Test with Replacement: Twenty items are tested
and replaced until 10 failures are observed. The tenth failure occurs at 80 hours.
Determine the mean life of the items and the one-sided and two-sided 95%
confidence intervals for the MTBF.

Solution: The mean life is (20 items) (80 hours/item) / 10 failures = 160 hours.
From Table 5-4, Note 2 applies, d = (2)(10) = 20. The following factors are
obtained from the table:

95% two-sided lower factor = .0585
95% two-sided upper factor = .208
95% one-sided lower factor = .0635

Multiplying these factors by 1600 total part hours (i.e., (20 items) (80 hours/item))
results in a 95% confidence that the MTBF is between 94 hours and 333 hours, or
a 95% confidence that the MTBF is at least 102 hours.
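
The Table 5-4 factors are chi-square percentiles in disguise, so the Example 1 limits
can also be computed directly, as sketched below for a failure truncated test (d = 2r
degrees of freedom); scipy is assumed for the chi-square percentiles, and the results
agree with the 94, 333 and 102 hour values above to within the rounding of the table
factors.

    # MTBF confidence limits for Example 1 (failure truncated test, with replacement).
    from scipy.stats import chi2

    T, r = 1600.0, 10                 # total part hours and observed failures
    d = 2 * r                         # degrees of freedom, failure truncated case

    lower_95_2s = 2 * T / chi2.ppf(0.975, d)
    upper_95_2s = 2 * T / chi2.ppf(0.025, d)
    lower_95_1s = 2 * T / chi2.ppf(0.95, d)
    print("95%% two-sided interval: %.1f to %.1f hours" % (lower_95_2s, upper_95_2s))
    print("95%% one-sided lower limit: %.1f hours" % lower_95_1s)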

Table 5-4: Factors for Calculation of Mean Life Confidence Intervals from Test Data

Example 2 - Time Terminated Test without Replacement: Twenty items are
placed on test for 100 hours with seven failures occurring at the 10, 16, 17, 25, 31,
46 and 65 hour points. Determine the one-sided lower 90% confidence interval.

Solution: The total number of part hours accumulated is:

10 + 16 + 17 + 25 + 31 + 46 + 65 + (13 non-failed items) (100 hours) = 1510 hrs.

The MTBF is 1510 hours/7 failures = 216 hrs.

From Table 5-4, Note 3 applies, d = 2(7+1) = 16.

The factor from the table is .0848 for the 90% one-sided lower limit. Therefore, we
are 90% confident that the MTBF is greater than (.0848)(1510 hours) = 128 hours.
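
For a time terminated test the same chi-square relationship is used with d = 2(r + 1)
degrees of freedom for the lower limit, which reproduces the Example 2 result; the
scipy call is, again, just one convenient way to obtain the percentile.

    # One-sided lower 90% MTBF limit for Example 2 (time terminated, no replacement).
    from scipy.stats import chi2

    T, r = 1510.0, 7
    d = 2 * (r + 1)                   # 16 degrees of freedom
    print("90%% lower limit: %.0f hours" % (2 * T / chi2.ppf(0.90, d)))   # about 128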

4.0 Poisson Distribution: The Poisson distribution is useful in calculating the
probability that a certain number of failures will occur over a certain length of time
for systems exhibiting exponential failure distributions (e.g., non-redundant
electronic systems). The Poisson model can be stated as follows:

P(r) = [e^(-λt) (λt)^r] / r!

where
P(r) = probability of exactly r failures occurring

λ = the true failure rate per hour (i.e., the failure rate which would be
exhibited over an infinite period)

t = the test time

r = the number of failure occurrences

e = 2.71828 . . . ,

! = factorial symbol (e.g., 4! = 4 x 3 x 2 x 1 = 24, 0! = 1, 1! = 1 )

The probability of exactly 0 failures results in the exponential form of this
distribution which is used to calculate the probability of success for a given period
of time (i.e., P(0) = e^(-λt)). The probability of more than one failure occurring is the
sum of the probabilities of individual failures occurring. For example, the probability
of two or less failures occurring is P(0) + P(1) + P(2). Table 5-5 is a tabulation of
exact probabilities used to find the probability of an exact number of failures
occurring. Table 5-6 is a tabulation of cumulative probabilities used to find the
probability of a specific number of failures, or less, occurring.
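
The exact and cumulative terms tabulated in Tables 5-5 and 5-6 are also available from
standard statistics libraries; the short sketch below evaluates them with scipy's
poisson object, which is assumed to be available, and reproduces the value used in
Example 1 below.

    # Exact and cumulative Poisson probabilities, as used with Tables 5-5 and 5-6.
    from scipy.stats import poisson

    lam_t = 5.0                                   # expected number of failures (λt)
    print("P(exactly 3 failures) = %.3f" % poisson.pmf(3, lam_t))
    print("P(3 or less failures) = %.3f" % poisson.cdf(3, lam_t))   # .265, Example 1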

4.1 Poisson Example 1: If the true MTBF of a system is 200 hours and a
reliability demonstration test is conducted for 1000 hours, what is the probability of
accepting the system if three or less failures are allowed?

Solution: Expected number of failures = λt = t/MTBF = 1000/200 = 5

From Table 5-6, the probability of three or less failures (probability of acceptance)
given that five are expected is .265. Therefore, there is only a 26.5 percent chance
that this system will be accepted if subjected to this test.

4.2 Poisson Example 2: A system has an MTBF of 50 hours. What is the
probability of two or more failures during a 10 hour mission?

Solution: Expected number of failures = t/MTBF = 10/50 = .2

The probability of two or more failures is one minus the probability of one or less
failures. From Table 5-6, P(r ≤ 1) when .2 are expected is .982.

P(r ≥ 2) = 1 - P(r ≤ 1) = 1 - .982 = .018

Therefore, there is a very remote chance (1.8 percent) that a system with a 50 hour
MTBF will experience two or more failures during a 10 hour mission.

4.3 Poisson Example 3: A system has an MTBF of 50 hours. What is the
probability of experiencing two failures during a 10 hour mission?

Solution: Expected number of failures = t/MTBF = 10/50 = .2

From Table 5-5, the probability of experiencing exactly two failures when .2 are
expected is .017 or 1.7 percent. It should be noted that the probability of
experiencing two or more failures, as determined in the last example, can also be
determined from this table by adding P(r = 2) + P(r = 3) when .2 are expected.

Table 5-5: Summation of Terms of Poisson's Exponential Binomial Limit
1000 times the probability of exactly r failure occurrences given an average
number of occurrences equal to λt.

Table 5-6: Summary of Terms of Poisson's Exponential Binomial Limit
1000 times the probability of r or less failure occurrences given an average
number of occurrences equal to λt.

Appendix 6
Reliability Growth Testing

1.0 RGT Definition: MIL-STD-785 distinguishes reliability growth testing
(RGT) from reliability qualification testing (RQT) as follows:

Reliability Growth Test (RGT): A series of tests conducted to disclose
deficiencies and to verify that corrective actions will prevent recurrence in the
operational inventory. (Also known as "TAAF" testing).

Reliability Qualification Test (RQT): A test conducted under specified conditions,
by, or on behalf of, the government, using items representative of the approved
production configuration, to determine compliance with specified reliability
requirements as a basis for production approval. (Also known as a "Reliability
Demonstration," or "Design Approval" test.)

2.0 RGT Application Effectiveness: An effective way to explain the concept of
RGT is by addressing the most frequently asked questions relative to its use as
summarized from "Reliability Growth Testing Effectiveness" (RADC-TR-84-20). For
more information consult this reference and MIL-HDBK-189, "Reliability Growth
Management."

Who pays for the RGT? Does the government end up paying more? The usual
case is that the government pays for the RGT, both in additional reliability program
cost and in a stretched-out schedule. The savings in support costs (recurring
logistics costs) exceed the additional initial acquisition cost, resulting in a net
savings in Life Cycle Cost (LCC). The amount of these savings is dependent on the
quantity to be fielded, the maintenance concept, the sensitivity of LCC to reliability
and the level of development required. It is the old "pay me now or pay me later
situation" which in many cases makes a program manager's situation difficult
because his or her performance is mainly based on the "now" performance of cost
and schedule.

Does RGT allow contractors to "get away with" a sloppy initial design
because they can fix it later at the government's expense? It has been shown
that unforeseen problems account for 75% of the failures due to the complexity of
today's equipment. Too low an initial reliability (resulting from an inadequate
contractor design process) will necessitate an unrealistic growth rate in order to
attain an acceptable level of reliability in the allocated amount of test time. The
growth test should be considered as an organized search and correction system for
reliability problems that allows problems to be fixed when it is least expensive. It is
oriented towards the efficient determination of corrective action. Solutions are
emphasized rather than excuses. It can give a nontechnical person
an appreciation of reliability and a way to measure its status.

Should all development programs have some sort of growth program? The
answer to this question is yes in that all programs should analyze and correct
failures when they occur in prequalification testing. A distinction should be made in
the level of formality of the growth program. The less challenge there is to the
state-of-the-art, the less formal (or rigorous) a reliability growth program should be. An
extreme example would be the case of procuring off-the-shelf equipment to be part
of a military system. In this situation, which really isn't a development, design
flexibility to correct reliability problems is mainly constrained to newly developed
interfaces between the "boxes" making up the system. A rigorous growth program
would be inappropriate but a failure reporting and corrective action system
(FRACAS) should still be implemented. The other extreme is a developmental
program applying technology that challenges the state-of-the-art. In this situation a
much greater amount of design flexibility to correct unforeseen problems exists.
Because the technology is so new and challenging, it can be expected that a
greater number of unforeseen problems will be surfaced by growth testing. All
programs can benefit from testing to find reliability problems and correcting them
prior to deployment, but the number of problems likely to be corrected and the cost
effectiveness of fixing them is greater for designs which are more complex and
challenging to the state-of-the-art.

How does the applicability of reliability growth testing vary with the following
points of a development program?

(1) Complexity of equipment and challenge to state-of-the-art? The more
complex or challenging the equipment design is, the more likely there will
be unforeseen reliability problems which can be surfaced by a growth
program. However, depending on the operational scenario, the number of
equipments to be deployed and the maintenance concept, there may be a
high LCC payoff in using a reliability growth program to fine tune a
relatively simple design to maximize its reliability. This would apply in
situations where the equipments have extremely high usage rates and LCC
is highly sensitive to MTBF.

(2) Operational environment? All other factors being equal, the more severe
the environment, the higher the payoff from growth testing. This is because
severe environments are more likely to inflict the unforeseen stresses that
cause reliability problems needing correction.

(3) Quantity of equipment to be produced? The greater the quantity of
equipment, the greater the impact on LCC from reliability improvement through
a reliability growth effort.

What reliability growth model(s) should be used? The model to be used, as
MIL-HDBK-189 says, is the simplest one that does the job. The Duane model is
certainly the most common, with the AMSAA model, developed by Dr. Larry H.
Crow of the Army Materiel Systems Analysis Activity, probably second. Both have
advantages: the Duane is simple, with parameters having an easily recognizable
physical interpretation, while the AMSAA has rigorous statistical procedures
associated with it. MIL-HDBK-189 suggests the Duane for planning and the
AMSAA for assessment and tracking. When an RQT is required, the RGT should
be planned and tracked using the Duane model; otherwise, the AMSAA model is
recommended for tracking because it allows for the calculation of confidence limits
around the data.

Should there be accept/reject criteria? The purpose of reliability growth
testing is to uncover failures and take corrective actions to prevent their
recurrence. Accept/reject criteria are a negative contractor incentive toward this
purpose. Monitoring the contractor's progress and loosely defined
thresholds are needed, but imposing accept/reject criteria, or using a growth test
as a demonstration, defeats the purpose of running it. A degree of progress
monitoring is necessary even when the contractor knows that following the
reliability growth test he will be held accountable by a final RQT. Tight thresholds
make the test an RQT in disguise. Reliability growth can be incentivized, but it
shouldn't be: rewarding a contractor for meeting a certain threshold in a shorter
time, or indicating that "if the RGT results are good, the RQT will be waived,"
diminishes the contractor's incentive to "find and fix." The growth test's primary
purpose is to improve the design, not to evaluate the design.

What is the relationship between an RQT and RGT? The RQT is an
"accounting task" used to measure the reliability of a fixed design configuration. It
has the benefit of holding the contractor accountable some day down the road from
his initial design process. As such, he is encouraged to seriously carry out the
other design related reliability tasks. The RGT is an "engineering task" designed to
improve the design reliability. It recognizes that the drawing board design of a
complex system cannot be perfect from a reliability point of view and allocates the
necessary time to fine tune the design by finding problems and designing them out.
Monitoring, tracking and assessing the resulting data gives insight into the
efficiency of the process and provides nonreliability persons with a tool for
evaluating the development's reliability status and for reallocating resources when
necessary. The two forms of testing serve very different purposes and complement
each other in the development of systems and equipments. An RGT is not a
substitute for an RQT or for any other reliability design task.

How much validity/confidence should be placed on the numerical results of
RGT? Associating a hard reliability estimate with a growth process, while
mathematically practical, has the tone of an assessment process rather than an
improvement process, especially if an RQT assessment will not follow the RGT. In
an ideal situation, where contractors are not driven by profit motives, a reliability
growth test could serve as an improvement and assessment vehicle. Since this is
not the real world, the best that can be done if meaningful quantitative results are
needed without an RQT, is to closely monitor the contractor RGT. Use of the
AMSAA model provides the necessary statistical procedures for associating
confidence levels with reliability results. In doing so, closer control over the
operating conditions and failure determinations of the RGT must be exercised than
if the test is for improvement purposes only. A better approach is to use a less
closely controlled growth test as an improvement technique (or a structured
extension of FRACAS, with greater emphasis on corrective action) to fine tune the
design as insurance of an accept decision in an RQT. With this approach,
monitoring an improvement trend is more appropriate than development of hard
reliability estimates. Then use a closely controlled RQT to determine acceptance
and predict operational results.

3.0 Duane Model: Because the Duane model is the one most commonly used, it
will be further explained. The model assumes that the plot of MTBF versus time is
a straight line when plotted on log-log paper. The main advantage of this model is
that it is easy to use. The disadvantage of the model is that it assumes a fix is
incorporated immediately after a failure occurs (before further test time is
accumulated). Because fixes are not developed and implemented that easily in real
life, this is rarely the case. Despite this problem, it is still considered a useful
planning tool. Below is a brief summary of the Duane model.

a. Growth Rate:           α = ΔMTBF / ΔTIME

b. Cumulative MTBF:       MTBFc = (1/K) T^α

c. Instantaneous MTBF:    MTBFi = MTBFc / (1 - α)

d. Test Time:             T = [ (MTBFi)(K)(1 - α) ]^(1/α)

e. Preconditioning period at which the system will realize an initial MTBF of MTBFc:

                          Tpc = (1/2)(MTBFPRED)

where:
   K = a constant which is a function of the initial MTBF
   α = the growth rate
   T = the test time

The instantaneous MTBF is the model's mathematical representation of the MTBF
if all previous failure occurrences are corrected. Therefore, there is no need to
selectively purge corrected failures from the data.
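
As an illustration of these relationships, the short Python sketch below computes
the cumulative and instantaneous MTBF for a given growth rate and constant K,
and the test time needed to reach a required instantaneous MTBF. It is a minimal
sketch of equations a through d above; the function names and the values of K,
the growth rate and the target MTBF are illustrative assumptions, not values taken
from this Toolkit.

# Minimal sketch of the Duane model relationships (a through d above).
# The values of K, alpha and the target MTBF are illustrative assumptions only.

def cumulative_mtbf(T, K, alpha):
    """MTBFc = (1/K) * T**alpha: cumulative MTBF after T test hours."""
    return (1.0 / K) * T ** alpha

def instantaneous_mtbf(T, K, alpha):
    """MTBFi = MTBFc / (1 - alpha): instantaneous MTBF after T test hours."""
    return cumulative_mtbf(T, K, alpha) / (1.0 - alpha)

def test_time_required(mtbf_i, K, alpha):
    """T = [(MTBFi)(K)(1 - alpha)]**(1/alpha): test hours needed to reach MTBFi."""
    return (mtbf_i * K * (1.0 - alpha)) ** (1.0 / alpha)

K, alpha = 0.02, 0.4          # assumed constant and growth rate
target_mtbf = 500.0           # assumed required instantaneous MTBF (hours)
T = test_time_required(target_mtbf, K, alpha)
print(f"Test hours needed: {T:.0f}")
print(f"Check: instantaneous MTBF at that point = {instantaneous_mtbf(T, K, alpha):.0f} h")

In practice the computed test length would still be checked against the planning
considerations of Table 6-1 below (for example, a minimum test length of about 5
times the predicted MTBF, and roughly doubling test hours to estimate calendar
time).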

The scope of the up-front reliability program, severity of the use environment and
system state-of-the-art can have a large effect on the initial MTBF and, therefore,
the test time required. The aggressiveness of the test team and program office in
ensuring that fixes are developed and implemented can have a substantial effect
on the growth rate and, therefore, test time. Other important considerations for
planning a growth test are provided in Table 6-1.

Table 6-1: RGT Planning Considerations

● To account for down time, calendar time should be estimated to be roughly
twice the number of test hours.

● A minimum test length of 5 times the predicted MTBF should always be used
(if the Duane Model estimates less time). Literature commonly quotes typical
test lengths of 5 to 25 times the predicted MTBF.

● For large MTBF systems (e.g., greater than 1000 hours), the preconditioning
period equation does not hold; 250 hours is commonly used.

● The upper limit on the growth rate is 0.6 (growth rates above 0.5 are rare).

4.0 Prediction of Reliability Growth Expected: It is possible to estimate the
increase in reliability that can be expected for an equipment undergoing a reliability
growth development program. The methodology to do this is documented in RADC-
TR-86-148 "Reliability Growth Prediction."

4.1 Terms Explained:

λp = MIL-HDBK-217 predicted equipment failure rate (failures per hour)

Fm = Equipment maturity factor, estimated as the percentage of the design
     which is new

K1 = Number of failures in the equipment prior to test:

     K1 = 30,000 x Fm x λp

FA = Test acceleration factor, based on the degree to which the test
     environment cycle represents the operational environmental cycle:

     FA = TOPERATIONAL / TTEST

     where TOPERATIONAL = length of the operational cycle and
           TTEST = length of the test cycle

K2 = Rate at which failures are surfaced during test:

     K2 = (0.0005 / 6.5) x FA

4.2 Prediction Procedure:

a. Calculate the equipment MTBF prior to test, MTBF(o):

   MTBF(o) = [ λp + (0.0005 x K1) / 6.5 ]^-1

b. Calculate the equipment MTBF after "t" hours of growth testing:

   MTBF(t) = FA / [ (FA)(λp) + K1 x K2 x e^(-K2 x t) ]

c. Percent MTBF Improvement = [ MTBF(t) / MTBF(o) ] x 100

4.3 Example:

To illustrate application of the reliability growth prediction procedure, consider the
following hypothetical example of an avionics equipment to be subjected to
reliability growth testing during full-scale development. The following assumptions
are made:

● 40 percent of the equipment is new design; the remainder is comprised of
mature, off-the-shelf items.

● The MIL-HDBK-217 MTBF prediction is 300 hours (λp = 1/300).

● An RGT program is to be conducted during which 3000 hours will be
accumulated on the equipment.

● The operational cycle for the equipment is a ten-hour aircraft mission.

● The test profile eliminates the period of operation in a relatively benign
environment (e.g., the cruise portion of the mission), resulting in a test cycle of
two hours.

The predicted number of failures in the equipment prior to testing is:

   K1 = 30,000 x (0.4) x (1/300) = 40

The initial MTBF is:

   MTBF(o) = [ 1/300 + (0.0005 x 40) / 6.5 ]^-1 = 156 hours

The test acceleration factor is:

   FA = 10 / 2 = 5

The rate of surfacing failures during the test is:

   K2 = (0.0005 / 6.5) x 5 = 0.0003846

The equipment MTBF after incorporation of corrective actions to eliminate those
failures identified in the RGT program is:

   MTBF(3000) = 5 / [ 5 x (1/300) + 40 x 0.0003846 x e^(-0.0003846 x 3000) ]
              = 232 hours

Hence, the predicted reliability growth is from an initial MTBF of 156 hours to an
improved MTBF of 232 hours, approximately a 50 percent improvement.
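
The prediction procedure of Section 4.2 is simple enough to script. The following
is a minimal Python sketch (the function and variable names are assumptions, not
taken from RADC-TR-86-148) that reproduces the worked example above.

import math

def growth_prediction(lambda_p, new_fraction, t_oper, t_test, test_hours):
    """Return (MTBF prior to test, MTBF after test_hours of growth testing)."""
    k1 = 30000.0 * new_fraction * lambda_p          # failures present prior to test
    fa = t_oper / t_test                            # test acceleration factor
    k2 = (0.0005 / 6.5) * fa                        # rate of surfacing failures
    mtbf_0 = 1.0 / (lambda_p + 0.0005 * k1 / 6.5)   # MTBF(o)
    mtbf_t = fa / (fa * lambda_p + k1 * k2 * math.exp(-k2 * test_hours))
    return mtbf_0, mtbf_t

# Section 4.3 example: 40% new design, 300 h predicted MTBF,
# 10 h operational cycle, 2 h test cycle, 3000 h of growth testing.
m0, mt = growth_prediction(1.0 / 300.0, 0.4, 10.0, 2.0, 3000.0)
print(f"MTBF prior to test: {m0:.0f} hours")        # approximately 156 hours
print(f"MTBF after test:    {mt:.0f} hours")        # approximately 232 hours
print(f"Percent MTBF improvement: {100.0 * mt / m0:.0f}")  # about 149, i.e., ~50% growth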

Appendix 7
Maintainability/Testability
Demonstration Testing


1.0 Testing: This appendix presents a listing of the possible maintainability
demonstration plans as determined from MIL-STD-471 "Maintainability Verification
Demonstration/Evaluation" and general plans for testability demonstrations. In most
circumstances, maintainability and testability demonstrations are linked together
and tested concurrently. Concurrent testing is cost effective and reduces the total
number of tasks that must be demonstrated.

2.0 Maintainability: For maintainability there are two general classes of
demonstration: tests that use naturally occurring failures, and tests that require
induced failures. Natural failure testing requires a long test period, while induced
failure testing is limited only by the time to find and fix the fault. To run a thirty-task
test using induced faults, the test time should be less than a week, while a natural
failure test could require six months or more depending on the failure frequency.

2.1 Maintainability Test Recommendations (See Table 7-1 for complete MIL-
STD-471 Test Plan listing.)

● Test plan eight should be used if dual requirements of the mean and either
90th or 95th percentile of maintenance times are specified and a lognormal
distribution is expected.
● Test plan nine should be used for mean corrective maintenance, mean
preventive maintenance or combination of corrective and preventive
maintenance testing. Any underlying distribution can be used in this test plan.
● The sample size of the tasks to be demonstrated should exceed 400 to
reduce the risk of biasing the test results.
● The task samples must be based on the failure rate distribution of the
equipment to be tested.
● Final selection of the tasks to be demonstrated must be performed by the
procuring activity just prior to test.

3.0 Testability: Three parameters which are usually tested in a testability
demonstration are: the fault detection capability, the fault isolation capability, and
the false alarm rate. Fault detection and isolation parameters are demonstrated
using induced faults, while false alarm demonstrations are based on naturally
occurring events. (See Table 7-2 for more information on testability demonstration.)

3.1 Testability Test Recommendations:

● Fault detection and isolation testing should be combined.
● Test samples should exceed 400 to reduce any bias.
● The test samples should be based on the failure rate distribution of the
equipment to be tested, as sketched after this list.
● False alarm demonstration should be a data collection effort using all the
contractor planned tests such as acceptance testing and initial operating
tests (IOT).
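
Both the maintainability and testability recommendations above call for the
demonstration task sample to follow the failure rate distribution of the equipment
under test. The Python sketch below illustrates one way to do this; it is a minimal
illustration only, and the unit names, failure rates and sample sizes are
assumptions rather than values taken from MIL-STD-471.

import random

# Minimal sketch: apportion and draw demonstration tasks in proportion to
# predicted failure rates. All names and numbers below are assumed examples.
failure_rates = {                 # failures per million hours (assumed)
    "power supply": 120.0,
    "processor": 310.0,
    "receiver": 240.0,
    "display": 80.0,
}

def allocate_tasks(rates, n_tasks):
    """Number of demonstration tasks per unit, proportional to failure rate."""
    total = sum(rates.values())
    return {unit: round(n_tasks * r / total) for unit, r in rates.items()}

def draw_sample(rates, n_tasks, seed=1):
    """Randomly pick which unit each induced-fault task exercises."""
    random.seed(seed)
    units, weights = zip(*rates.items())
    return random.choices(units, weights=weights, k=n_tasks)

print(allocate_tasks(failure_rates, 400))   # e.g., a 400-task sample, per the text
print(draw_sample(failure_rates, 10))

The final selection of tasks is still made by the procuring activity just prior to test,
as noted in Section 2.1; a sketch like this only helps verify that a proposed sample
is not biased toward low-failure-rate items.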

Table 7-1: Maintainability Demonstration Test Plan Summary

Table 7-2: Testability Demonstration Plans

Appendix 8
Reliability and Maintainability
Data Sources


1.0 Air Force Databases

G063: Maintenance and Operational Data Access System (MODAS): MODAS
is an on-line data storage and access system to track field maintenance events for
purposes of product improvement, monitoring product performance and enhancing
reliability and maintainability. The data base is menu driven and contains data on
both ground and airborne equipment. Data can be sorted and accessed in several
ways. For example, data on the top 50 most maintained subsystems on an aircraft
can be viewed for a specific geographical area or for a specific aircraft platform.
Mean-time-between-maintenance actions (MTBMA) can be calculated from the
data on airborne systems because flight hours are also provided with the number of
maintenance actions.

Air Force Materiel Command/ENIT
Wright-Patterson AFB OH 45433-5001
(513) 257-6021
DSN: 787-6021

Reliability and Maintainability Information System (REMIS): REMIS is a central
source on-line data access system containing all unclassified maintenance,
operational, configuration and selected supply information for USAF weapon
systems. REMIS, when completed, will be a conglomeration of almost all of the Air
Force databases.

Air Force Materiel Command/MSC/SR
Wright-Patterson AFB OH 45433-5001
(513) 429-5076
DSN: 787-5076

D041: Requirements Computation System: This system contains part failure
rates and data assets for recoverable items.

Air Force Materiel Command/XRII
Wright-Patterson AFB OH 45433-5001
(513) 257-5361
DSN: 787-5361

Tactical Interim CAMMS and REMIS Reporting System (TICARRS): This
system reports on F-15 and F-16 aircraft inventory, utilization and maintenance.

Dynamics Research Corporation
60 Frontage Rd
Andover MA 01810
(800) 522-4321, x2612

G021: Product Quality Deficiency Reporting (PQDR): This system provides
procedures for assuring that the quality deficiency data generated by using
activities are effective and appropriate management levels are apprised of quality
problems. Also, it provides tracking to assure that corrective and preventive
actions are carried out to alleviate future quality problems.

Air Force Materiel Command/ENI
Wright Patterson AFB OH 45433-5001
(513) 257-6021
DSN: 787-6021

Systems Effectiveness Data System (SEDS): This system contains R&M test
data obtained during test and evaluation of new systems at Eglin AFB FL.

Aeronautical Systems Center/ENM
Eglin AFB FL 32542
(904) 882-8652
DSN: 872-8652

Visibility and Management of Operating and Support Costs Program
(VAMOSC): This system contains operating and support cost for parts used in
over 100 aircraft.

Air Force Cost Analysis Agency/ISM
Wright-Patterson AFB OH 45433
(513) 257-4963
DSN: 787-4963

Reliability, Availability, Maintainability of Pods (RAMPOD)

Warner Robins Air Logistics Center/LNXA
RAMPOD Program Office
Robins AFB GA
(912) 926-5404
DSN: 468-5404

2.0 Navy Databases

3M: Maintenance, Material, Management System: 3M is a mass-data collection
system which tracks maintenance information at the organizational and
intermediate levels on all types of equipments and assemblies used on Navy ships,
submarines and aircraft.

Naval Sea Logistics Center
5450 Carlisle Pike
PO Box 2060, Code 44
Mechanicsburg PA 17055-0795
(717) 790-2953 (Ships & Submarines)
DSN: 430-2953
(717) 790-2031 (Avionics)
DSN: 430-2031

Naval Aviation Logistics Data Analysis System (NALDA): NALDA contains
data similar to the 3M Avionics database.

Naval Aviation Maintenance Office
NAVAIR Air Station, Code 424
Patuxent River MD 20670
(800) 624-6621
(301) 863-4454
DSN: 326-4454

Marine Corps Integrated Maintenance Management System (MIMMS): MIMMS
contains maintenance information at all levels for all types of equipment and
assemblies used in Marine Corps vehicles and aircraft.

Headquarters, US Marine Corps, HQBN
Code LPP-3
Washington DC 20380-0001
(703) 696-1060
DSN: 226-1060

3.0 Army Databases

Troop Support Sample Data Collection (TSSDC): TSSDC is a sample data
collection system which contains maintenance times, maintenance actions and
operating hours of various equipment.

US Army Aviation Troop Command
Attn: AMSAT-I-MDC
4300 Goodfellow Blvd.
St Louis MO 63120-1798
(314) 263-2734
DSN: 693-2734

Work Order Logistics File (WOLF): WOLF is a maintenance database containing
repair part consumption data on fielded systems.

Commander
USAMC Materiel Readiness Support Activity
Attn: AMXMD-RA
Lexington KY 40511-5101
(606) 293-4110
DSN: 745-4110

Reliability, Availability, Maintainability and Logistics Data Base (RAM/LOG):
RAM/LOG contains testing data on Aircraft.

US Army Aviation Troop Command
4300 Goodfellow Blvd
St Louis MO 63120-1798
(314) 263-1791
DSN: 693-1791

USAMC Materiel Readiness Support Activity Deficiency Reporting System

This system tracks equipment and component deficiencies for all equipments.

Commander
USAMC Materiel Readiness Support Activity
ATTN: AMXMD-RS
Lexington KY 40511-5101
(606) 293-3577
DSN: 745-3577

4.0 Other Government Databases

Reliability Analysis Center (RAC): RAC is a Department of Defense Information
Analysis Center sponsored by the Defense Technical Information Center, managed
by the Rome Laboratory, and currently operated by IIT Research Institute (IITRI).
RAC is chartered to collect, analyze and disseminate reliability information
pertaining to electronic systems and parts used therein. The present scope
includes integrated circuits, hybrids, discrete semiconductors, microwave devices,
opto-electronics and nonelectronic parts employed in military, space and
commercial applications.

Data is collected on a continuous basis from a broad range of sources, including
testing laboratories, device and equipment manufacturers, government laboratories
and equipment users (government and non-government). Automatic distribution
lists, voluntary data submittals and field failure reporting systems supplement an
intensive data solicitation program.

Reliability data and analysis documents covering most of the device types
mentioned above are available from the RAC. Also, RAC provides reliability
consulting, training, technical and bibliographic inquiry services.

For further technical assistance and information on available RAC Services,
contact:

Reliability Analysis Center
201 Mill Street
Rome NY 13440-6916
Technical Inquiries: (315) 337-9933
Non-technical Inquiries: (315) 337-0900
DSN: 587-4151

All Other Requests Should Be Directed To:

Rome Laboratory
ERSS/Duane A. Gilmour
Griffiss AFB NY 13441-5700
Telephone: (315) 330-2660
DSN: 587-2660

Government Industry Data Exchange Program (GIDEP): The GIDEP program is
a cooperative activity between government and industry participants for the
purpose of compiling and exchanging technical data. It provides an on-line menu
driven means of searching for desired information. Table 8-1 summarizes several
separate GIDEP data banks which contain R&M related information.

Table 8-1: GIDEP Data Bank Summary

Data Bank                        Content

Engineering                      Test reports, nonstandard part justification data,
                                 failure analysis data, manufacturing processes data.

Reliability and Maintainability  Failure mode and replacement rate data on parts;
                                 reports on theories, methods, techniques and
                                 procedures related to reliability and maintainability
                                 practices.

Failure Experience               Failure information generated on significant
                                 problems on parts, processes, materials, etc.
                                 Includes ALERTS and failure analysis information.

GIDEP provides special services such as the ALERT system which notifies all
participants of significant problem areas and the Urgent Data Request System
which allows all participants to be queried for information to solve a specific problem.
The current information found on-line is usually a brief summary of a report or
collected data which provides a reference for further detailed information found on
microfilm; however, GIDEP is working on a new system which will provide full text
reports and ALERTS on-line.

GIDEP Operations Center
Corona CA 91720-5000
(714) 273-4677
DSN: 933-4677

5.0 Electronic Bulletin Boards

DOD Field Failure Return Program (FFRP) Reliability Bulletin Board: This
Bulletin Board provides information concerning the DOD FFRP program as well as
providing a vehicle for both commercial and government users to exchange ideas
and information on component and system problems.

Reliability Analysis Center
201 Mill Street
Rome NY 13440-6916
(315) 339-7120, Access
(315) 339-7043, Questions
DSN: 587-4151

Technical Data: 1200 Baud or less, 8 data bits, no parity, 1 stop bit

DESC Engineering Standardization Bulletin Board: This service provides
information on standard military drawings (SMD) parts as well as information on
MIL-M-38510 microcircuits. Examples include downloadable self-extracting files of
standard military drawing microcircuits (MIL-BUL-103) and MIL-STD-1562, a listing
of standard microcircuits cross-referenced to commercial part numbers. Many files
are available in both ASCII text format and formats compatible with popular
commercial data base programs.

Defense Electronics Supply Center
Dayton OH 45444
(513) 296-6046, Access
(513) 296-6879, Questions
DSN: 986-6879

Technical Data: 2400 Baud or less, 8 data bits, no parity, 1 stop bit

IEEE Reliability Society Bulletin Board
Los Angeles Chapter
PO Box 1285
Pacific Palisades CA 90272
(818) 768-7644, Access
(213) 454-1667, Questions

Technical Data: 2400 Baud or less, 8 data bits, no parity, 1 stop bit

Statistics Applications Board System
Statistical Applications Institute
(316) 265-3036

Technical Data: 1200 - 2400 Baud, 8 data bits, no parity, 1 stop bit

Appendix 9
Reliability and Maintainability
Education Sources

Appendix 10
R&M Specifications, Standards,
Handbooks and Rome Laboratory
Technical Reports


1.0 Specifications, Standards and Handbooks

This appendix provides a summary of military documents related to the R&M
discipline. Table 10-1 lists reliability standards and handbooks along with an
abbreviation to cross-reference the custodial agencies which are listed in Table 10-
3. Table 10-2 lists maintainability standards and handbooks along with
abbreviations of custodial agencies which are listed in Table 10-3. Table 10-4 lists
other R&M related standards, specifications, pamphlets and regulations.
Department of Defense Directives and Instructions may be obtained from the
National Technical Information Service at the address shown at the bottom of this
page. Copies of military specifications, standards and handbooks may be ordered
from:

Standardization Document Order Desk
700 Robbins Ave.
Building 4, Section D
Philadelphia, PA 19111-5094
(215) 697-2667, -2179

2.0 Rome Laboratory Technical Reports

Table 10-5 summarizes Rome Laboratory (formerly RADC) Technical Reports
related to R&M design. Documents with a prefix of "A" in the AD number may be
ordered by the general public from the National Technical Information Service. All
others are available to DoD contractors from the Defense Technical Information
Center.

National Technical Information Service (NTIS)
Department of Commerce
5285 Port Royal Road
Springfield, VA 22161-2171
(703) 487-4650

Defense Technical Information Center
DTIC-FDAC
Cameron Station, Bldg. 5
Alexandria, VA 22304-6145
(703) 274-7633   DSN: 284-7633


Table 10-1: Reliability Standards and Handbooks

Table 10-2: Maintainability Standards and Handbooks

Table 10-3: Custodial Agencies for R&M Documents

Table 10-4: Other R&M Related Standards, Specifications, Pamphlets and Regulations

Document                  Date        Title

MIL-STD-454M, Notice 3    30 Oct 91   Standard General Requirements for Electronic
                                      Equipment

MIL-STD-883D              16 Nov 91   Test Methods and Procedures for Microcircuits

MIL-STD-965A              13 Dec 85   Parts Control Program

MIL-STD-1309D             12 Feb 92   Definition of Terms for Testing Measurement and
                                      Diagnostics

MIL-STD-1388/1A, Notice 3 28 Mar 91   Logistics Support Analysis

MIL-STD-1388/2B           28 Mar 90   Logistics Support Analysis Record, DoD
                                      Requirements for a

MIL-STD-1547A             1 Dec 87    Electronic Parts, Materials and Processes for
                                      Space and Launch Vehicles

MIL-STD-1562W             25 Sep 91   List of Standard Microcircuits

MIL-BUL-103J              31 Oct 91   List of Standardized Military Drawings (SMDs)

MIL-STD-2165              26 Jan 85   Testability Program for Electronic Systems and
                                      Equipment

MIL-E-5400T               14 May 90   Electronic Equipment, Aerospace, General
                                      Specification for

MIL-M-38510J              15 Nov 91   Microcircuits, General Specification for

MIL-H-38534               22 Aug 90   Hybrid Microcircuits, General Specification for

MIL-I-38535A              29 Nov 91   Integrated Circuits (Microcircuits) Manufacturing,
                                      General Specification for

MIL-STD-1772B             22 Aug 90   Hybrid Microcircuit, General Specification for

MIL-S-19500H              30 Apr 90   Semiconductor Devices, General Specification for
  Supplement 1            28 Sep 90
  Amendment 2             30 Jul 91

ESD-TR-85-148             Mar 85      Derating Application of Parts for ESD System
                                      Development

RELI                      24 Apr 87   DoD Reliability Standardization Document Program
                                      Plan, Revision 4

MNTY                      Dec 89      DoD Maintainability Standardization Document
                                      Program Plan, Revision 3

MIL-HDBK-H108             29 Apr 60   Sampling Procedures and Tables for Life &
                                      Reliability Testing (Based on Exponential
                                      Distribution)

MIL-HDBK-978B             1 Sep 89    NASA Parts Application Handbook

DoD Dir. 5000.1           23 Feb 91   Defense Acquisition

DoD Inst. 5000.2          23 Feb 91   Defense Acquisition Management Policies and
                                      Procedures

MIL-STD-810E, Notice 1    9 Feb 90    Environmental Test Methods and Engineering
                                      Guidelines


Table 10-5: Rome Laboratory Reliability & Maintainability Technical Reports
RL-TR AD No. Title
RL-TR-92-95 ADB164722 Signal Processing Systems Packaging - 1
Apr 1992

RL-TR-92-96 ADB165167 Signal Processing Systems Packaging - 2


Apr 1992

RL-TR-91-29 ADA233855 A Rome Laboratory Guide to Basic Training in TQM


Mar 1991 Analysis Techniques

RL-TR-91-39 ADA236585 Reliability Design for Fault Tolerant Power Supplies


Apr 1991

RL-TR-91-48 ADA235354 Measuring the Quality of Knowledge Work

RL-TR-91-87 ADA236148 A Survey of Reliability, Maintainability,


Apr 1991 Supportability, and Testability Software Tools

RL-TR-91-121 ADB157688 Electronic Equipment Readiness Testing Marginal


Jul 1991 Checking

RL-TR-91-122 ADB156175 Reliability Analysis of an Ultra Lightweight Mirror


Jun 1991

RL-TR-91-155 ADA241476 Computer Aided Assessment of Reliability Using


Jul 1991 Finite Element Methods

RL-TR-91-180 ADA2418621 Analysis and Demonstration of Diagnostic


Aug 1991 Performance in Modern Electronic Systems

RL-TR-91-200 ADA241865 Automated Testability Decision Tool


Sept 1991

RL-TR-91-220 ADB159584 Integration of Simulated and Measured Vibration


Sept 1991 Response of Microelectronics

RL-TR-91-251 ADB160138 Reliability Assessment of Wafer Scale Integration


Oct 1991 Using Finite Element Analysis

RL-TR-91-300 ADA245735 Evaluation of Quantitative Environmental Stress


Nov 1991 Screening (ESS) Methods

RL-TR-91-305 ADA242594 Total Quality Management (TQM), An Overview


Sept 1991

RL-TR-91-353 ADA247192 SMART BIT/TSMD Integration


Dec 1991

RL-TR-91-402 ADA251921 Mission/Maintenance/Cycling Effects of Reliability


Dec 1991


RADC-TR AD No. Title


RADC-TR-90-31 ADA222733 A Contractor Program Manager's Testability
Diagnostics Guide

RADC-TR-90-64 ADA221325 Personal Computer (PC) Thermal Analyzer

RADC-TR-90-72 ADA223647 Reliability Analysis Assessment of Advanced


Technologies

RADC-TR-90-109 Integration of Sneak Analysis with Design


Vol. I ADA226902
Vol. II ADA226820

RADC-TR-90-120 ADA226820 Reliability/Maintainability/Logistics Support Analysis


Computer Aided Tailoring Software Program (R/M/L
CATSOP)

RADC-TR-90-239 ADA230067 Testability/Diagnostics Design Encyclopedia

RADC-TR-90-269 ADB150948 Quantitative Reliability Growth Factors for ESS

RADC-TR-89-45 ADA208917 A Government Program Manager's


Testability/Diagnostics Guide

RADC-TR-89-160 ADB138156L Environmental Extreme Recorder

RADC-TR-89-165 ADA215298 RADC Fault Tolerant System Reliability Evaluation


Facility

RADC-TR-89-209 Computer-Aided Design for Built-in-Test


Vol. I ADA215737 (CADBIT) - Technical Issues
Vol. II ADA215738 (CADBIT) - BIT Library
Vol. III ADA215739 (CADBIT) - Software Specification

RADC-TR-89-223 ADA215275 Sneak Circuit Analysis for the Common Man

RADC-TR-89-276 ADB140924L Dormant Missile Test Effectiveness

RADC-TR-89-277 ADB141826L SMART BIT-2

RADC-TR-89-281 ADA216907 Reliability Assessment Using Finite Element


Techniques

RADC-TR-89-299 Reliability and Maintainability Operational


Vol. I ADB141960L Parameter Translation II
Vol. II ADB141961L

RADC-TR-89-363 ADA219941 FASTER: The Fault Tolerant Architecture


Simulation Tool for Evaluating Reliability,
Introduction and Application

RADC-TR-88-13 ADB122629L VHSIC Impact on System Reliability

RADC-TR-88-69
Vol. I ADA200204 R/M/T Design for Fault Tolerance, Program
Manager's Guide
Vol. II ADA215531 R/M/T Design for Fault Tolerance, Design
Implementation Guide

RADC-TR-88-72 ADA193759 Reliability Assessment of Surface Mount


Technology

RADC-TR-88-97 ADA200529 Reliability Prediction Models for Discrete


Semiconductor Devices

RADC-TR-88-110 ADA202704 Reliability/Maintainability/Testability Design for


Dormancy

RADC-TR-88-118 ADA201346 Operational and Logistics Impact on System


Readiness

RADC-TR-88-124 ADA201946 Impact of Fiber Optics on System


Reliability/Maintainability

RADC-TR-88-211 ADA205346 Testability/Diagnostics Encyclopedia Program


(Part l)

RADC-TR-88-304
Vol. I, Part A ADB132720L Reliability Design Criteria for High Power Tubes
Vol. II, Part B ADB132721L Review of Tube and Tube Related Technology

RADC-TM-87-11 ADA189472 Availability Equations For Redundant Systems,


Both Single and Multiple Repair

RADC-TR-87-13 ADB119216L Maintenance Concepts for VHSIC

RADC-TR-87-55 ADA183142 Predictors of Organizational-Level Testability


Attributes

RADC-TR-87-92 ADB117765L Large Scale Memory Error Detection and Correction

RADC-TR-87-177 ADA189488 Reliability Analyses of a Surface Mounted Package


Using Finite Element Simulation

RADC-TR-87- 225 ADA193788 Improved Readiness Thru Environmental Stress


Screening

RADC-TR-86-138 ADA174333 RADC Guide to Environmental Stress Screening

RADC-TR-86-148 ADA176128 Reliability Growth Prediction

RADC-TR-86-149 ADA176847 Environmental Stress Screening

RADC-TR-86-195 Tools For Integrated Diagnostics


Vol. I ADB110761
Vol. II ADB111438L

RADC-TR-86-241 ADA182335 Built-In-Test Verification Techniques

RADC-TR-85-66 ADA157242 Study and Investigation to Update the Nonelectronic


Reliability Notebook

RADC-TR-85-91 ADA158843 Impact of Nonoperating Periods on Equipment


Reliability

RADC-TR-85-148 ADB098377L Smart BIT

RADC-TR-85-150 ADA162617 A Rationale and Approach for Defining and


Structuring Testability Requirements

RADC-TR-85-194 ADA163900 RADC Nonelectronic Reliability Notebook

RADC-TR-85-228
Vol. I ADA165231 Impact of Hardware/Software Faults on System
Reliability - Study Results
Vol. II ADA165232 Procedures for Use of Methodology

RADC-TR-85-229 ADA164747 Reliability Prediction for Spacecraft

RADC-TR-85-268 ADA167959 Prediction and Analysis of Testability Attributes:


Organizational Level Testability Prediction

RL-TR-84-20 ADA141232 Reliability Growth Testing Effectiveness

RADC-TR-84-25 Reliability/Maintainability Operational Parameter


Vol. I ADB087426 Translation
Vol. II ADB087507L

RADC-TR-84-83 ADA145971 Ballpark Reliability Estimation Techniques

RADC-TR-84-100 ADB086478L Thermal Stress Analysis of Integrated Circuits


Using Finite Element Methods

RADC-TR-84-165 ADA149684 Maintainability Time Standards for Electronic
Equipment

RADC-TR-84-182 ADA153268 VLSI Device Reliability Models

RADC-TR-84-203 ADA150694 Artificial Intelligence Applications to Testability

RADC-TR-84-244 ADA154161 Automated FMEA Techniques

RADC-TR-84-254 ADA153744 Reliability Derating Procedures

RADC-TR-84-268 ADA153761 Prediction of Scheduled and Preventive


Maintenance Workload

RADC-TR-83-2 ADA127546 Study of Causes of Unnecessary Removals of


Avionic Equipment

RADC-TR-83-4 ADA126167 Analytical Procedures for Testability

RADC-TR-83-13 ADB075924L Testability Task Traceability

RADC-TR-83-29 Reliability, Maintainability and Life Cycle Costs


Vol. I ADA129596 Effects of Using Commercial-Off-the-Shelf
Vol. II ADA129597 Equipment

RADC-TR-83-36 ADA129438 Fault Tolerance, Reliability and Testability of


Distributed Systems

RADC-TR-83-49 ADA130465 Guide to Government Reliability, Maintainability and


Quality Assurance Organizations

RADC-TR-83-72 ADA13158 The Evolution and Practical Applications of Failure


Modes and Effects Analyses

RADC-TR-83-85 Reliability Programs for Nonelectronic Parts


Vol. I ADA133624
Vol. II ADA133625

RADC-TR-83-108 ADA135705 Reliability Modeling of Critical Electronic Devices

RADC-TR-83-172 ADB077240L ORACLE and Predictor Computerized Reliability


Prediction Programs

RADC-TR-83-180 ADA138576 Condition Monitoring Techniques for


Electromechanical Equipment Used in AF Ground
C3I Systems

RADC-TR-83-257 ADA149683 Computer Aided Testability Design Analysis

RADC-TR-83-291 ADA141147 Advanced Applications of the Printed Circuit Board
Testability Design and Rating System

RADC-TR-83-316 ADB083630L Hardware/Software Tradeoffs for Test Systems

RADC-TR-82-172 ADA118839 RADC Thermal Guide for Reliability Engineers

RADC-TR-82-179 ADA118479 Sneak Analysis Application Guidelines

RADC-TR-81-106 ADA108150 "Bayesian" Reliability Tests Made Practical

RADC-TR-80-30 ADA083009 Bayesian Reliability Theory for Repairable


Equipment

RADC-TR-79-200 ADA073299 Reliability and Maintainability Management Manual

Appendix 11
Acronyms


µ Repair Rate (1/Mean- AFPRO Air Force Plant


Corrective-Maintenance Representative Office
Time) AFR Air Force Regulation
λ Failure Rate (1/Mean-Time- AFSC Air Force Systems
Between-Failure) Command
α Producer's Risk AFTO Air Force Technical Order
β Consumer's Risk AGS Ambiguity Group Size
θc-a Case to Ambient Thermal AI Artificial Intelligence
Resistance AJ Antijam
θj-c Junction to Case Thermal ALC Air Logistics Center
Resistance ALU Arithmetic Logic Unit
θj-a Junction to Ambient AMGS Automatic Microcode
Thermal Resistance Generation System
θ Observed Point Estimate AMSDL Acquisition Management
Mean-Time-Between- Systems and Data Control
Failure List
θ0 Upper Test (Design Goal) AP Array Processor
Mean-Time-Between- APD Avalanche Photo Diode
Failure APTE Automatic Programmed
θ1 Lower Test (Unacceptable) Test Equipment
Mean-Time-Between- APU Auxiliary Power Unit
Failure
ARM Antiradiation Missile
θp Predicted Mean-Time-
ASA Advanced Systems
Between-Failure
Architecture
Ai Inherent Availability
ASC Aeronautical Systems
Ao Operational Availability Center
AAA Allocations Assessment and ASIC Application Specific
Analysis (Report) Integrated Circuit
ACO Administrative Contracting ASTM American Society for
Officer Testing and Materials
ADAS Architecture Design and ATC Air Training Command
Assessment Systems
ATE Automatic/Automated Test
ADM Advanced Development Equipment
Model
ATF Advanced Tactical Fighter
ADP Automatic Data Processing ATG Automatic Test Generation
ADPE Automatic Data Processing ATP Acceptance Test Procedure
Equipment
ATTD Advanced Technology
AFAE Air Force Acquisition Transition Demonstration
Executive
AVIP Avionics Integrity Program
AFALC Air Force Acquisition
Logistics Centers b BIT
AFCC Air Force Communication BAFO Best and Final Offer
Command BB, B/B Brass Board
AFFTC Air Force Flight Test Center BCC Block Check-Sum
AFLC Air Force Logistics Character
Command BCS Bench Check Serviceable
AFMC Air Force Materiel BCWP Budget Cost of Work
Command Performed
AFOTEC Air Force Operational Test
and Evaluation Center

BCWS Budget Cost of Work CDRL Contract Data


Scheduled Requirements List
BEA Budget Estimate Agreement CFAR Constant False Alarm Rate
BES Budget Estimate CFE Contractor Furnished
Submission Equipment
BIMOS Bipolar/Metal Oxide CFSR Contract Fund Status
Semiconductor Report
BIST Built-in Self Test CGA Configurable Gate Array
BIT Built-In-Test CI Configuration Item
BITE Built-In-Test Equipment CIM Computer Integrated
BIU Bus Interface Unit Manufacturing
BJT Bipolar Junction Transistor CINC Commander-in-Chief
BLER Block Error Rate CISC Complex Instruction Set
BPPBS Biennial Planning, Computer
Programming, and CIU Control Interface Unit
Budgeting System CLCC Ceramic Leaded Chip
B/S or bps Bits Per Second Carrier
C Centigrade CLIN Contract Line Item Number
C-ROM Control Read Only Memory CM Centimeter
C3 Command, Control and CM Configuration Manager or
Communications Management
C3CM Command, Control, CML Current Mode Logic
Communications and CMOS Complementary Metal
Countermeasures Oxide Semiconductor
C3I Command, Control, CND Can Not Duplicate
Communications CNI Communications,
Intelligence Navigation, and
CA Contracting Activity Identification
CAD Computer Aided Design CO Contracting Officer
CADBIT Computer Aided Design for CODEC Coder Decoder
Built-In Test COMM Communications
CAE Computer Aided COMSEC Communications Security
Engineering COPS Complex Operations Per
CALS Computer Aided Acquisition Second
Logistics & Support CPCI Computer Program
CAM Content Addressable Configuration Item
Memory CPFF Cost-Plus-Fixed-Fee
CAS Column Address Strobe CPIF Cost-Plus-Incentive-Fee
CASS Computer Aided Schematic CPM Control Processor Module
System
CPU Central Processing Unit
CAT Computer Aided Test
CRC Cyclic Redundance Check
CB Chip Boundary
CS Chip Select
CCB Capacitive Coupled Bit
CSC Computer Software
CCB Configuration Control Board Component
CCC Ceramic Chip Carrier
CCD Charged Coupled Device
CDF Cumulative Density
Function
CDIP Ceramic Dual In-Line
Package
CDR Critical Design Review

CSCI Computer Software Ea Activation Energy in


Configuration Item Electron Volts
CSP Common Signal Processor Eox Electronic Field Strength in
CSR Control Status Register Oxide
CTE Coefficient of Thermal EAROM Electrically Alterable Read
Expansion Only Memory
CTR Current Transfer Ratio ECC Error Checking and
CV Capacitance-Voltage Correction
dB Decibel ECCM Electronic Counter
Countermeasures
dc Direct Current
ECL Emitter Coupled Logic
D/A Digital-to-Analog
ECM Electronic
DAB Defense Acquisition Board Countermeasures
DC Duty Cycle
ECP Engineering Change
DECTED Double Error Correcting, Proposal
Triple Error Detecting ECU Environmental Control Unit
DED Double Error Detection EDA Electronic Design
DEM/VAL Demonstration and Automation
Validation EDAC Error Detection and
DESC Defense Electronics Supply Correction
Center EDM Engineering Development
DID Data Item Description Model
DIP Dual In-Line Package EEPROM Electrically Erasable
DISC Defense Industrial Supply Programmable Read Only
Center Memory
DLA Defense Logistics Agency EGC Electronic Gate Count
D Level Depot Level EGS Electronic Ground System
DID Data Item Description EGSE Electronic Ground Support
DMR Defense Management Equipment
Review EM Electromigration
DOD Department of Defense EMC Electromagnetic
DOS Disk Operating System Compatibility
DOX Design of Experiments EMD Engineering and
DP Data Processor Manufacturing Development
DPA Destructive Physical EMI Electromagnetic Interface
Analysis EMP Electronic Magnetic Pulse
DRAM Dynamic Random Access EO Electro-optical
Memory EOS Electrical Overstress
DRS Deficiency Reporting EP Electrical Parameter
System EPROM Erasable Programmable
DSP Digital Signal Processing Read Only Memory
DT&E Development Test & ER Part Established Reliability Part
Evaluation ERC Electrical Rule Check
DTIC Defense Technical ESC Electronic System Center
Information Center ESD Electrostatic Discharge
DUT Device Under Test ESM Electronics Support
DoD Department of Defense Measure
DoD-ADL Department of Defense
Authorized Data List
eV Electron Volt

ESS  Environmental Stress Screening
ETE  Electronic or External Test Equipment
EW  Electronic Warfare
EXP  Exponent
FA  False Alarm
F/W  Firmware
FAB  Fabrication
FAR  False Alarm Rate
FAR  Federal Acquisition Regulation
FARR  Forward Area Alerting Radar Receiver
FAT  First Article Testing
FBT  Functional Board Test
FCA  Functional Configuration Audit
FD  Fault Detection
FDI  Fault Detection and Isolation
FET  Field Effect Transistor
FFD  Fraction of Faults Detected
FFI  Fraction of Faults Isolated
FFP  Firm Fixed Price
FFRP  Field Failure Return Program
FFT  Fast Fourier Transform
FFTAU  Fast Fourier Transform Arithmetic Unit
FFTCU  Fast Fourier Transform Control Unit
FI  Fault Isolation
FIFO  First In First Out
FILO  First In Last Out
FIR  Fault Isolation Resolution
FITS  Failures Per 10^9 Hours
FIT  Fault Isolation Test
FLIR  Forward Looking Infrared
FLOTOX  Floating Gate Tunnel Oxide
FMC  Full Mission Capability
FMEA  Failure Modes and Effects Analysis
FMECA  Failure Modes, Effects and Criticality Analysis
FOM  Figure of Merit
FOV  Field of View
FP  Floating Point
FPA  Focal Plane Array
FPAP  Floating Point Array Processor
FPLA  Field Programmable Logic Array
FPMFH  Failures Per Million Flight Hours
FPMH  Failures Per Million Hours
FPPE  Floating Point Processing Element
FQR  Formal Qualification Review
FQT  Final Qualification Test
FR  Failure Rate
FRACAS  Failure Reporting and Corrective Action System
FRB  Failure Review Board
FS  Full Scale
FSD  Full Scale Development
FSED  Full Scale Engineering Development
FT  Fourier Transform
FTTL  Fast Transistor-Transistor Logic
FY  Fiscal Year
GAO  General Accounting Office
GD  Global Defect
GFE  Government Furnished Equipment
GFP  Government Furnished Property
GIDEP  Government Industry Data Exchange Program
GIMADS  Generic Integrated Maintenance Diagnostic
GM  Global Memory
GOCO  Government Owned Contractor Operated
GOMAC  Government Microcircuit Applications Conference
GSE  Ground Support Equipment
GSPA  Generic Signal Processor Architecture
GaAs  Gallium Arsenide
Hz  Hertz
HDL  Hardware Description Language
HDS  Hierarchical Design System
HEMT  High Electron Mobility Transistor
HFTA  Hardware Fault Tree Analysis
HHDL  Hierarchical Hardware Description Language
HMOS  High Performance Metal Oxide Semiconductor
HOL  Higher Order Language
H/W  Hardware
HWCI  Hardware Configuration Item
I  Current
Id  Drain Current
Isub  Substrate Current
ID  Integrated Diagnostics
IF  Interface
IAC  Information Analysis Center
IAW  In Accordance With
IC  Integrated Circuit
ICD  Interface Control Document
ICNIA  Integrated Communications, Navigation and Identification Avionics
ICT  In Circuit Testing
ICWG  Interface Control Working Group
IDAS  Integrated Design Automation System
IDHS  Intelligence Data Handling System
IEEE  Institute of Electrical and Electronics Engineers
IES  Institute of Environmental Sciences
IFB  Invitation for Bid
IFF  Identification Friend or Foe
IFFT  Inverse Fast Fourier Transform
IG  Inspector General
I Level  Intermediate Level
ILD  Injection Laser Diode
ILS  Integrated Logistics Support
ILSM  Integrated Logistics Support Manager
IMPATT  Impact Avalanche and Transit Time
INEWS  Integrated Electronic Warfare System
I/O  Input/Output
IOC  Initial Operational Capability
IOT&E  Initial Operational Test & Evaluation
IR&D  Independent Research & Development
IRPS  International Reliability Physics Symposium
ISA  Instruction Set Architecture
ISPS  Instruction Set Processor Specification
ITAR  International Traffic In Arms Regulation
ITM  Integrated Test and Maintenance
IWSM  Integrated Weapons Systems Management
J  Current Density
JAN  Joint Army Navy
JCS  Joint Chiefs of Staff
JEDEC  Joint Electron Device Engineering Council
JFET  Junction Field Effect Transistor
JTAG  Joint Test Action Group
K  Thousand
k  Boltzmann's Constant (8.62 x 10^-5 electron volts/°Kelvin)
KOPS  Thousands of Operations Per Second
LAN  Local Area Network
LCC  Life Cycle Cost
LCC  Leadless Chip Carrier
LCCC  Leadless Ceramic Chip Carrier
LED  Light Emitting Diode
LFR  Launch and Flight Reliability
LHR  Low Hop Rate
LIF  Low Insertion Force
LIFO  Last In First Out
LISP  List Processing
LRM  Line Replaceable Module
LRU  Line Replaceable Unit
LSA  Logistics Support Analysis
LSAR  Logistics Support Analysis Record
LSB  Least Significant Bit
LSE  Lead System Engineer
LSI  Large Scale Integration
LSSD  Level Sensitive Scan Design
LSTTL  Low Power Schottky Transistor-Transistor Logic
LUT  Look Up Table
mm  Millimeter
mA  Milliampere
ms  Millisecond
mW  Milliwatt
M  Maintainability
m  Million
Mb  Megabit
Mct  Mean Corrective Maintenance Time
Mil  1000th of an Inch
M-MM  Mean Maintenance Manhours
MAC  Multiplier Accumulator Chip
MAJCOM  Major Command
MAP  Modular Avionics Package
MBPS  Million Bits Per Second
MCCR  Mission Critical Computer Resources
MCFOS  Military Computer Family Operating System
MCOPS  Million Complex Operations Per Second
MCTL  Military Critical Technology List
MCU  Microcontrol Unit
MD  Maintainability Demonstration
MDCS  Maintenance Data Collection System
MDM  Multiplexer/Demultiplexer
MDR  Microcircuit Device Reliability
MDT  Mean Down Time
MELF  Metal Electrode Face
MENS  Mission Element Needs Statement
MENS  Mission Equipment Needs Statement
MFLOPS  Million Floating Point Operations Per Second
MHz  Megahertz
MIL-STD  Military Standard
MIMIC  Microwave Millimeter Wave Monolithic Integrated Circuit
MIN  Maintenance Interface Network
MIPS  Million Instructions Per Second
MISD  Multiple Instructions Single Data
MLB  Multilayer Board
MLIPS  Million Logic Inferences/Instructions Per Second
MMBF  Mean Miles Between Failure
MMD  Mean Mission Duration
MMH/FH  Maintenance Manhours Per Flight Hour
MMH/PH  Mean Manhours Per Possessed Hour
MMIC  Monolithic Microwave Integrated Circuit
MMM  Mass Memory Module
MMPS  Million Multiplies Per Second
MMR  Multimode Radar
MMS  Mass Memory Superchip
MMW  Millimeter Wave
MN  Maintenance Node
MNN  Maintenance Network Node
MNS  Mission Need Statement
MOA  Memorandum of Agreement
MODEM  Modulator Demodulator
MOPS  Million Operations Per Second
MOS  Metal Oxide Semiconductor
MOSFET  Metal Oxide Semiconductor Field Effect Transistor
MP  Maintenance Processor
MPCAG  Military Parts Control Advisory Group
MRAP  Microcircuit Reliability Assessment Program
MSB  Most Significant Bit
MSI  Medium Scale Integration
MTBCF  Mean Time Between Critical Failures
MTBD  Mean Time Between Demand
MTBDE  Mean Time Between Downing Events
MTBF  Mean Time Between Failure
MTBFF  Mean Time Between Functional Failure
MTBM-IND  Mean Time Between Maintenance-Induced (Type 2 Failure)
MTBM-INH  Mean Time Between Maintenance-Inherent (Type 1 Failure)
MTBM-ND  Mean Time Between Maintenance-No Defect (Type 6 Failure)
MTBM-P  Mean Time Between Maintenance-Preventive
MTBM-TOT  Mean Time Between Maintenance-Total
MTBMA  Mean Time Between Maintenance Actions
MTBR  Mean Time Between Removals
MTBUMA  Mean Time Between Unscheduled Maintenance Actions
MTE  Multipurpose Test Equipment
MTE  Minimal Test Equipment
MTI  Moving Target Indicator
MTTE  Mean Time to Error
MTTF  Mean Time To Failure
MUX  Multiplexer
MV  Mega Volt (Million Volt)
MWPS  Million Words Per Second
NDI  Nondevelopmental Items
NDT  Nondestructive Testing
NMOS  N-Channel Metal Oxide Semiconductor
ns  Nanosecond
O-Level  Organizational Level
O&M  Operation and Maintenance
OMB  Office of Management and Budget
OPR  Office of Primary Responsibility
OPS  Operations Per Second
ORD  Operational Requirements Document
OROM  Optical Read Only Memory
OSD  Office of the Secretary of Defense
OT&E  Operational Test & Evaluation
OTS  Off-The-Shelf
P  Power
Poly  Polycrystalline Silicon
PtSi  Platinum Silicide
PAL  Programmable Array Logic
PAT  Programmable Alarm Thresholds
PC  Printed Circuit
PCA  Physical Configuration Audit
PCB  Printed Circuit Board
PCO  Procuring Contracting Officer
PD  Power Dissipation
PDF  Probability Density Function
PDL  Program Design Language
PDR  Preliminary Design Review
PEM  Program Element Monitor
PGA  Pin Grid Array
PIN  Positive Intrinsic Negative
PLA  Programmable Logic Array
PLCC  Plastic Leadless Chip Carrier
PLD  Programmable Logic Device
PM  Program Manager
PMD  Program Management Directive
PMOS  P-Channel Metal Oxide Semiconductor
PMP  Program Management Plan
PMP  Parts, Materials and Processes
PMR  Program Management Review
PMRT  Program Management Responsibility Transfer
PPM  Parts Per Million
PPSL  Preferred Parts Selection List
PO  Program Office
PROM  Programmable Read Only Memory
PRR  Production Readiness Review
PRST  Probability Ratio Sequential Test
PS  Power Supply
PTH  Plated Through Hole
PW  Pulse Width
PWB  Printed Wiring Board
QA  Quality Assurance
QC  Quality Control
QDR  Quality Deficiency Report
QML  Qualified Manufacturers List
QPL  Qualified Parts List
QT&E  Qualification Test and Evaluation
QUMR  Quality Unsatisfactory Material Report
R  Reliability
R&M  Reliability and Maintainability
RAD  Radiation
RAM  Random Access Memory
RAMS  Reliability and Maintainability Symposium
RD  Random Defect
RDGD  Reliability Development Growth Test
RDT  Reliability Demonstration Test
REG  Register
RF  Radio Frequency
RFP  Request for Proposal
RH  Relative Humidity
RISA  Reduced Instruction Set Architecture
RISC  Reduced Instruction Set Computer
RIW  Reliability Improvement Warranty
RL  Rome Laboratory
RMS  Root Mean Square
ROC  Required Operational Capability
ROM  Read Only Memory
ROM  Rough Order of Magnitude
RQT  Reliability Qualification Test
RSA  Rapid Simulation Aids
RSR  Runtime Status Register
RTL  Register Transfer Language
RTOK  Retest Okay
RTQC  Real Time Quality Control
SAF  Secretary of the Air Force
SAR  Synthetic Aperture Radar
SAW  Surface Acoustic Wave
SBIR  Small Business Innovative Research
SC  Space Center
SCA  Sneak Circuit Analysis
SCARLET  Sneak Circuit Analysis Rome Laboratory Engineering Tool
SCD  Specification Control Drawing
SCR  Silicon Control Rectifier
SDI  Strategic Defense Initiative
SDL  System Description Language
SDR  System Design Review
SDS  Structured Design System
SE  Support Equipment
SECDED  Single Error Correction, Double Error Detection
SECDEF  Secretary of Defense
SED  Single Error Detection
SEDS  System Engineering Detailed Schedule
SEM  Standard Electronic Module
SEMP  Systems Engineering Management Plan
SER  Soft Error Rate
SERD  Support Equipment Recommended Data
SEU  Single Event Upset
SIP  Single In-Line Package
SMD  Standard Military Drawing
SMD  Surface Mounted Device
SMT  Surface Mounted Technology
S/N  Signal to Noise Ratio
SOA  Safe Operating Area
SOI  Silicon On Insulator
SOIC  Small Outline Integrated Circuit
SON  Statement of Need
SORD  Systems Operational Requirements Document
SOS  Silicon On Sapphire
SOW  Statement of Work
SPAD  Scratch Pad Memory
SPC  Statistical Process Control
SPO  System Program Office
SQC  Statistical Quality Control
SR  Slew Rate
SRA  Shop Replaceable Assembly
SRD  System Requirement Document
SRAM  Static Random Access Memory
SRAP  Semiconductor Reliability Assessment Program
SRL  Shift Register Latch
SRR  Systems Requirement Review
SRU  Shop Replaceable Unit
SSA  Source Selection Authority
SSAC  Source Selection Advisory Council
SSEB  Source Selection Evaluation Board
SSI  Small Scale Integration
SSP  Source Selection Plan
SSPA  Submicron Signal Processor Architecture
SSR  Software Specification Review
ST  Self Test
STD  Standard
STE  Special Test Equipment
STINFO  Scientific and Technical Information
STV  Steerable Television Set
S/W  Software
t  Time
T  Temperature
Ta  Ambient Temperature
Tc  Case Temperature
Tj  Junction Temperature
Tstg  Storage Temperature
TAC  Tactical Air Command
TBD  To Be Determined
TC  Temperature Coefficient
TCE  Thermal Coefficient of Expansion
TCR  Temperature Coefficient of Resistance
TDDB  Time Dependent Dielectric Breakdown
TDM  Time Division Multiplexing
T&E  Test and Evaluation
TEMP  Test & Evaluation Master Plan
TET  Technical Evaluation Team
TM  Test Modules
TM  Technical Manuals
TMDE  Test Measurement and Diagnostic Equipment
TMP  Test and Maintenance Processor
TO  Technical Orders
TPS  Test Program Set
TPWG  Test Plan Working Group
TQM  Total Quality Management
TRD  Test Requirements Document
TRR  Test Readiness Review
TSMD  Time Stress Measurement Device
TTL  Transistor-Transistor Logic
UHF  Ultra High Frequency
ULSI  Ultra Large Scale Integration
UMF  Universal Matched Filter
UUT  Unit Under Test
UVPROM  Ultra-Violet Programmable Read Only Memory
V  Volt
VCP  Very High Speed Integrated Circuit Communications Processor
VHDL  Very High Speed Integrated Circuit Hardware Description Language
VHSIC  Very High Speed Integrated Circuit
VIM  Very High Speed Integrated Circuit Insertion Module
VLSI  Very Large Scale Integration
VSM  Very High Speed Integrated Circuit Submicron
VSP  Variable Site Parameters
VTB  Very High Speed Integrated Circuit Technology Brassboard
WAM  Window Addressable Memory
WBS  Work Breakdown Structure
WRSK  War Readiness Spares Kit
WSI  Wafer-Scale Integration
WSIC  Wafer-Scale Integrated Circuit
X  Reactance
XCVR  Transceiver
Y  Admittance
Z  Impedance
ZIF  Zero Insertion Force