IEEE Std 982.1™-2005
(Revision of IEEE Std 982.1-1988)

IEEE Standard Dictionary of Measures of the Software Aspects of Dependability

IEEE
3 Park Avenue
New York, NY 10016-5997, USA

8 May 2006
Sponsor
Software Engineering Standards Committee
of the
IEEE Computer Society
_________________________
IEEE is a registered trademark in the U.S. Patent & Trademark Office, owned by the Institute of Electrical and Electronics
Engineers, Incorporated.
No part of this publication may be reproduced in any form, in an electronic retrieval system or otherwise, without the prior
written permission of the publisher.
IEEE Standards documents are developed within the IEEE Societies and the Standards Coordinating Committees of
the IEEE Standards Association (IEEE-SA) Standards Board. The IEEE develops its standards through a consensus
development process, approved by the American National Standards Institute, which brings together volunteers
representing varied viewpoints and interests to achieve the final product. Volunteers are not necessarily members of the
Institute and serve without compensation. While the IEEE administers the process and establishes rules to promote
fairness in the consensus development process, the IEEE does not independently evaluate, test, or verify the accuracy
of any of the information contained in its standards.
Use of an IEEE Standard is wholly voluntary. The IEEE disclaims liability for any personal injury, property or other
damage, of any nature whatsoever, whether special, indirect, consequential, or compensatory, directly or indirectly
resulting from the publication, use of, or reliance upon this, or any other IEEE Standard document.
The IEEE does not warrant or represent the accuracy or content of the material contained herein, and expressly
disclaims any express or implied warranty, including any implied warranty of merchantability or fitness for a specific
purpose, or that the use of the material contained herein is free from patent infringement. IEEE Standards documents
are supplied “AS IS.”
The existence of an IEEE Standard does not imply that there are no other ways to produce, test, measure, purchase,
market, or provide other goods and services related to the scope of the IEEE Standard. Furthermore, the viewpoint
expressed at the time a standard is approved and issued is subject to change brought about through developments in the
state of the art and comments received from users of the standard. Every IEEE Standard is subjected to review at least
every five years for revision or reaffirmation. When a document is more than five years old and has not been
reaffirmed, it is reasonable to conclude that its contents, although still of some value, do not wholly reflect the present
state of the art. Users are cautioned to check to determine that they have the latest edition of any IEEE Standard.
In publishing and making this document available, the IEEE is not suggesting or rendering professional or other
services for, or on behalf of, any person or entity. Nor is the IEEE undertaking to perform any duty owed by any other
person or entity to another. Any person utilizing this, and any other IEEE Standards document, should rely upon the
advice of a competent professional in determining the exercise of reasonable care in any given circumstances.
Interpretations: Occasionally questions may arise regarding the meaning of portions of standards as they relate to
specific applications. When the need for interpretations is brought to the attention of IEEE, the Institute will initiate
action to prepare appropriate responses. Since IEEE Standards represent a consensus of concerned interests, it is
important to ensure that any interpretation has also received the concurrence of a balance of interests. For this reason,
IEEE and the members of its societies and Standards Coordinating Committees are not able to provide an instant
response to interpretation requests except in those cases where the matter has previously received formal consideration.
At lectures, symposia, seminars, or educational courses, an individual presenting information on IEEE standards shall
make it clear that his or her views should be considered the personal views of that individual rather than the formal
position, explanation, or interpretation of the IEEE.
Comments for revision of IEEE Standards are welcome from any interested party, regardless of membership affiliation
with IEEE. Suggestions for changes in documents should be in the form of a proposed change of text, together with
appropriate supporting comments. Comments on standards and requests for interpretations should be addressed to:
NOTE—Attention is called to the possibility that implementation of this standard may require use of subject matter
covered by patent rights. By publication of this standard, no position is taken with respect to the existence or validity of
any patent rights in connection therewith. The IEEE shall not be responsible for identifying patents for which a license
may be required by an IEEE standard or for conducting inquiries into the legal validity or scope of those patents that
are brought to its attention.
Authorization to photocopy portions of any individual standard for internal or personal use is granted by the Institute of
Electrical and Electronics Engineers, Inc., provided that the appropriate fee is paid to Copyright Clearance Center. To
arrange for payment of licensing fee, please contact Copyright Clearance Center, Customer Service, 222 Rosewood
Drive, Danvers, MA 01923 USA; +1 978 750 8400. Permission to photocopy portions of any individual standard for
educational classroom use can also be obtained through the Copyright Clearance Center.
Introduction
This introduction is not part of IEEE Std 982.1-2005, IEEE Standard Dictionary of Measures of the Software Aspects
of Dependability.
Rationale for revision: In accordance with Software Engineering Standards Committee (SESC) policy, the
first standard (this document) is a small document with just a few core measures dealing with reliability,
maintainability, and availability. This will be followed by a second standard that will address safety,
confidentiality, and integrity.
This standard has not been revised since 1988. It was reaffirmed in 1996, but there were significant
negative comments. It has been revised because many of the original measures had the following
undesirable characteristics:
⎯ Unrealistic (i.e., there was naivety about the necessary data, personnel capabilities, and training
to effectively use the measures)
⎯ Did not measure what they purported to measure
⎯ Complex equations that were hard to understand and implement
⎯ Immature (i.e., they were not widely used)
⎯ Little field data to back up claims for benefits
History
This standard is 15 years old, and additional information on the topic has been developed since that time.
This project updated the standard to include such information and to expand the scope of the standard.
Specifically, the project included the following activities:
1) Determined whether the measure should be modified, retained, or deleted, based on the
criteria. The modified and retained measures are identified in this standard by their numbers
from the original IEEE 982.1 standard (i.e., IEEE 982 #X).
2) Added any additional information on the measures developed since 1988.
3) Clarified definition and implementation conventions.
4) Identified and incorporated, where appropriate, new measures that have appeared since
1988.
5) Formulated generic measure classes and categorized the measures into these classes. These
classes are reliability, maintainability, and availability.
Copyright © 2006 IEEE. All rights reserved.
Notice to users
Errata
Errata, if any, for this and all other standards can be accessed at the following URL: http://standards.ieee.org/reading/ieee/updates/errata/index.html. Users are encouraged to check this URL for errata periodically.
Interpretations
Current interpretations can be accessed at the following URL: http://standards.ieee.org/reading/ieee/interp/index.html.
Patents
Attention is called to the possibility that implementation of this standard may require use of subject matter
covered by patent rights. By publication of this standard, no position is taken with respect to the existence
or validity of any patent rights in connection therewith. The IEEE shall not be responsible for identifying
patents or patent applications for which a license may be required to implement an IEEE standard or for
conducting inquiries into the legal validity or scope of those patents that are brought to its attention.
Participants
At the time this standard was completed, the P982 Working Group had the following membership:
The following members of the individual balloting committee voted on this standard. Balloters may have
voted for approval, disapproval, or abstention.
When the IEEE-SA Standards Board approved this standard on 8 November 2005, it had the
following membership:
Steve M. Mills, Chair
Richard H. Hulett, Vice Chair
Don Wright, Past Chair
Judith Gorman, Secretary
*Member Emeritus
Also included are the following nonvoting IEEE-SA Standards Board liaisons:
Don Messina
IEEE Standards Project Editor
Contents
1. Overview
2. Definitions
4.1 Defect density (982 #2) (Fenton and Pfleeger [B5] and Nikora et al. [B17])
4.2 Test coverage index (982 #5) (Binder [B2])
4.3 Requirements compliance (982 #23) (Fischer and Walker [B6])
4.4 Failure rate (982 #31) (Lyu [B12])
5.1 Fault density (982 #1) (Musa [B14] and Nikora and Munson [B16])
5.2 Requirements traceability (982 #7) (Fenton and Pfleeger [B5])
5.3 Mean time to failure (MTTF) (982 #30) (Lyu [B12] and Musa et al. [B15])
6.1 Mean time to repair (MTTR) (Lyu [B12] and Musa et al. [B15])
6.2 Network maintainability (Schneidewind [B19])
Annex A (informative) Analysis of measures in IEEE Software Engineering Collection, IEEE Std 982.1-2005, IEEE Standard Dictionary of Measures to Produce Reliable Software
IEEE Standard Dictionary of
Measures of the Software Aspects of
Dependability
1. Overview
1.1 General
This standard has the following clauses: Clause 1 provides the scope of this standard. Clause 2 provides a
set of definitions. Clause 3 through Clause 5 specify new, modified, and existing measures for reliability,
respectively. Clause 6 and Clause 7 specify new measures for maintainability and availability, respectively.
In addition, there are three informative annexes. Annex A documents the process that was used to
determine whether a measure in the existing standard should be modified, retained, or deleted; this annex
states the criteria that were used to determine whether a measure should be included in the standard,
including new measures. Annex B describes the Software Reliability Engineering Case Study, which
provides a case study example of how to apply selected measures in Clause 3. Annex C is a glossary of
terms taken from The Authoritative Dictionary of IEEE Standards Terms [B8]. Annex D is a bibliography
that provides additional information about measures in Clause 3 through Clause 7. Table 1 summarizes the
categories of measures. Numerical applications of the measures can be found in the reference(s) that are
keyed to each measure.
NOTE—The numbers in brackets correspond to those of the bibliography in Annex D.
1.2 Scope
This standard specifies and classifies measures of the software aspects of dependability. It is an expansion
of the scope of the existing standard; the revision includes the following aspects of dependability:
reliability, availability, and maintainability of software. The applicability of this standard is any software
system; in particular, it applies to mission critical systems, where high reliability, availability, and
maintainability are of utmost importance. These systems include, but are not limited to, systems for
military combat, space missions, air traffic control, network operations, stock exchanges, automatic teller
machines, and airline reservation systems.
1.3 Purpose
This standard provides measures that are applicable for continual self-assessment and improvement of the
software aspects of dependability.
1.4 Compliance
An application of the measures specified herein complies with this standard if Clause 2 through Clause 7
are implemented, with the exception of subclauses identified as either “Experience” or “Tools.” These
subclauses are intended to (1) provide information about application domains for a given measure and
(2) identify the types of tools appropriate for computing that measure.
The measures defined in this dictionary may provide information that can help support certain evaluations
discussed in other IEEE Software Engineering Standards. The following list is meant to be suggestive:
2. Definitions
For the purposes of this document, the following terms and definitions apply. The glossary in Annex C and
The Authoritative Dictionary of IEEE Standards Terms [B8] should be referenced for terms not defined in
this clause.
2.1 defect: A generic term that can refer to either a fault (cause) or a failure (effect). (adapted from
Lyu [B12])
2.2 dependability: Trustworthiness of a computer system such that reliance can be justifiably placed on the
service it delivers. Reliability, availability, and maintainability are aspects of dependability. (adapted from
Lyu [B12])
2.3 maintainability: Speed and ease with which a program can be corrected or changed. (adapted from
Musa [B14])
2.4 measure (noun): (A) The number or symbol assigned to an entity by a mapping from the empirical
world to the formal, relational world in order to characterize an attribute. (adapted from Fenton and
Pfleeger [B5]) (B) The act or process of measuring. (adapted from Webster’s New Collegiate Dictionary
[B23])
2.5 measure (verb): To make a measurement. (adapted from Webster’s New Collegiate Dictionary [B23])
2.6 time: In decreasing order of resolution, CPU execution time, elapsed time (i.e., wall clock time), or
calendar time. (adapted from Musa [B14])
3.1 General
The software reliability model specified in this clause (Schneidewind [B20]) is one of the four models
recommended in ANSI R-013-1992 [B1]. See this reference for the other three recommended models that
could be used in place of the specified model.
3.2.1 Definition
TF (t) is the predicted time for the next Ft failures to occur, when the current time is t.
3.2.2 Variables
TF (t) is the predicted time for the next Ft failures to occur, when the current time is t.
T is the test or operational time interval when time to next failure prediction is made.
3.2.3 Parameters
s is the starting interval for using observed failure data in parameter estimation.
3.2.4 Application
Given a mission duration requirement tm and number of failures Ft, a prediction is made at time t to see
whether TF(t) > tm.
Failure counts by time interval, in the range [1,t], where time can be execution time, clock time, or calendar
time, for estimating α, β, and s, and for obtaining Xs,t.
3.2.7 Experience
Space Shuttle Primary Avionics Software Subsystem, United States Navy Tomahawk cruise missile launch,
readiness validation for Trident nuclear missile, and United States Marine Corps Tactical Systems Support
Activity for distributed system software reliability assessment and prediction (Keller and
Schneidewind [B11]).
3.2.8 Tools
Software reliability modeling and analysis tools. Examples of these tools are described in Appendix A of
Lyu [B12].
3.3.1 Definitions
CF = a × CS² − b × CS + c

CF = d × exp(e × CI)
RF (risk factor) is the attributes of a requirements change that can induce reliability risk, such as memory
space and requirements issues.
“Memory space” is the amount of memory space required to implement a requirements change (i.e., a
requirements change uses memory to the extent that other functions do not have sufficient memory to
operate effectively, and failures occur).
“Requirements issues” is the number of conflicting requirements (i.e., a requirements change conflicts with
another requirements change, such as requirements to increase the search criteria of a website and
simultaneously decrease its search time, with the added software complexity causing failures).
3.3.2 Variables
3.3.3 Parameters
3.3.4 Application
Provide a warning to software managers of impending reliability problems early in the development cycle,
during requirements analysis, by using risk factors to predict cumulative failures and the values of the risk
factor thresholds where reliability would degrade significantly. Thus, software managers could anticipate
problems rather than react to them. In addition, more efficient software management would be possible
because with advance warning of reliability problems, management could better schedule and prioritize
development process activities (e.g., inspections, tests).
CF, CI are the cumulative failure count and cumulative requirements issues count: dimensionless numbers.
CS is the cumulative memory space requirement count in suitable units (e.g., words).
3.3.7 Experience
3.3.8 Tools
3.4.1 Definitions
r(tt) are the remaining failures predicted at total test time tt:
3.4.2 Variables
3.4.3 Parameters
s is the starting interval for using observed failure data in parameter estimation.
rc is the critical value of remaining failures, specified according to the criticality of the application.
3.4.4 Application
Test whether the safety criterion, predicted remaining failures r(tt) < rc, is met.
Failure counts by time interval, in the range [1,tt], where time can be execution time, clock time, or
calendar time, for estimating α, β, and s.
tt is the total test time of a release or module: seconds, minutes, hours, and so on, as appropriate.
3.4.7 Experience
Space Shuttle Primary Avionics Software Subsystem (Keller and Schneidewind [B11]).
3.4.8 Tools
3.5 Total test time to achieve specified remaining failures (Schneidewind [B20])
3.5.1 Definition
tt is the predicted total time from the start of test required to achieve a specified number of remaining
failures r(tt):
3.5.2 Variables
3.5.3 Parameters
s is the starting interval for using observed failure data in parameter estimation.
3.5.4 Application
Predict total test time as a function of the reliability goal, as specified by number of remaining failures.
Failure counts by time interval, in the range [1, tt], where time can be execution time, clock time, or
calendar time, for estimating α, β, and s.
3.5.7 Experience
Space Shuttle Primary Avionics Software Subsystem, United States Navy Tomahawk cruise missile launch,
readiness validation for Trident nuclear missile, and United States Marine Corps Tactical Systems Support
Activity for distributed system software reliability assessment and prediction (Keller and
Schneidewind [B11]).
3.5.8 Tools
3.6.1 Definition
RN = RC × RS
3.6.2 Variables
RC = 1 − (Σ TCi) / (nc × T)   (sum over i = 1, …, nc)

RS = 1 − (Σ TSi) / (ns × T)   (sum over i = 1, …, ns)
TCi is the downtime on client i (down due to software failure and repair) during time T.
TSi is the downtime on server i (down due to software failure and repair) during time T.
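The computation of RN from observed client and server downtimes can be sketched as follows; the downtime values, window length, and helper-function name are illustrative assumptions, not part of the standard.

```python
# Hypothetical data: downtimes (hours) due to software failure and repair
# observed on each client and server during a window of T hours.

def component_reliability(downtimes, T):
    # 1 - (total downtime) / (number of components x T)
    return 1.0 - sum(downtimes) / (len(downtimes) * T)

T = 100.0                    # observation window (assumed)
client_down = [2.0, 4.0]     # TCi for nc = 2 clients (assumed)
server_down = [1.0]          # TSi for ns = 1 server (assumed)

RC = component_reliability(client_down, T)
RS = component_reliability(server_down, T)
RN = RC * RS                 # network reliability
```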
3.6.3 Parameters
3.6.4 Application
3.6.7 Experience
United States Marine Corps Tactical Systems Support Activity for distributed system software reliability
assessment and prediction (Schneidewind [B19]).
3.6.8 Tools
Spreadsheet software.
4.1 Defect density (982 #2) (Fenton and Pfleeger [B5] and Nikora et al. [B17])
4.1.1 Definition
DD = D / KSLOC
4.1.2 Variables
D is the number of defects (or discrepancy reports) of a specified severity, per release or module.
KSLOC is the number of source lines of executable code and non-executable data declarations in thousands
per release or module.
4.1.3 Parameters
4.1.4 Application
D is the count of defects (or discrepancy reports) of a specified severity, per release or module.
KSLOC is the count of source lines of executable code and non-executable data declarations, in
thousands, per release or module.
Dimensionless numbers.
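A minimal sketch of the computation; the severity threshold and the counts below are assumed for illustration, not taken from the standard.

```python
D = 27            # defects of the specified severity in the release (assumed)
SLOC = 54000      # source lines of executable code + non-executable data declarations (assumed)
KSLOC = SLOC / 1000.0
DD = D / KSLOC    # defect density, defects per thousand source lines
```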
4.1.7 Experience
Space Shuttle Primary Avionics Software Subsystem (Nikora and Munson [B16]).
4.1.8 Tools
Spreadsheet software, compiler output of source lines of executable code and non-executable data
declarations.
4.2.1 Definition
TCI = NR / TR
4.2.2 Variables
NR is the number of requirements that have passed all tests per release or module.
TR is the total number of requirements per release or module.
4.2.3 Parameters
None.
4.2.4 Application
NR is the number of requirements that have passed all tests per release or module.
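As a sketch, with assumed counts (TR is read here as the total number of requirements in the release):

```python
NR = 45         # requirements that have passed all tests (assumed)
TR = 50         # total requirements in the release (assumed)
TCI = NR / TR   # test coverage index
```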
4.2.7 Experience
4.2.8 Tools
Spreadsheet software, data management system for keeping track of requirements and test results.
4.3.1 Definition
4.3.2 Variables
4.3.3 Parameters
None.
4.3.4 Application
An analysis is made of the percentages for the respective requirements error types (inconsistencies,
incompleteness, and misinterpretation) in order to identify inconsistent, incomplete, and misinterpreted
requirements. Requirements problems can have an adverse effect on reliability.
N1 is the count of inconsistent software requirements in a release or module, which is the number of
decomposition elements that do not accurately reflect the system requirement specification.
N2 is the count of incomplete software requirements in a release or module, which is the number of
decomposition elements that do not completely reflect the system requirement specification.
N3 is the count of misinterpreted software requirements in a release or module, which is the number of
decomposition elements that do not correctly reflect the system requirement specification.
4.3.7 Experience
Shipboard tactical command center, combat weapons system, and communication system; ground-based
tactical command center; and air traffic control.
4.3.8 Tools
Spreadsheet software.
4.4.1 Definition
4.4.2 Variables
Δti is the test or operational time expended in period i by a release or module.
Δfi is the number of failures that occur during Δti in a release or module.
4.4.3 Parameters
4.4.4 Application
Track software reliability over the periods 1, 2, 3, … , i, … , n to see whether it is increasing (i.e.,
decreasing failure rate) or decreasing (i.e., increasing failure rate).
Δti is the test or operational time expended in period i (e.g., 30 minutes in hour 1) by a release or module.
Δfi is the number of failures that occur during Δti (e.g., two failures in 30 minutes in hour 1) in a release or
module.
Δti is the test or operational time expended in period i: seconds, minutes, hours, and so on, as appropriate,
by a release or module.
Δfi is the number of failures that occur during Δti in a release or module.
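Tracking the per-period failure rate Δfi / Δti can be sketched as below; the period data are assumed:

```python
dt = [10.0, 10.0, 10.0]   # Δti: time expended in periods 1..3 (assumed hours)
df = [5, 3, 1]            # Δfi: failures observed in each period (assumed)

rates = [f / t for f, t in zip(df, dt)]                    # failures per hour
improving = all(a >= b for a, b in zip(rates, rates[1:]))  # non-increasing rate?
```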
4.4.7 Experience
4.4.8 Tools
5.1 Fault density (982 #1) (Musa [B14] and Nikora and Munson [B16])
5.1.1 Definitions
FD = F / KSLOC
5.1.2 Variables
F is the number of unique faults found that resulted in failures of a specified severity level, per release or
module.
KSLOC is the number of source lines of executable code and non-executable data declarations in
thousands, per release or module.
5.1.3 Parameters
5.1.4 Application
F is the count of unique faults found that resulted in failures of a specified severity level, per
release or module.
KSLOC is the count of source lines of executable code and non-executable data declarations, in
thousands, per release or module.
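The computation parallels defect density in 4.1; the counts below are assumed for illustration:

```python
F = 18           # unique faults causing failures of the specified severity (assumed)
KSLOC = 36.0     # thousands of source lines in the release (assumed)
FD = F / KSLOC   # fault density, faults per KSLOC
```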
5.1.7 Experience
Space Shuttle Primary Avionics Software Subsystem (Nikora and Munson [B16]).
5.1.8 Tools
Spreadsheet software, compiler output of source lines of executable code and non-executable data
declarations.
5.2.1 Definition
TM = (R1 / R2) × 100%
5.2.2 Variables
5.2.3 Parameters
None.
5.2.4 Application
This measure aids in identifying requirements that are either missing from, or in addition to, the original
requirements. Missing requirements negatively affect reliability. Excess requirements negatively affect the
software development budget.
A set of mappings from the requirements in the software implementation to the original requirements is
created. Count each requirement met by the implementation (R1), and count each of the original
requirements (R2). Compute the traceability measure:
TM = (R1 / R2) × 100%
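The mapping-and-count procedure above can be sketched as follows, with assumed counts:

```python
R1 = 190   # original requirements met by the implementation (assumed)
R2 = 200   # original requirements (assumed)
TM = (R1 / R2) * 100.0   # traceability measure, percent
```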
5.2.7 Experience
5.2.8 Tools
Spreadsheet software.
5.3 Mean time to failure (MTTF) (982 #30) (Lyu [B12] and Musa et al. [B15])
5.3.1 Definition
MTTF = (Σ ti) / n
where ti is the ith time between failures, ignoring repair times, for i ≥ 1, and n is the number of times
between failures.
5.3.2 Variables
5.3.3 Parameters
5.3.4 Application
Estimate the expected time to failure (i.e., time between failures, ignoring repair times) of a release or
module.
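A sketch with assumed inter-failure times:

```python
t = [20.0, 30.0, 40.0]    # ti: times between failures, ignoring repair (assumed hours)
MTTF = sum(t) / len(t)    # mean time to failure
```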
5.3.7 Experience
5.3.8 Tools
Spreadsheet software.
6.1 Mean time to repair (MTTR) (Lyu [B12] and Musa et al. [B15])
6.1.1 Definition
MTTR = (Σ Ri) / n
where Ri is the ith repair time, for i ≥ 1, and n is the number of repairs. A repair may consist of
1) identifying and fixing the fault(s) that caused the system to fail, or 2) restoring the system to service
without identifying or repairing the fault(s) causing the failure (e.g., rebooting the system). Repairs may be
unscheduled (e.g., in response to a failure) or scheduled (as part of regular system maintenance).
6.1.2 Variables
6.1.3 Parameters
6.1.4 Application
Estimate the time that will be required to repair a fault after a failure has occurred in a release or module.
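A sketch with assumed repair times (both fix-the-fault and restore-service repairs count):

```python
R = [0.5, 1.5, 1.0]     # Ri: repair times (assumed hours)
MTTR = sum(R) / len(R)  # mean time to repair
```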
6.1.7 Experience
6.1.8 Tools
Spreadsheet software.
6.2.1 Definition
MN = MC × MS
6.2.2 Variables
MC is the maintainability of clients (probability of maintenance required on nc clients during time T).
MC = (Σ mci) / T   (sum over i = 1, …, nc)

MS is the maintainability of servers (probability of maintenance required on ns servers during time T).

MS = (Σ msi) / T   (sum over i = 1, …, ns)
6.2.3 Parameters
6.2.4 Application
mci, msi, and T are expressed in seconds, minutes, hours, or days, as appropriate.
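A sketch of MN from per-machine maintenance times; the data and window are assumed:

```python
T = 100.0          # observation window (assumed hours)
mc = [1.0, 3.0]    # mci: maintenance time on each of nc = 2 clients (assumed)
ms = [2.0]         # msi: maintenance time on ns = 1 server (assumed)

MC = sum(mc) / T   # client maintainability
MS = sum(ms) / T   # server maintainability
MN = MC * MS       # network maintainability
```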
6.2.7 Experience
U.S. Marine Corps Tactical Systems Support Activity for distributed system software reliability assessment
and prediction (Schneidewind [B19]).
6.2.8 Tools
Spreadsheet software.
7.1.1 Definition
Availability = MTTF / (MTTF + MTTR)
7.1.2 Variables
7.1.3 Parameters
None.
7.1.4 Application
Compute the probability that software (e.g., release or module) is available when needed.
MTTF (time to failure measured from end of repair to next failure; see 5.3) and MTTR (see 6.1).
MTTF and MTTR are expressed in seconds, minutes, hours, or days, as appropriate.
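A sketch with assumed MTTF and MTTR values:

```python
MTTF = 99.0   # assumed hours (see 5.3)
MTTR = 1.0    # assumed hours (see 6.1)
availability = MTTF / (MTTF + MTTR)
```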
7.1.7 Experience
7.1.8 Tools
Spreadsheet software.
7.2.1 Definition
A is the network availability (probability that the network is available when needed).
7.2.2 Variables
MTBFn = MTBFc × (nc / (nc + ns)) + MTBFs × (ns / (nc + ns))

MTBFc = T / FC

FC = Σ fci   (sum over i = 1, …, nc)

MTBFs = T / FS

FS = Σ fsi   (sum over i = 1, …, ns)

MTTRn = MTTRc × (nc / (nc + ns)) + MTTRs × (ns / (nc + ns))

MTTRc = (Σ mci) / FC   (sum over i = 1, …, nc)

MTTRs = (Σ msi) / FS   (sum over i = 1, …, ns)
7.2.3 Parameters
7.2.4 Application
mci, msi, and T are expressed in seconds, minutes, hours, or days, as appropriate.
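The measures in 7.2 can be sketched as follows. All counts and times are assumed, and the final availability form A = MTBFn / (MTBFn + MTTRn) is an assumption here, mirroring 7.1, since the defining equation is not reproduced above.

```python
T = 100.0        # observation window (assumed hours)
nc, ns = 2, 1    # number of clients and servers (assumed)
fc = [2, 2]      # fci: failures on each client during T (assumed)
fs = [1]         # fsi: failures on each server during T (assumed)
mc = [1.0, 3.0]  # mci: client repair times (assumed hours)
ms = [2.0]       # msi: server repair times (assumed hours)

FC, FS = sum(fc), sum(fs)
MTBFc, MTBFs = T / FC, T / FS
MTBFn = MTBFc * nc / (nc + ns) + MTBFs * ns / (nc + ns)

MTTRc, MTTRs = sum(mc) / FC, sum(ms) / FS
MTTRn = MTTRc * nc / (nc + ns) + MTTRs * ns / (nc + ns)

A = MTBFn / (MTBFn + MTTRn)   # assumed availability form, mirroring 7.1
```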
7.2.7 Experience
U.S. Marine Corps Tactical Systems Support Activity for distributed system software reliability assessment
and prediction (Schneidewind [B19]).
7.2.8 Tools
Spreadsheet software.
Annex A
(informative)
Analysis of measures in IEEE Software Engineering Collection, IEEE Std 982.1-1988, Standard Dictionary of Measures to Produce Reliable Software
This annex, in Table A.1, provides the justification for modifying, retaining, or deleting existing measures.
The criteria that were used in deciding whether to modify, retain, or delete a measure are listed as follows.
The same criteria were applied to new measures. To be eligible for inclusion, a measure should satisfy one
or more of the following criteria:
a) A measure should have some minimum number of recognized uses
b) A measure should have demonstrated or potential utility in producing reliable, maintainable, and
available software
c) A measure should not be overly complex, difficult to understand, difficult to implement, or
require expensive tools to implement
d) A measure should be development paradigm independent (i.e., have wide applicability)
Object-oriented measures do not satisfy criteria a) and c). With respect to criterion a), there is not a minimum number of recognized industry uses or applications. Although object-oriented measures have potential utility in producing reliable, maintainable, and available software, they are not sufficiently mature to include in this standard at this time.
Annex B
(informative)
Software Reliability Engineering Case Study
(Keller and Schneidewind [B11])
B.1 General
The goals of the American National Standards Institute/American Institute of Aeronautics and Astronautics
Recommended Practice on Software Reliability, R-013-1992 [B1] are as follows: Provide a common basis
for discussion among individuals within a project and across projects; remind practitioners about the many
aspects of the software reliability process that are important to consider; and advise them on how to achieve
good results and avoid bad practices. The steps below describe how to implement a Software Reliability Engineering (SRE) program, using the Space Shuttle as an example and keying the appropriate measure to the recommended practice.
The following are major steps in SRE (not necessarily in chronological order):
a) State the safety criteria. This might be stated, for example, as “no failure that would result in
loss of life or mission.”
b) Collect fault and failure data. For each system, there should be a brief description of its purpose and functions, together with the fault and failure data listed below. The day count in item 4) could instead be recorded in hours or minutes, as appropriate. Code the Problem Report Identification to indicate a Software (S), Hardware (H), or People (P) failure.
1) System Identification
2) Purpose
3) Functions
4) Days # (since start of test)
5) Problem Report Identification
6) Problem Severity
7) Failure Date
8) Module with Fault
9) Description of Problem
c) Establish problem severity levels. Use a problem severity classification, such as the following:
1) Loss of life, loss of mission, abort mission
2) Degradation in performance
3) Operator annoyance
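Steps b) and c) can be captured in a simple record structure; the field names here are illustrative, not prescribed by the recommended practice:

```python
from dataclasses import dataclass

@dataclass
class FailureRecord:
    system_id: str
    purpose: str
    functions: str
    day: int                 # days (or hours/minutes) since start of test
    problem_report_id: str   # coded S/H/P for software/hardware/people
    severity: int            # 1 = loss of life/mission or abort,
                             # 2 = degraded performance, 3 = annoyance
    failure_date: str
    module_with_fault: str
    description: str

    def failure_class(self) -> str:
        """Return 'S', 'H', or 'P' from the coded problem report ID."""
        return self.problem_report_id[0]

rec = FailureRecord("OI-D", "Guidance", "Ascent guidance", 57,
                    "S-0042", 2, "1997-11-04", "nav_filter",
                    "Degraded state-vector update")
print(rec.failure_class())  # S
```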
d) Implement the safety criteria. Two criteria for software reliability levels will be defined. Then
these criteria will be applied to the risk analysis of safety critical software. In the case of the
Shuttle example, the “risk” will represent the degree to which the occurrence of failures does
not meet required reliability levels. Three prediction quantities, remaining failures (3.4), time to
next failure (3.2), and total test time to achieve specified remaining failures (3.5), are applied in
item 1) and item 2) below. This is followed by a risk assessment based on the degree to which
the predictions satisfy the risk criteria. Finally, guidance is provided on how to interpret
reliability predictions.
If the safety goal is the reduction of failures of a specified severity to an acceptable level of risk
(Lyu [B12]), then for software to be ready to deploy, after having been tested for total time tt, it
would satisfy the following criteria:
1) Predicted remaining failures r(tt) < rc, where rc is a specified critical value.
2) Predicted time to next failure TF(tt) > tm, where tm is mission duration.
The predicted value of r(tt) would be obtained in accordance with section 3.3 Remaining
Failures of Schneidewind [B20].
The predicted value of TF(tt) would be obtained in accordance with section 3.1 Time to Next
Failure(s) of Schneidewind [B20].
For systems that are tested and operated continuously like the Shuttle, tt, TF(tt), and tm are
measured in execution time. Note that, as with any methodology for assuring software safety,
there is no guarantee that the expected level will be achieved. Rather, with these criteria, the
objective is to reduce the risk of deploying the software to a “desired” level.
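The two criteria can be sketched as a single readiness check (names assumed; r_tt and tf_tt would come from the reliability-model predictions of Schneidewind [B20]):

```python
def ready_to_deploy(r_tt: float, tf_tt: float,
                    r_c: float, t_m: float) -> bool:
    """Criterion 1): predicted remaining failures r(tt) < rc.
    Criterion 2): predicted time to next failure TF(tt) > tm.
    Both must hold for the mission to begin.
    """
    return r_tt < r_c and tf_tt > t_m

# rc = 1 remaining failure, tm = 8-day mission (execution time).
print(ready_to_deploy(r_tt=0.6, tf_tt=12.0, r_c=1.0, t_m=8.0))  # True
print(ready_to_deploy(r_tt=1.4, tf_tt=12.0, r_c=1.0, t_m=8.0))  # False
```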
Apply the remaining failures criterion. Criterion 1) sets the threshold on remaining failures that must be satisfied before deploying the software (i.e., no more than a specified number of remaining failures).
If we predict r(tt) ≥ rc, we would continue to test for a total time tt′ > tt that is predicted to
achieve r(tt′) < rc, using the assumption that more failures will be experienced and more faults
will be corrected so that the remaining failures will be reduced by the quantity r(tt) – r(tt′). If the
developer does not have the resources to satisfy the criterion or cannot satisfy the criterion
through additional testing, the risk of deploying the software prematurely should be assessed
(see the next section). It is known that it is impossible to demonstrate the absence of faults
(Dijkstra [B3]); however, the risk of failures occurring can be reduced to an acceptable level, as
represented by rc. This scenario is shown in Figure B.1. In case A, r(tt) < rc is predicted and the
mission begins at tt. In case B, r(tt) ≥ rc is predicted and the mission would be postponed until
the software is tested for total time tt′ when r(tt′) < rc is predicted. In both cases, criterion 2)
would also be required for the mission to begin.
4 Notes in text, tables, and figures are given for information only and do not contain requirements needed to implement the standard.
Apply the time to next failure criterion. Criterion 2) specifies that the software needs to survive for a time greater than the duration of the mission. If TF(tt) ≤ tm is predicted, the software is tested for a total time tt″ > tt that is predicted to achieve TF(tt″) > tm, using the assumption that more failures will be experienced and faults corrected so that the time to next failure will be increased by the quantity TF(tt″) – TF(tt). Again, if it is infeasible for the developer to satisfy the criterion for lack of resources or failure to achieve test objectives, the risk of deploying the software prematurely should be assessed (see the next section). This scenario is shown in Figure B.2. In case A, TF(tt) > tm is predicted and the mission begins at tt. In case B, TF(tt) ≤ tm is predicted, and in this case, the mission would be postponed until the software is tested for total time tt″ when TF(tt″) > tm is predicted. In both cases, criterion 1) would also be required for the mission to begin. If neither criterion is satisfied, the software is tested for a time that is the greater of tt′ or tt″.
e) Make a risk assessment. Safety Risk pertains to executing the software of a safety critical
system where there is the chance of injury (e.g., astronaut injury or fatality), damage (e.g.,
destruction of the Shuttle), or loss (e.g., loss of the mission), if a serious software failure occurs
during a mission.
The amount of total test time tt can be considered a measure of the degree to which software
reliability goals have been achieved. This is particularly the case for systems like the Space
Shuttle where the software is subjected to continuous and rigorous testing for several years in
multiple facilities, using a variety of operational and training scenarios (e.g., by the contractor in
Houston, by NASA in Houston for astronaut training, and by NASA at Cape Canaveral). Total
test time tt can be viewed as an input to a risk reduction process and r(tt) and TF(tt) as the
outputs, with rc and tm as “risk criteria levels” of reliability that control the process. Total test time is not the only consideration in developing test strategies; other important factors, such as the consequences for reliability and cost, also enter into selecting test cases (Nikora et al. [B17]). Nevertheless, for the foregoing reasons, total test time has been found to be strongly positively correlated with reliability growth for the Space Shuttle (Musa [B14]).
Evaluate remaining failures risk. The mean value of the risk criterion metric (RCM) for
criterion 1 is formulated as in Equation (B.1):
RCMr(tt) = (r(tt) − rc) / rc = (r(tt) / rc) − 1        (B.1)
Equation (B.1) is plotted in Figure B.3 as a function of tt for rc = 1, for OID, where positive,
zero, and negative values correspond to r(tt) > rc, r(tt) = rc, and r(tt) < rc, respectively. In
Figure B.3, these values correspond to the following regions: CRITICAL (i.e., above the X-axis,
predicted remaining failures are greater than the specified value); NEUTRAL (i.e., on the X-
axis, predicted remaining failures are equal to the specified value); and DESIRED (i.e., below
the X-axis, predicted remaining failures are less than the specified value, which could represent
a “safe” threshold or, in the Shuttle example, an “error-free” condition boundary). This graph is
for the Shuttle Operational Increment OID (with many years of shelf life): a software system
comprising modules and configured from a series of builds to meet Shuttle mission functional
requirements. In this example it can be seen that at approximately tt = 57, the risk transitions
from the CRITICAL region to the DESIRED region.
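Equation (B.1) and its region classification can be sketched as follows (names assumed):

```python
def rcm_remaining(r_tt: float, r_c: float) -> float:
    """Risk criterion metric for remaining failures, Equation (B.1):
    (r(tt) - rc) / rc = r(tt)/rc - 1."""
    return r_tt / r_c - 1.0

def region(rcm: float) -> str:
    """Positive -> CRITICAL, zero -> NEUTRAL, negative -> DESIRED."""
    if rcm > 0:
        return "CRITICAL"
    if rcm == 0:
        return "NEUTRAL"
    return "DESIRED"

# With rc = 1: two predicted remaining failures is CRITICAL;
# half a predicted remaining failure is DESIRED.
print(region(rcm_remaining(2.0, 1.0)))  # CRITICAL
print(region(rcm_remaining(0.5, 1.0)))  # DESIRED
```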
Evaluate time to next failure risk. Similarly, the mean value of the risk criterion metric (RCM) for criterion 2 is formulated as in Equation (B.2):
RCMTF(tt) = (tm − TF(tt)) / tm = 1 − (TF(tt) / tm)        (B.2)
Equation (B.2) is plotted in Figure B.4 as a function of tt for tm = 8 days (a typical mission
duration), for OIC, where positive, zero, and negative risk corresponds to TF(tt) < tm, TF(tt) = tm,
and TF(tt) > tm, respectively. In Figure B.4, these values correspond to the following regions:
CRITICAL (i.e., above the X-axis, predicted time to next failure is less than the specified
value); NEUTRAL (i.e., on the X-axis, predicted time to next failure is equal to the specified
value); and DESIRED (i.e., below the X-axis, predicted time to next failure is greater than the
specified value). This graph is for the Shuttle operational increment OIC. In this example, the
RCM is in the DESIRED region at all values of tt.
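Equation (B.2) can be sketched the same way (names assumed); the sign convention makes positive values CRITICAL here as well:

```python
def rcm_time_to_next_failure(tf_tt: float, t_m: float) -> float:
    """Risk criterion metric for time to next failure, Equation (B.2):
    (tm - TF(tt)) / tm, positive when the predicted survival time
    falls short of the mission duration tm."""
    return (t_m - tf_tt) / t_m

# tm = 8-day mission: a predicted 12-day time to next failure is
# DESIRED (negative), a predicted 6 days is CRITICAL (positive).
print(rcm_time_to_next_failure(12.0, 8.0))  # -0.5
print(rcm_time_to_next_failure(6.0, 8.0))   # 0.25
```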
Annex C
(informative)
Glossary
For the purposes of this document, the following terms and definitions apply. These and other terms within
IEEE standards are found in The Authoritative Dictionary of IEEE Standards Terms [B8].5
availability: The degree to which a system or component is operational and accessible when required for
use. Often expressed as a probability. (adapted from IEEE Std 610.12™-1990 [B10])
failure: The inability of a system or component to perform its required functions within specified
performance requirements. (adapted from IEEE Std 610.12-1990 [B10])
fault: An incorrect step, process, or data definition in a computer program. (adapted from
IEEE Std 610.12-1990 [B10])
reliability: The ability of a system or component to perform its required functions under stated conditions
for a specified period of time. (adapted from IEEE Std 610.12-1990 [B10])
5 IEEE publications are available from the Institute of Electrical and Electronics Engineers, Inc., 445 Hoes Lane, Piscataway, NJ 08854, USA (http://standards.ieee.org/).
Annex D
(informative)
Bibliography
[B1] ANSI/AIAA R-013-1992, Recommended Practice on Software Reliability.6
[B2] Binder, R. V., Testing Object-Oriented Systems: Models, Patterns, and Tools. Reading, MA: Addison-Wesley, 2000.
[B3] Dijkstra, E., “Structured programming,” In: Buxton, J. N., and Randell, B., eds., Software Engineering Techniques. Brussels, Belgium: NATO Scientific Affairs Division, April 1970, pp. 84–88.
[B4] Evangelist, W. M., “Software complexity metric sensitivity to program restructuring rules,” Journal of Systems and Software, vol. 3, pp. 231–243, 1983.
[B5] Fenton, N. E. and Pfleeger, S. L., Software Metrics: A Rigorous & Practical Approach, 2d ed.
Boston, MA: PWS Publishing Company, 1997.
[B6] Fischer, K. F. and Walker, M. G., “Improved software reliability through requirement verification,”
IEEE Transactions on Reliability, vol. 28, no. 3, pp. 233–239, August 1979.
[B7] Heninger, K., “Specifying software requirements for complex systems: new techniques and their application,” IEEE Transactions on Software Engineering, vol. SE-6, no. 1, pp. 1–14, Jan. 1980.
[B8] IEEE 100, The Authoritative Dictionary of IEEE Standards Terms, Seventh Edition, New York,
Institute of Electrical and Electronics Engineers, Inc. 7
[B9] IEEE Std 1044™-1993, IEEE Standard Classification for Software Anomalies.8
[B10] IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology.
[B11] Keller, T. and Schneidewind, N. F., “A successful application of software reliability engineering for
the NASA Space Shuttle,” Software Reliability Engineering Case Studies, International Symposium on
Software Reliability Engineering, Albuquerque, NM, pp. 71–82, Nov. 4, 1997.
[B12] Lyu, M. R., Handbook of Software Reliability Engineering. New York: IEEE Computer Society
Press and McGraw-Hill, 1996.
[B13] McCabe, T. J., “A complexity measure,” IEEE Transactions on Software Engineering, vol. SE-2, no.
4, pp. 308–320, Dec. 1976.
[B14] Musa, J. D., Software Reliability Engineering: More Reliable Software, Faster Development and
Testing. New York: McGraw-Hill, 1999.
6 ANSI publications are available from the Sales Department, American National Standards Institute, 25 West 43rd Street, 4th Floor, New York, NY 10036, USA (http://www.ansi.org/).
7 IEEE publications are available from the Institute of Electrical and Electronics Engineers, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA (http://standards.ieee.org/).
8 The IEEE standards or products referred to in this clause are trademarks of the Institute of Electrical and Electronics Engineers, Inc.
[B15] Musa, J. D., et al., Software Reliability: Measurement, Prediction, Application. New York:
McGraw-Hill, 1987.
[B16] Nikora, A. and Munson, J., “Determining fault insertion rates for evolving software systems,”
Proceedings of the Ninth International Symposium on Software Reliability Engineering, Paderborn,
Germany, pp. 306–315, Nov. 4–7, 1998.
[B17] Nikora, A., Schneidewind, N., and Munson, J., “IV&V issues in achieving high reliability and safety
in critical control software,” Final Report, Volume 1—Measuring and Evaluating the Software
Maintenance Process and Metrics-Based Software Quality Control, Volume 2—Measuring Defect Insertion
Rates and Risk of Exposure to Residual Defects in Evolving Software Systems, and Volume 3—
Appendices, Jet Propulsion Laboratory, National Aeronautics and Space Administration, Pasadena, CA,
Jan 19, 1998.
[B18] Schneidewind, N. F., “Application of program graphs and complexity analysis to software development and testing,” IEEE Transactions on Reliability, vol. R-28, no. 3, pp. 192–198, Aug. 1979.
[B19] Schneidewind, N. F., “Software reliability engineering for client-server systems,” Proceedings of
The Seventh International Symposium on Software Reliability Engineering, White Plains, NY, pp. 226–235,
Oct. 30–Nov. 2, 1996.
[B20] Schneidewind, N. F., “Reliability modeling for safety critical software,” IEEE Transactions on Reliability, vol. 46, no. 1, pp. 88–98, Mar. 1997.
[B21] Schneidewind, N. F., “Measuring and evaluating maintenance process using reliability, risk, and test
metrics,” IEEE Transactions on Software Engineering, vol. 25, no. 6, pp. 768–781, Nov./Dec. 1999.
[B22] Schneidewind, N. F., “Investigation of the risk to software reliability and maintainability of
requirements changes,” Proceedings of the International Conference on Software Maintenance, Florence,
Italy, pp. 127–136, Nov. 7–9, 2001.