
IEEE Standard Dictionary of Measures of the Software Aspects of Dependability

IEEE Computer Society
Sponsored by the Software Engineering Standards Committee

IEEE
3 Park Avenue
New York, NY 10016-5997, USA

IEEE Std 982.1™-2005
(Revision of IEEE Std 982.1-1988)

8 May 2006
IEEE Std 982.1™-2005
(Revision of IEEE Std 982.1-1988)

IEEE Standard Dictionary of Measures of the Software Aspects of Dependability

Sponsor
Software Engineering Standards Committee
of the
IEEE Computer Society

Approved 8 November 2005


IEEE-SA Standards Board
Abstract: This standard provides a dictionary of measures of the software aspects of dependability for
assessing and predicting the reliability, maintainability, and availability of any software system. In
particular, it applies to mission-critical software systems.
Keywords: availability, dependability, maintainability, reliability

_________________________

The Institute of Electrical and Electronics Engineers, Inc.


3 Park Avenue, New York, NY 10016-5997, USA

Copyright © 2006 by the Institute of Electrical and Electronics Engineers, Inc.


All rights reserved. Published 8 May 2006. Printed in the United States of America.

IEEE is a registered trademark in the U.S. Patent & Trademark Office, owned by the Institute of Electrical and Electronics
Engineers, Incorporated.

Print: ISBN 0-7381-4846-6 SH95392


PDF: ISBN 0-7381-4847-4 SS95392

No part of this publication may be reproduced in any form, in an electronic retrieval system or otherwise, without the prior
written permission of the publisher.
IEEE Standards documents are developed within the IEEE Societies and the Standards Coordinating Committees of
the IEEE Standards Association (IEEE-SA) Standards Board. The IEEE develops its standards through a consensus
development process, approved by the American National Standards Institute, which brings together volunteers
representing varied viewpoints and interests to achieve the final product. Volunteers are not necessarily members of the
Institute and serve without compensation. While the IEEE administers the process and establishes rules to promote
fairness in the consensus development process, the IEEE does not independently evaluate, test, or verify the accuracy
of any of the information contained in its standards.

Use of an IEEE Standard is wholly voluntary. The IEEE disclaims liability for any personal injury, property or other
damage, of any nature whatsoever, whether special, indirect, consequential, or compensatory, directly or indirectly
resulting from the publication, use of, or reliance upon this, or any other IEEE Standard document.
The IEEE does not warrant or represent the accuracy or content of the material contained herein, and expressly
disclaims any express or implied warranty, including any implied warranty of merchantability or fitness for a specific
purpose, or that the use of the material contained herein is free from patent infringement. IEEE Standards documents
are supplied “AS IS.”

The existence of an IEEE Standard does not imply that there are no other ways to produce, test, measure, purchase,
market, or provide other goods and services related to the scope of the IEEE Standard. Furthermore, the viewpoint
expressed at the time a standard is approved and issued is subject to change brought about through developments in the
state of the art and comments received from users of the standard. Every IEEE Standard is subjected to review at least
every five years for revision or reaffirmation. When a document is more than five years old and has not been
reaffirmed, it is reasonable to conclude that its contents, although still of some value, do not wholly reflect the present
state of the art. Users are cautioned to check to determine that they have the latest edition of any IEEE Standard.

In publishing and making this document available, the IEEE is not suggesting or rendering professional or other
services for, or on behalf of, any person or entity. Nor is the IEEE undertaking to perform any duty owed by any other
person or entity to another. Any person utilizing this, and any other IEEE Standards document, should rely upon the
advice of a competent professional in determining the exercise of reasonable care in any given circumstances.

Interpretations: Occasionally questions may arise regarding the meaning of portions of standards as they relate to
specific applications. When the need for interpretations is brought to the attention of IEEE, the Institute will initiate
action to prepare appropriate responses. Since IEEE Standards represent a consensus of concerned interests, it is
important to ensure that any interpretation has also received the concurrence of a balance of interests. For this reason,
IEEE and the members of its societies and Standards Coordinating Committees are not able to provide an instant
response to interpretation requests except in those cases where the matter has previously received formal consideration.
At lectures, symposia, seminars, or educational courses, an individual presenting information on IEEE standards shall
make it clear that his or her views should be considered the personal views of that individual rather than the formal
position, explanation, or interpretation of the IEEE.

Comments for revision of IEEE Standards are welcome from any interested party, regardless of membership affiliation
with IEEE. Suggestions for changes in documents should be in the form of a proposed change of text, together with
appropriate supporting comments. Comments on standards and requests for interpretations should be addressed to:

Secretary, IEEE-SA Standards Board


445 Hoes Lane
Piscataway, NJ 08854
USA

NOTE—Attention is called to the possibility that implementation of this standard may require use of subject matter
covered by patent rights. By publication of this standard, no position is taken with respect to the existence or validity of
any patent rights in connection therewith. The IEEE shall not be responsible for identifying patents for which a license
may be required by an IEEE standard or for conducting inquiries into the legal validity or scope of those patents that
are brought to its attention.

Authorization to photocopy portions of any individual standard for internal or personal use is granted by the Institute of
Electrical and Electronics Engineers, Inc., provided that the appropriate fee is paid to Copyright Clearance Center. To
arrange for payment of licensing fee, please contact Copyright Clearance Center, Customer Service, 222 Rosewood
Drive, Danvers, MA 01923 USA; +1 978 750 8400. Permission to photocopy portions of any individual standard for
educational classroom use can also be obtained through the Copyright Clearance Center.
Introduction
This introduction is not part of IEEE Std 982.1-2005, IEEE Standard Dictionary of Measures of the Software Aspects
of Dependability.

Rationale for revision: In accordance with Software Engineering Standards Committee (SESC) policy, the
first standard (this document) is a small document with just a few core measures dealing with reliability,
maintainability, and availability. It will be followed by a second standard that will address safety,
confidentiality, and integrity.

This standard has not been revised since 1988. It was reaffirmed in 1996, but there were significant
negative comments. It has been revised because many of the original measures had the following
undesirable characteristics:

- Unrealistic (i.e., there was naivety about the data, personnel capabilities, and training necessary to
use the measures effectively)
- Did not measure what they purported to measure
- Used complex equations that were hard to understand and implement
- Immature (i.e., they were not widely used)
- Had little field data to back up claims of benefits

History
This standard is 15 years old, and additional information on the topic has been developed since that time.
This project updated the standard to include such information and to expand its scope.
Specifically, the following activities were performed:

a) Determined specific criteria for inclusion of measures in the standard


b) Performed a measure-by-measure review of the measures in the standard, with the following
goals:

1) Determined whether the measure should be modified, retained, or deleted, based on the
criteria. The modified and retained measures are identified in this standard by their numbers
from the original IEEE 982.1 standard (i.e., IEEE 982 #X).
2) Added any additional information on the measures developed since 1988.
3) Clarified definition and implementation conventions.
4) Identified and incorporated, where appropriate, new measures that have appeared since
1988.
5) Formulated generic measure classes and categorized the measures into these classes. These
classes are reliability, maintainability, and availability.

Notice to users

Errata
Errata, if any, for this and all other standards can be accessed at the following URL:
http://standards.ieee.org/reading/ieee/updates/errata/index.html. Users are encouraged to check this URL for
errata periodically.

Interpretations
Current interpretations can be accessed at the following URL:
http://standards.ieee.org/reading/ieee/interp/index.html.

Patents
Attention is called to the possibility that implementation of this standard may require use of subject matter
covered by patent rights. By publication of this standard, no position is taken with respect to the existence
or validity of any patent rights in connection therewith. The IEEE shall not be responsible for identifying
patents or patent applications for which a license may be required to implement an IEEE standard or for
conducting inquiries into the legal validity or scope of those patents that are brought to its attention.

Participants
At the time this standard was completed, the P982 Working Group had the following membership:

Allen Nikora, Chair


Thomas Antczak Jairus Hihn Norman Schneidewind
William Everett J. Dennis Lawrence George Stark
William Farr John Munson Linda Wilbanks

The following members of the individual balloting committee voted on this standard. Balloters may have
voted for approval, disapproval, or abstention.

Richard Biehl Jon Hagar Terence Rout


Mitchell Bonnett John Harauz James Ruggieri
Juris Borzovs Mark Henley Robert Schaaf
Antonio M Cicu John Horch Hans Schaefer
Rita Creel William Junk Robert W Shillato
Geoffrey Darnton Ron Kenett Mitchell Smith
Einar Dragstedt Thomas M. Kurihara Joyce Statz
Sourav Dutta J. Dennis Lawrence Udo Voges
John Emrich Denis Meredith Paul Wolfgang
William Eventoff Rajesh Moorkath Oren Yuen
John Fendrich Lou Pinto Janusz Zalewski
Yaacov Fenster Garry Roedler Charles Zumba

When the IEEE-SA Standards Board approved this standard on 8 November 2005, it had the
following membership:
Steve M. Mills, Chair
Richard H. Hulett, Vice Chair
Don Wright, Past Chair
Judith Gorman, Secretary

Mark D. Bowman William B. Hopf T. W. Olsen


Dennis B. Brophy Lowell G. Johnson Glenn Parsons
Joseph Bruder Herman Koch Ronald C. Petersen
Richard Cox Joseph L. Koepfinger* Gary S. Robinson
Bob Davis David J. Law Frank Stone
Julian Forster* Daleep C. Mohla Malcolm V. Thaden
Joanna N. Guenin Paul Nikolich Richard L. Townsend
Mark S. Halpin Joe D. Watson
Raymond Hapeman Howard L. Wolfman

*Member Emeritus

Also included are the following nonvoting IEEE-SA Standards Board liaisons:

Satish K. Aggarwal, NRC Representative


Richard DeBlasio, DOE Representative
Alan H. Cookson, NIST Representative

Don Messina
IEEE Standards Project Editor

Contents

1. Overview .................................................................................................................................................... 1

1.1 General ................................................................................................................................................ 1


1.2 Scope ................................................................................................................................................... 2
1.3 Purpose ................................................................................................................................................ 2
1.4 Compliance.......................................................................................................................................... 2
1.5 Interaction with other standards........................................................................................................... 3

2. Definitions .................................................................................................................................................. 3

3. New reliability measures ............................................................................................................................ 4

3.1 General ................................................................................................................................................ 4


3.2 Time to next failure (s) (Lyu [B12]).................................................................................................... 4
3.3 Risk factor regression model (Schneidewind [B22])........................................................................... 5
3.4 Remaining failures (Keller and Schneidewind [B11])......................................................................... 6
3.5 Total test time to achieve specified remaining failures (Schneidewind [B20]) ................................... 7
3.6 Network reliability (Schneidewind [B19]) .......................................................................................... 8

4. Modified reliability measures................................................................................................................... 10

4.1 Defect density (982 #2) (Fenton and Pfleeger [B5] and Nikora et al. [B17]).................................... 10
4.2 Test coverage index (982 #5) (Binder [B2])...................................................................................... 11
4.3 Requirements compliance (982 #23) (Fischer and Walker [B6]) ...................................................... 11
4.4 Failure rate (982 #31) (Lyu [B12]) .................................................................................................... 12

5. Retained reliability measures.................................................................................................................... 13

5.1 Fault density (982 #1) (Musa [B14] and Nikora and Munson [B16]) ............................................... 13
5.2 Requirements traceability (982 #7) (Fenton and Pfleeger [B5])........................................................ 15
5.3 Mean time to failure (MTTF) (982 #30) (Lyu [B12] and Musa et al. [B15]) .................................... 16

6. New maintainability measures.................................................................................................................. 17

6.1 Mean time to repair (MTTR) (Lyu [B12] and Musa et al. [B15]) ..................................................... 17
6.2 Network maintainability (Schneidewind [B19])................................................................................ 18

7. New availability measures........................................................................................................................ 19

7.1 Availability (Lyu [B12] and Musa et al. [B15]) ................................................................................ 19


7.2 Network availability (Schneidewind [B19])...................................................................................... 20

Annex A (informative) Analysis of measures in IEEE Software Engineering Collection,
IEEE Std 982.1-2005, IEEE Standard Dictionary of Measures to Produce Reliable Software .................... 23

Annex B (informative) Software Reliability Engineering Case Study


(Keller and Schneidewind [B11])................................................................................................................ 27

Annex C (informative) Glossary .................................................................................................................. 32

Annex D (informative) Bibliography ........................................................................................................... 33

IEEE Standard Dictionary of
Measures of the Software Aspects of
Dependability

1. Overview

1.1 General

This standard has the following clauses: Clause 1 provides the scope of this standard. Clause 2 provides a
set of definitions. Clause 3 through Clause 5 specify new, modified, and existing measures for reliability,
respectively. Clause 6 and Clause 7 specify new measures for maintainability and availability, respectively.
In addition, there are three informative annexes. Annex A documents the process that was used to
determine whether a measure in the existing standard should be modified, retained, or deleted; this annex
states the criteria that were used to determine whether a measure should be included in the standard,
including new measures. Annex B describes the Software Reliability Engineering Case Study, which
provides a case study example of how to apply selected measures in Clause 3. Annex C is a glossary of
terms taken from The Authoritative Dictionary of IEEE Standards Terms [B8]. 1 Annex D is a bibliography
that provides additional information about measures in Clause 3 through Clause 7. Table 1 summarizes the
categories of measures. Numerical applications of the measures can be found in the reference(s) that are
keyed to each measure.

1 The numbers in brackets correspond to those of the bibliography in Annex D.


Table 1 —Summary of measures


Identification Name
Clause 3. New reliability measures
3.2 Time to next failure (s)
3.3 Risk factor regression model
3.4 Remaining failures
3.5 Total test time to achieve specified remaining failures
3.6 Network reliability
Clause 4. Modified reliability measures
4.1 Defect density (982 #2)
4.2 Test coverage index (982 #5)
4.3 Requirements compliance (982 #23)
4.4 Failure rate (982 #31)
Clause 5. Retained reliability measures
5.1 Fault density (982 #1)
5.2 Requirements traceability (982 #7)
5.3 Mean time to failure (MTTF) (982 #30)
Clause 6. New maintainability measures
6.1 Mean time to repair
6.2 Network maintainability
Clause 7. New availability measures
7.1 Availability
7.2 Network availability

1.2 Scope

This standard specifies and classifies measures of the software aspects of dependability. It is an expansion
of the scope of the existing standard; the revision includes the following aspects of dependability:
reliability, availability, and maintainability of software. The applicability of this standard is any software
system; in particular, it applies to mission critical systems, where high reliability, availability, and
maintainability are of utmost importance. These systems include, but are not limited to, systems for
military combat, space missions, air traffic control, network operations, stock exchanges, automatic teller
machines, and airline reservation systems.

1.3 Purpose

This standard provides measures that are applicable for continual self-assessment and improvement of the
software aspects of dependability.

1.4 Compliance

An application of the measures specified herein complies with this standard if Clause 2 through Clause 7
are implemented, with the exception of the subclauses identified as “Experience” or “Tools”. These
subclauses are intended to (1) provide information about application domains for a given measure and
(2) identify the types of tools appropriate for computing that measure.


1.5 Interaction with other standards

The measures defined in this dictionary may provide information that can help support certain evaluations
discussed in other IEEE Software Engineering Standards. The following list is meant to be suggestive:

IEEE/EIA Std 12207.0


Subclause 6.4.2 of IEEE/EIA Std 12207.0 lists verification tasks, and 6.5 lists validation tasks.
Dependability measures may provide useful quantitative information that will help satisfy these tasks. For
example, in 6.4.2.5, one criterion states (among other requirements) that “the code implements … correct
data and control flow.” 1

IEEE Std 1074TM


Subclause A.5.1 discusses evaluation activities, including reviews, traceability, testing, and reporting
evaluation results. Dependability measures can provide useful quantitative information that can be used to
assist these activities. 2, 3

IEEE Std 1012TM


This standard lists verification and validation tasks, inputs, and outputs in Table 1. The dependability
measures provided in this standard can provide useful quantitative information that can assist many of the
tasks listed in this table.

2. Definitions
For the purposes of this document, the following terms and definitions apply. The glossary in Annex C and
The Authoritative Dictionary of IEEE Standards Terms [B8] should be referenced for terms not defined in
this clause.
2.1 defect: A generic term that can refer to either a fault (cause) or a failure (effect). (adapted from
Lyu [B12])

2.2 dependability: Trustworthiness of a computer system such that reliance can be justifiably placed on the
service it delivers. Reliability, availability, and maintainability are aspects of dependability. (adapted from
Lyu [B12])

2.3 maintainability: Speed and ease with which a program can be corrected or changed. (adapted from
Musa [B14])

2.4 measure (noun): (A) The number or symbol assigned to an entity by a mapping from the empirical
world to the formal, relational world in order to characterize an attribute. (adapted from Fenton and
Pfleeger [B5]) (B) The act or process of measuring. (adapted from Webster’s New Collegiate Dictionary
[B23])

2.5 measure (verb): To make a measurement. (adapted from Webster’s New Collegiate Dictionary [B23])

2.6 time: In decreasing order of resolution, CPU execution time, elapsed time (i.e., wall clock time), or
calendar time. (adapted from Musa [B14])

1 EIA publications are available from Global Engineering Documents, 15 Inverness Way East, Englewood, CO 80112, USA (http://global.ihs.com/).
2 IEEE publications are available from the Institute of Electrical and Electronics Engineers, Inc., 445 Hoes Lane, Piscataway, NJ 08854, USA (http://standards.ieee.org/).
3 The IEEE standards or products referred to in this clause are trademarks of the Institute of Electrical and Electronics Engineers, Inc.


3. New reliability measures

3.1 General
The software reliability model specified in this clause (Schneidewind [B20]) is one of the four models
recommended in ANSI R-013-1992 [B1]. See this reference for the other three recommended models that
could be used in place of the specified model.

3.2 Time to next failure (s) (Lyu [B12])

3.2.1 Definition

TF (t) is the predicted time for the next Ft failures to occur, when the current time is t.

TF (t) = [(log[α / (α – β(Xs,t + Ft))]) / β] – (t – s + 1)

3.2.2 Variables

TF (t) is the predicted time for the next Ft failures to occur, when the current time is t.

Xs,t is the observed failure count in the range [s,t].

t is the test or operational time interval when the time to next failure prediction is made.

3.2.3 Parameters

α is the failure rate at the beginning of interval s.

β is the negative of the derivative of the failure rate divided by the failure rate.

Ft is the given number of failures to occur after interval t.

s is the starting interval for using observed failure data in parameter estimation.

tm is the mission duration.

3.2.4 Application

Given a mission duration requirement tm and number of failures Ft, a prediction is made at time t to see
whether TF(t) > tm.
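
The following minimal Python sketch, which is not part of the standard, illustrates this check; the parameter values (α, β, s), failure counts, and mission duration are hypothetical and would normally come from fitting the model to the observed failure data.

```python
import math

def time_to_next_failures(alpha, beta, s, t, x_st, f_t):
    """Predicted time for the next f_t failures after interval t (Schneidewind model):
    TF(t) = log(alpha / (alpha - beta*(x_st + f_t))) / beta - (t - s + 1)."""
    remaining = alpha - beta * (x_st + f_t)
    if remaining <= 0:
        raise ValueError("f_t exceeds the failures the model expects to remain")
    return math.log(alpha / remaining) / beta - (t - s + 1)

# Hypothetical values: alpha, beta, s estimated from failure counts in intervals [1, t].
alpha, beta, s = 2.0, 0.1, 1
t, x_st, f_t, t_m = 10, 15, 1, 5
tf = time_to_next_failures(alpha, beta, s, t, x_st, f_t)
print(f"TF(t) = {tf:.1f} intervals; meets mission duration {t_m}: {tf > t_m}")
```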

3.2.5 Data requirements

Failure counts by time interval, in the range [1,t], where time can be execution time, clock time, or calendar
time, for estimating α, β, and s, and for obtaining Xs,t.

3.2.6 Units of measurement

TF(t), t, and tm are expressed in seconds, minutes, hours, and so on, as appropriate.

Ft is the number of failures.


3.2.7 Experience

Space Shuttle Primary Avionics Software Subsystem, United States Navy Tomahawk cruise missile launch,
readiness validation for Trident nuclear missile, and United States Marine Corps Tactical Systems Support
Activity for distributed system software reliability assessment and prediction (Keller and
Schneidewind [B11]).

3.2.8 Tools

Software reliability modeling and analysis tools. Examples of these tools are described in Appendix A of
Lyu [B12].

3.3 Risk factor regression model (Schneidewind [B22])

3.3.1 Definitions

CF is the cumulative memory space reliability prediction equation:

CF = a × CS2 – b × CS + c

CF is the cumulative requirements issues reliability prediction equation:

CF = d × (exp (e × CI))

RF (risk factor) is the attributes of a requirements change that can induce reliability risk, such as memory
space and requirements issues.

“Memory space” is the amount of memory space required to implement a requirements change (i.e., a
requirements change uses memory to the extent that other functions do not have sufficient memory to
operate effectively, and failures occur).

“Requirements issues” is the number of conflicting requirements (i.e., a requirements change conflicts with
another requirements change, such as requirements to increase the search criteria of a website and
simultaneously decrease its search time, with the added software complexity causing failures).

3.3.2 Variables

CF is the cumulative failure (over a set of requirements changes).

CI is the cumulative issue (over a set of requirements changes).

CS is the cumulative space (over a set of requirements changes).

3.3.3 Parameters

Coefficients of nonlinear regression equations: a, b, c, d, and e.


3.3.4 Application

Provide a warning to software managers of impending reliability problems early in the development cycle,
during requirements analysis, by using risk factors to predict cumulative failures and the values of the risk
factor thresholds where reliability would degrade significantly. Thus, software managers could anticipate
problems rather than react to them. In addition, more efficient software management would be possible
because with advance warning of reliability problems, management could better schedule and prioritize
development process activities (e.g., inspections, tests).
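
As an illustration only, the following sketch fits the two regression forms to hypothetical cumulative data using SciPy's curve_fit; SciPy is an assumed external library, not something the standard prescribes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative data over a series of requirements changes.
CS = np.array([10, 25, 40, 60, 85], dtype=float)   # cumulative memory space (e.g., words)
CI = np.array([1, 2, 4, 6, 9], dtype=float)        # cumulative requirements issues
CF = np.array([0.5, 1.8, 4.0, 8.5, 16.0])          # cumulative failures

# CF as a quadratic in cumulative memory space: CF = a*CS^2 - b*CS + c
(a, b, c), _ = curve_fit(lambda cs, a, b, c: a * cs**2 - b * cs + c, CS, CF)

# CF as an exponential in cumulative requirements issues: CF = d*exp(e*CI)
(d, e), _ = curve_fit(lambda ci, d, e: d * np.exp(e * ci), CI, CF, p0=(1.0, 0.3))

print(f"space model:  CF = {a:.4f}*CS^2 - {b:.4f}*CS + {c:.4f}")
print(f"issues model: CF = {d:.4f}*exp({e:.4f}*CI)")
```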

3.3.5 Data requirements

CF is the cumulative failure count.

CI is the cumulative requirements issues count.

CS is the cumulative memory space count.

3.3.6 Units of measurement

CF, CI are the cumulative failure count and cumulative requirements issues count: dimensionless numbers.

CS is the cumulative memory space requirement count in suitable units (e.g., words).

3.3.7 Experience

Space Shuttle Primary Avionics Software Subsystem (Schneidewind [B22]).

3.3.8 Tools

Spreadsheet or statistical analysis software.

3.4 Remaining failures (Keller and Schneidewind [B11])

3.4.1 Definitions

r(tt) are the remaining failures predicted at total test time tt:

r(tt) = (α/β) exp(–β[tt – (s – 1)])

3.4.2 Variables

r(tt) are the remaining failures.

tt is the total test time of a release or module.

3.4.3 Parameters

α is the failure rate at the beginning of interval s.

β is the negative of the derivative of the failure rate divided by the failure rate.

s is the starting interval for using observed failure data in parameter estimation.


rc is the critical value of remaining failures, specified according to the criticality of the application.

3.4.4 Application

Test whether the safety criterion, predicted remaining failures r (tt) < rc, is met.
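
A minimal illustrative sketch of this criterion check, assuming hypothetical parameter estimates and a hypothetical critical value rc, follows; it is not part of the standard.

```python
import math

def remaining_failures(alpha, beta, s, tt):
    """Predicted remaining failures after total test time tt:
    r(tt) = (alpha/beta) * exp(-beta * (tt - (s - 1)))."""
    return (alpha / beta) * math.exp(-beta * (tt - (s - 1)))

# Hypothetical parameter estimates and safety criterion r_c.
alpha, beta, s, r_c = 2.0, 0.1, 1, 1.0
tt = 40
r = remaining_failures(alpha, beta, s, tt)
print(f"r({tt}) = {r:.2f}; safety criterion r < {r_c} met: {r < r_c}")
```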

3.4.5 Data requirements

Failure counts by time interval, in the range [1,tt], where time can be execution time, clock time, or
calendar time, for estimating α, β, and s.

tt is the total test time of a release or module.

3.4.6 Units of measurement

r(tt) is the remaining failures: failure count.

tt is the total test time of a release or module: seconds, minutes, hours, and so on, as appropriate.

rc is the critical value of remaining failures: number of failures.

3.4.7 Experience

Space Shuttle Primary Avionics Software Subsystem (Keller and Schneidewind [B11]).

3.4.8 Tools

Software reliability modeling and analysis tools, spreadsheet software.

3.5 Total test time to achieve specified remaining failures (Schneidewind [B20])

3.5.1 Definition

tt is the predicted total time from the start of test required to achieve a specified number of remaining
failures r(tt):

tt = [log[α /(β [r(tt)])]] /β + (s – 1)

3.5.2 Variables

tt is the predicted total test time for a release or module.

3.5.3 Parameters

r(tt) is the specified number of remaining failures for predicting tt.

α is the failure rate at the beginning of interval s.

β is the negative of the derivative of the failure rate divided by the failure rate.

s is the starting interval for using observed failure data in parameter estimation.


3.5.4 Application

Predict total test time as a function of the reliability goal, as specified by number of remaining failures.
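
For illustration, the sketch below computes the predicted total test time from hypothetical parameter estimates; it is consistent with the remaining-failures sketch in 3.4 but is not part of the standard.

```python
import math

def total_test_time(alpha, beta, s, r_target):
    """Total test time from the start of test needed to reach r_target remaining failures:
    tt = log(alpha / (beta * r_target)) / beta + (s - 1)."""
    return math.log(alpha / (beta * r_target)) / beta + (s - 1)

# Hypothetical parameter estimates; reliability goal of one remaining failure.
alpha, beta, s = 2.0, 0.1, 1
print(f"tt to reach r = 1 remaining failure: {total_test_time(alpha, beta, s, 1.0):.1f} intervals")
```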

3.5.5 Data requirements

Failure counts by time interval, in the range [1, tt], where time can be execution time, clock time, or
calendar time, for estimating α, β, and s.

r(tt) is the specified number of remaining failures.

3.5.6 Units of measurement

r(tt) is the failure count.

tt is expressed in seconds, minutes, hours, and so on, as appropriate.

3.5.7 Experience

Space Shuttle Primary Avionics Software Subsystem, United States Navy Tomahawk cruise missile launch,
readiness validation for Trident nuclear missile, and United States Marine Corps Tactical Systems Support
Activity for distributed system software reliability assessment and prediction (Keller and
Schneidewind [B11]).

3.5.8 Tools

Software reliability modeling and analysis tools, spreadsheet software.

3.6 Network reliability (Schneidewind [B19])

3.6.1 Definition

RN is the reliability of network (probability of network surviving).

RN = RC × RS

3.6.2 Variables

RC is the reliability of nc clients (probability of all clients surviving).

RC = 1 − [Σ(i=1..nc) TCi] / (nc × T)

RS is the reliability of ns servers (probability of all servers surviving).


RS = 1 − [Σ(i=1..ns) TSi] / (ns × T)

TCi is the downtime on client i (down due to software failure and repair) during time T.

TSi is the downtime on server i (down due to software failure and repair) during time T.

3.6.3 Parameters

T is the scheduled operational time.

nc is the number of client nodes in network.

ns is the number of server nodes in network.

3.6.4 Application

Reliability assessment of networks.
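
A minimal sketch of this assessment, using hypothetical client and server downtimes, follows; it is illustrative only.

```python
def network_reliability(client_downtimes, server_downtimes, T):
    """RN = RC * RS, where RC and RS are computed from client/server downtimes during
    scheduled operational time T (downtimes and T in the same time units)."""
    nc, ns = len(client_downtimes), len(server_downtimes)
    rc = 1.0 - sum(client_downtimes) / (nc * T)
    rs = 1.0 - sum(server_downtimes) / (ns * T)
    return rc * rs

# Hypothetical downtimes (hours) for 3 clients and 2 servers over T = 100 hours.
print(f"RN = {network_reliability([2.0, 0.5, 1.5], [1.0, 0.0], 100.0):.4f}")
```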

3.6.5 Data requirements

TCi is the downtime of client i during time T.

TSi is the downtime of server i during time T.

T is the scheduled operational time.

nc is the number of client nodes in network.

ns is the number of server nodes in network.

3.6.6 Units of measurement

TCi, TSi, and T are expressed in minutes, hours, or days, as appropriate.

nc and ns are dimensionless numbers.

3.6.7 Experience

United States Marine Corps Tactical Systems Support Activity for distributed system software reliability
assessment and prediction (Schneidewind [B19]).

3.6.8 Tools

Spreadsheet software.


4. Modified reliability measures

4.1 Defect density (982 #2) (Fenton and Pfleeger [B5] and Nikora et al. [B17])

4.1.1 Definition

DD is the defect density.

DD = D / KSLOC

4.1.2 Variables

D is the number of defects (or discrepancy reports) of a specified severity, per release or module.

KSLOC is the number of source lines of executable code and non-executable data declarations in thousands
per release or module.

4.1.3 Parameters

Specified severity level.

4.1.4 Application

Track software quality across a series of releases or modules.
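
The following illustrative sketch, with hypothetical defect counts and sizes, shows how the measure might be tracked across releases; it is not part of the standard.

```python
def defect_density(defects, ksloc):
    """DD = D / KSLOC for one release or module."""
    return defects / ksloc

# Hypothetical severity-1 defect counts and sizes (KSLOC) for three successive releases.
releases = {"R1": (42, 120.0), "R2": (30, 128.0), "R3": (18, 131.0)}
for name, (d, k) in releases.items():
    print(f"{name}: DD = {defect_density(d, k):.3f} defects/KSLOC")
```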

4.1.5 Data requirements

D is the count of defects (or discrepancy reports) of a specified severity, per release or module.

KSLOC is the count of number of source lines of executable code and non-executable data declarations in
thousands per release or module.

4.1.6 Units of measurement

Dimensionless numbers.

4.1.7 Experience

Space Shuttle Primary Avionics Software Subsystem (Nikora and Munson [B16]).

4.1.8 Tools

Spreadsheet software, compiler output of source lines of executable code and non-executable data
declarations.


4.2 Test coverage index (982 #5) (Binder [B2])

4.2.1 Definition

TCI is the test coverage index.

TCI = NR / TR

4.2.2 Variables

NR is the number of requirements that have passed all tests per release or module.

TR is the total number of requirements tested per release or module.

4.2.3 Parameters

None.

4.2.4 Application

Determine whether requirements have been covered in testing.

4.2.5 Data requirements

NR is the number of requirements that have passed all tests per release or module.

TR is the total number of requirements tested per release or module.

4.2.6 Units of measurement

NR and TR are dimensionless numbers.

4.2.7 Experience

Commercial software development organization test procedures (Lyu [B12]).

4.2.8 Tools

Spreadsheet software, data management system for keeping track of requirements and test results.

4.3 Requirements compliance (982 #23) (Fischer and Walker [B6])

4.3.1 Definition

Percentage of errors due to inconsistent requirements = N1/(N1 + N2 + N3) × 100

Percentage of errors due to incomplete requirements = N2/(N1 + N2 + N3) × 100


Percentage of errors due to misinterpreted requirements = N3/(N1 + N2 + N3) × 100

4.3.2 Variables

N1 is the number of inconsistent requirements in a release or module.

N2 is the number of incomplete requirements in a release or module.

N3 is the number of misinterpreted requirements in a release or module.

4.3.3 Parameters

None.

4.3.4 Application

An analysis is made of the percentages for the respective requirements error types: inconsistencies,
incompleteness, and misinterpretation. This identifies inconsistent, incomplete, and misinterpreted
requirements, which can have an adverse effect on reliability.
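
An illustrative sketch with hypothetical counts follows; it is not part of the standard.

```python
def requirements_error_percentages(n1, n2, n3):
    """Percentages of errors due to inconsistent (N1), incomplete (N2),
    and misinterpreted (N3) requirements."""
    total = n1 + n2 + n3
    return tuple(100.0 * n / total for n in (n1, n2, n3))

# Hypothetical counts for one release.
inc, inco, mis = requirements_error_percentages(4, 7, 3)
print(f"inconsistent {inc:.1f}%, incomplete {inco:.1f}%, misinterpreted {mis:.1f}%")
```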

4.3.5 Data requirements

N1 is the count of inconsistent software requirements in a release or module, which is the number of
decomposition elements that do not accurately reflect the system requirement specification.

N2 is the count of incomplete software requirements in a release or module, which is the number of
decomposition elements that do not completely reflect the system requirement specification.

N3 is the count of misinterpreted software requirements in a release or module, which is the number of
decomposition elements that do not correctly reflect the system requirement specification.

4.3.6 Units of measurement

N1, N2, and N3 are dimensionless numbers.

4.3.7 Experience (Fischer and Walker [B6])

Shipboard tactical command center, combat weapons system, and communication system; ground-based
tactical command center; and air traffic control.

4.3.8 Tools

Spreadsheet software.

4.4 Failure rate (982 #31) (Lyu [B12])

4.4.1 Definition

f is the failure rate: the incremental failure count divided by the incremental test or operational time.

f = Δfi / Δti


4.4.2 Variables

Δti is the test or operational time expended in period i by a release or module.

Δfi is the number of failures that occur during Δti in a release or module.

4.4.3 Parameters

i is the period identifier.

n is the number of periods.

4.4.4 Application

Track software reliability over the periods 1, 2, 3, … , i, … , n to see whether it is increasing (i.e.,
decreasing failure rate) or decreasing (i.e., increasing failure rate).
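
A minimal sketch of this tracking, using hypothetical per-period data, follows; it is illustrative only.

```python
# Hypothetical per-period test times (hours) and failure counts for one release.
delta_t = [10.0, 12.0, 15.0, 15.0]   # time expended in periods 1..n
delta_f = [6, 5, 4, 2]               # failures observed in each period

rates = [f / t for f, t in zip(delta_f, delta_t)]
for i, rate in enumerate(rates, start=1):
    print(f"period {i}: failure rate = {rate:.3f} failures/hour")

# Reliability is improving when the failure rate is non-increasing across periods.
print("reliability improving:", all(a >= b for a, b in zip(rates, rates[1:])))
```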

4.4.5 Data requirements

Δti is the test or operational time expended in period i (e.g., 30 minutes in hour 1) by a release or module.

Δfi is the number of failures that occur during Δti (e.g., two failures in 30 minutes in hour 1) in a release or
module.

4.4.6 Units of measurement

Δti, the test or operational time expended in period i by a release or module, is expressed in seconds,
minutes, hours, and so on, as appropriate.

Δfi is the number of failures that occur during Δti in a release or module.

4.4.7 Experience

Military distributed processing system (Lyu [B12]).

4.4.8 Tools

Software reliability modeling and analysis tools, spreadsheet software.

5. Retained reliability measures

5.1 Fault density (982 #1) (Musa [B14] and Nikora and Munson [B16])

5.1.1 Definitions

FD is the fault density.

FD = F / KSLOC


5.1.2 Variables

FD is the fault density.

F is the number of unique faults found that resulted in failures of a specified severity level, per release or
module.

KSLOC is the number of source lines of executable code and non-executable data declarations in
thousands, per release or module.

5.1.3 Parameters

Specified severity level.

5.1.4 Application

This measure is used to perform the following functions (see the sketch following this list):

a) Establish a target fault density by severity class.
b) Compare the calculated fault density with the target value by severity class.
c) Use the comparison in item b) to determine whether sufficient testing has been completed.
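
The following sketch, with hypothetical targets and counts, illustrates the comparison in items a) through c); it is not part of the standard.

```python
def fault_density(faults, ksloc):
    """FD = F / KSLOC for one release or module."""
    return faults / ksloc

# Hypothetical targets and observed counts by severity class for one release.
targets = {"severity 1": 0.10, "severity 2": 0.50}
observed = {"severity 1": (9, 131.0), "severity 2": (72, 131.0)}   # (faults, KSLOC)
for sev, (f, k) in observed.items():
    fd = fault_density(f, k)
    verdict = "meets" if fd <= targets[sev] else "exceeds"
    print(f"{sev}: FD = {fd:.3f} (target {targets[sev]:.2f}) -> {verdict} target")
```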

5.1.5 Data requirements

F is the count of number of unique faults found that resulted in failures of a specified severity level, per
release or module.

KSLOC is the count of number of source lines of executable code and non-executable data declarations in
thousands, per release or module.

5.1.6 Units of measurement

F and KSLOC are dimensionless numbers.

5.1.7 Experience

Space Shuttle Primary Avionics Software Subsystem (Nikora and Munson [B16]).

5.1.8 Tools

Spreadsheet software, compiler output of source lines of executable code and non-executable data
declarations.


5.2 Requirements traceability (982 #7) (Fenton and Pfleeger [B5])

5.2.1 Definition

TM is the traceability measure.

TM = (R1 / R2) × 100%

If TM ≤ 100%, then no excess requirements are implemented in the release or module; else there are excess
requirements in the amount (TM – 100)%.

5.2.2 Variables

TM is the traceability measure.

R1 is the number of requirements implemented in the release or module.

R2 is the number of original requirements specified for the release or module.

5.2.3 Parameters

None.

5.2.4 Application

This measure aids in identifying requirements that are either missing from, or in addition to, the original
requirements. Missing requirements negatively affect reliability. Excess requirements negatively affect the
software development budget.

A set of mappings from the requirements in the software implementation to the original requirements is
created. Count each requirement met by the implementation (R1), and count each of the original
requirements (R2). Compute the traceability measure:

TM = (R1 / R2) × 100%
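
An illustrative sketch with hypothetical counts follows; it is not part of the standard.

```python
def traceability_measure(r1, r2):
    """TM = (R1 / R2) * 100, where R1 is the count of implemented requirements
    and R2 is the count of original requirements."""
    return 100.0 * r1 / r2

# Hypothetical counts for one release.
tm = traceability_measure(r1=108, r2=100)
if tm <= 100.0:
    print(f"TM = {tm:.1f}%: no excess requirements implemented")
else:
    print(f"TM = {tm:.1f}%: excess requirements of {tm - 100.0:.1f}%")
```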

5.2.5 Data requirements

R1 is the count of the requirements implemented in the release or module.

R2 is the count of the original requirements specified in the release or module.

5.2.6 Units of measurement

R1 and R2 are dimensionless numbers.


5.2.7 Experience

U.S. Navy’s A7 aircraft software (Henninger [B5]).

5.2.8 Tools

Spreadsheet software.

5.3 Mean time to failure (MTTF) (982 #30) (Lyu [B12] and Musa et al. [B15])

5.3.1 Definition

MTTF = (Σi ti) / n

where ti is the ith time between failures, ignoring repair times, for i ≥ 1, and n is the number of times
between failures.

5.3.2 Variables

ti is the ith time between failures of a release or module.

5.3.3 Parameters

n is the number of times between failures.

5.3.4 Application

Estimate the expected time to failure (i.e., time between failures, ignoring repair times) of a release or
module.
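
A minimal sketch, assuming hypothetical failure times recorded on a single running clock, follows; it is illustrative only.

```python
# Hypothetical failure times (hours) for one release, measured on one running clock.
failure_times = [12.0, 30.0, 55.0, 61.0, 90.0]

# Inter-failure times, ignoring repair time; the first interval starts at time zero.
intervals = [b - a for a, b in zip([0.0] + failure_times[:-1], failure_times)]
mttf = sum(intervals) / len(intervals)
print(f"MTTF = {mttf:.1f} hours over n = {len(intervals)} inter-failure times")
```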

5.3.5 Data requirements

Times when failures occur in a release or module.

5.3.6 Units of measurement

ti is expressed in seconds, minutes, hours, and so on, as appropriate.

5.3.7 Experience

Space Shuttle Primary Avionics Software Subsystem (Schneidewind [B21]).

5.3.8 Tools

Spreadsheet software.


6. New maintainability measures

6.1 Mean time to repair (MTTR) (Lyu [B12] and Musa et al. [B15])

6.1.1 Definition

MTTR = (Σi Ri) / n

where Ri is the ith repair time, for i ≥ 1, and n is the number of repairs. A repair may consist of
1) identifying and fixing the fault(s) that caused the system to fail, or 2) restoring the system to service
without identifying or repairing the fault(s) causing the failure (e.g., rebooting the system). Repairs may be
unscheduled (e.g., in response to a failure) or scheduled (as part of regular system maintenance).

6.1.2 Variables

Ri is the ith repair time of a release or module.

6.1.3 Parameters

n is the number of repair times.

6.1.4 Application

Estimate the time that will be required to repair a fault after a failure has occurred in a release or module.

6.1.5 Data requirements

Ri is the repair time.

6.1.6 Units of measurement

Ri is expressed in seconds, minutes, hours, and so on, as appropriate.

6.1.7 Experience

Large multi-release telecommunications switching system (Lyu [B12]).

6.1.8 Tools

Spreadsheet software.


6.2 Network maintainability (Schneidewind [B19])

6.2.1 Definition

MN is the maintainability of a network (probability of maintenance required on a network):

MN = MC × MS

6.2.2 Variables

MC is the maintainability of clients (probability of maintenance required on nc clients during time T).

MC = [Σ(i=1..nc) mci] / T

MS is the maintainability of servers (probability of maintenance required on ns servers during time T).

MS = [Σ(i=1..ns) msi] / T

mci is the maintenance time on client i during time T.

msi is the maintenance time on server i during time T.

6.2.3 Parameters

T is the scheduled operational time.

nc is the number of client nodes in the network.

ns is the number of server nodes in the network.

6.2.4 Application

Maintainability assessment of networks.

6.2.5 Data requirements

mci is the maintenance time on client i during time T.

msi is the maintenance time on server i during time T.

T is the scheduled operational time.

6.2.6 Units of measurement

mci, msi, and T are expressed in seconds, minutes, hours, or days, as appropriate.

nc and ns are dimensionless numbers.


6.2.7 Experience

U.S. Marine Corps Tactical Systems Support Activity for distributed system software reliability assessment
and prediction (Schneidewind [B19]).

6.2.8 Tools

Spreadsheet software.

7. New availability measures

7.1 Availability (Lyu [B12] and Musa et al. [B15])

7.1.1 Definition

Availability is the probability that software is available when needed.

Availability = MTTF / (MTTF + MTTR)

7.1.2 Variables

MTTF and MTTR.

7.1.3 Parameters

None.

7.1.4 Application

Compute the probability that software (e.g., release or module) is available when needed.
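
An illustrative sketch with hypothetical MTTF and MTTR values follows; it is not part of the standard.

```python
def availability(mttf, mttr):
    """Availability = MTTF / (MTTF + MTTR); MTTF and MTTR in the same time units."""
    return mttf / (mttf + mttr)

# Hypothetical values: MTTF from 5.3 and MTTR from 6.1, both in hours.
print(f"Availability = {availability(mttf=180.0, mttr=2.5):.4f}")
```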

7.1.5 Data requirements

MTTF (time to failure measured from end of repair to next failure; see 5.3) and MTTR (see 6.1).

7.1.6 Units of measurement

MTTF and MTTR are expressed in seconds, minutes, hours, or days, as appropriate.

7.1.7 Experience

Large multi-release telecommunications switching system (Lyu [B12]).

7.1.8 Tools

Spreadsheet software.


7.2 Network availability (Schneidewind [B19])

7.2.1 Definition

A is the network availability (probability that the network is available when needed).

A = MTTFn / (MTTFn + MTTRn) = (MTBFn − MTTRn) / MTBFn

7.2.2 Variables

7.2.2.1 Mean time between failures

MTBFn is the MTBF for a network.

MTBFn = MTBFc × [nc / (nc + ns)] + MTBFs × [ns / (nc + ns)]

(weighted average of MTBFc and MTBFs).

MTBFc is the MTBF for clients.

MTBFc = T / FC

FC is the total number of client failures during time T.

FC = Σ(i=1..nc) fci

fci is the number of failures of client i during time T.

nc is the number of client nodes in a network.

MTBFs is the MTBF for servers.

MTBFs = T / FS

FS is the total number of server failures during time T.

FS = Σ(i=1..ns) fsi

fsi is the number of failures of server i during time T.


ns is the number of server nodes in a network.

7.2.2.2 Mean time to repair

MTTRn is the MTTR for a network.

MTTRn = MTTRc × [nc / (nc + ns)] + MTTRs × [ns / (nc + ns)]

(weighted average of MTTRc and MTTRs).

MTTRc is the MTTR for clients.

MTTRc = [Σ(i=1..nc) mci] / FC

mci is the maintenance time on client i during time T.

MTTRs is the MTTR for servers.

MTTRs = [Σ(i=1..ns) msi] / FS

msi is the maintenance time on server i during time T.

7.2.3 Parameters

nc is the number of client nodes in a network.

ns is the number of server nodes in a network.

T is the scheduled operational time.

7.2.4 Application

Availability assessment of networks.
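
The following sketch computes network availability from hypothetical failure counts and repair times; it is illustrative only and not part of the standard.

```python
def network_availability(client_failures, server_failures,
                         client_repair, server_repair, T):
    """A = MTTFn / (MTTFn + MTTRn), with MTBF and MTTR taken as weighted averages
    over nc clients and ns servers during scheduled operational time T."""
    nc, ns = len(client_failures), len(server_failures)
    fc, fs = sum(client_failures), sum(server_failures)
    mtbf_c, mtbf_s = T / fc, T / fs
    mttr_c, mttr_s = sum(client_repair) / fc, sum(server_repair) / fs
    wc, ws = nc / (nc + ns), ns / (nc + ns)
    mtbf_n = mtbf_c * wc + mtbf_s * ws
    mttr_n = mttr_c * wc + mttr_s * ws
    mttf_n = mtbf_n - mttr_n
    return mttf_n / (mttf_n + mttr_n)

# Hypothetical failure counts and repair times (hours) for 3 clients and 2 servers, T = 200 hours.
print(f"A = {network_availability([2, 1, 1], [1, 1], [1.0, 0.5, 0.5], [2.0, 1.0], 200.0):.4f}")
```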

7.2.5 Data requirements

fci is the number of failures of client i during time T.

fsi is the number of failures of server i during time T.

mci is the maintenance time on client i during time T.

msi is the maintenance time on server i during time T.

T is the scheduled operational time.


7.2.6 Units of measurement

fci and fsi are failure counts.

mci, msi, and T are expressed in seconds, minutes, hours, or days, as appropriate.

nc and ns are dimensionless numbers.

7.2.7 Experience

U.S. Marine Corps Tactical Systems Support Activity for distributed system software reliability assessment
and prediction (Schneidewind [B19]).

7.2.8 Tools

Spreadsheet software.


Annex A
(informative)
Analysis of measures in IEEE Software Engineering Collection,
IEEE Std 982.1-2005, Standard Dictionary of Measures to Produce Reliable
Software

This annex, in Table A.1, provides the justification for modifying, retaining, or deleting existing measures.

The criteria that were used in deciding whether to modify, retain, or delete a measure are listed as follows.
The same criteria were applied to new measures. To be eligible for inclusion, a measure should satisfy one
or more of the following criteria:
a) A measure should have some minimum number of recognized uses
b) A measure should have demonstrated or potential utility in producing reliable, maintainable, and
available software
c) A measure should not be overly complex, difficult to understand, difficult to implement, or
require expensive tools to implement
d) A measure should be development paradigm independent (i.e., have wide applicability)
Object-oriented measures do not satisfy criteria a) and c). With respect to criterion a), there is not a minimum
number of recognized industry uses or applications. Although object-oriented measures have potential
utility in producing reliable, maintainable, and available software, they are not sufficiently mature to
include in this standard at this time.

Table A.1—Justification for modifying, retaining, or deleting measures in previous version

Each entry names the measure from the previous version, gives its disposition (Modify, Retain, or Delete), and states the rationale.

4.1 Fault density (Retain): Substantial justification and good documentation presented for this measure.

4.2 Defect density (Modify): Generalize and simplify. Use the Space Shuttle Discrepancy Report approach.

4.3 Cumulative failure profile (Delete): The accumulation of failures to an asymptote does not necessarily mean that high reliability has been achieved. Better measures are predictions of remaining failures and time to failure. The remaining failures could be disastrous.

4.4 Fault-days number (Delete): This unnormalized measure is not meaningful because the sum of fault days is not comparable among systems. One would need to divide by the number of faults, but this calculation would yield MTBF.

4.5 Functional or modular test coverage (Modify): This metric assumes a one-to-one relationship between test cases and requirements. Typically, there are many test cases per requirement; hence, the metric does not work as defined. It should be modified to show test cases per requirement passed.

4.6 Cause and effect graphing (Delete): Ambiguous. Difficult to interpret. Low usage.

4.7 Requirements traceability (Retain): This measure aids in identifying requirements that are either missing from or in addition to the original requirements.

4.8 Defect indices (Delete): Too subjective; data difficult to collect.

4.9 Error distributions (Delete): Use IEEE 1044 fault classification standard [B9].

4.10 Software maturity index (Delete): This is not a measure of maturity. It is a measure of module change rate but not a good one. The value could be negative.

4.11 Manhours per major defect detected (Delete): No convincing argument on interpretation. Large value could be due to critical module inspection or poor efficiency in inspection. Low value could be due to a few resources applied or efficient inspection.

4.12 Number of conflicting requirements (Delete): Should map from requirements to specifications. One could have multiple requirements associated with a single specification and vice versa. Example is incorrect.

4.13 Number of entries and exits per module (Delete): All that is needed is a rule about single entry and exit.

4.14 Software science measures (Delete): n1, n2, N1, and N2 proved useful on the Space Shuttle as quality discriminants. However, these measures are difficult to implement because thresholds that distinguish low from high quality must be estimated statistically. This process is beyond the skills of most practitioners. The remaining measures are very controversial.

4.15 Graph-theoretic complexity for architecture (Delete): Some components of the measures (e.g., dk) are highly subjective and depend on judgments by the user. It would be infeasible to collect the data necessary to implement the measures.

4.16 Cyclomatic complexity (Delete): There is no scientific evidence that threshold values that must be used with this measure, such as 10, have general applicability. The number 10, as suggested by McCabe [B13], applied to software developed in the National Security Agency. Also, McCabe never suggested that his measure be used for reliability, maintainability, and availability. Rather, he was applying his measure to formulating test strategies.

4.17 Minimal unit test case determination (Delete): A definition is needed for an independent path. Here is one: The concept of independent path comes from graph theory. Independent paths (i.e., fundamental circuits in graph theory) have the property that no path in the fundamental circuit matrix can be obtained by a linear combination of other paths in the matrix (Schneidewind [B18]). Also, it has been shown that confining testing to independent paths does not necessarily cover the minimal number of test cases in a given program (Evangelist [B4]).

4.18 Run reliability (Delete): “Correct results” is not defined. In order for Run Reliability Rk = Prk, independent runs must be assumed. This assumption is unrealistic. If Pr is assumed uniformly distributed, the assumption is unrealistic. If Pr is assumed not to be uniformly distributed, the data to support this assumption would be very difficult to obtain. Today, the objectives for using this measure would be obtained by using the Operational Profile.

4.19 Design structure (Delete): This measure adds non-commensurate quantities (i.e., apples and oranges). The resultant quantity is meaningless. Also, some primitives required in the computation would be difficult to collect.

4.20 Mean time to discover X The predicted time to next K faults (failures) is
the next K faults much more useful than the mean time. This new
measure is proposed to replace the current
measure. It is shown as #3.1 under “New
Measures” (Schneidewind [B20]). The equation
has been used on the following projects: Space
Shuttle flight software, Naval Surface Warfare
Center Trident software, U.S. Navy Program
Executive Office for Theater Combatants
Software Dependability Handbook, and Marine
Corps Tactical System Support Activity
distributed systems. Similar equations have been
applied to AT&T switching systems and widely
applied elsewhere (Musa [B14] and Musa et al.
[B15]).
4.21 Software purity level X Quoting from the Considerations section of the measure’s description: “The application of this measure is especially applicable for those classes of models in which the total number of remaining faults in the program cannot be predicted (e.g., Moranda’s geometric model).” Reliability prediction should not have to depend on such constraints. Furthermore, this measure would not be easy for the practitioner to interpret. Lastly, there are more meaningful predictors of reliability, such as Time to Next Failure (e.g., New Measure #3.1) and Remaining Failures (e.g., New Measure #3.3) (ANSI R-013-1992 [B1], Musa [B14], Musa et al. [B15], and Schneidewind [B20]).
4.22 Estimated number of X Quoting from IEEE Std 982: “Before seeding, a
faults remaining (by fault analysis is needed to determine the types of
seeding) faults and their relative frequency of occurrence
expected under a particular set of software
development conditions.” If a fault analysis has
already been conducted, what is the point of doing
the seeding?
4.23 Requirements X This measure retains the basic concept of
compliance identifying and computing requirements
deficiencies, but it updates the tool used for
automating the detection of deficiencies.
4.24 Test coverage X This measure is similar to 4.5, Functional or
modular test coverage. The latter has been used.
4.25 Data or information X This measure has a hodgepodge of unrelated terms
flow complexity that make no sense. Just multiplying Fan In * Fan
Out has this effect. Squaring this term exacerbates
the problem. Weighting with length confounds the
measure even more. Also, the procedure and data
flow methodologies could be considered out of
date.
4.26 Reliability growth function X Least squares is not a good reliability estimation technique. R(k) = (R(u) – A/K) does not make sense: why is a reliability growth parameter divided by K, the number of stages? R(k) and R(u) are never defined. Predicted remaining faults could be used instead. The objective could be achieved more easily with reliability growth models, which have the advantage of being implemented in software reliability tools such as those identified in Appendix A of Lyu [B12]. No units of computation are given.

4.27 Residual fault count X No computation is given, only primitives. “Software integrity” is not defined. The statement “Ideally, the software failure rate is constant between failures” makes no sense; for reliability growth, the failure rate should decrease. The measure assumes that using a nonhomogeneous Poisson process (NHPP) will solve the problem of inter-failure time dependencies.
4.28 Failure analysis using elapsed time X No computation given, only one primitive! Achieving a good fit with historical data does not mean the model will predict well in the future.
4.29 Testing sufficiency X This measure is similar to 4.5, Functional or
modular test coverage. The latter has been used.
4.30 Mean-time-to-failure X Substantial justification and good documentation
presented for this measure.
4.31 Failure rate X Instead of the theoretical failure rate given in the
measure, use the empirical failure rate computed
from (Incremental Number of
Failures)/(Incremental Test or Operational Time).
This measure would be much easier for the
practitioner to compute and apply.
4.32 Software X A measure should not rely on the use of a
documentation and source questionnaire. The measure should stand on its
listings own merits. Math models should not be a
subcharacteristic of the measure.
4.33 Rely (required software reliability) X This is not a reliability measure. It is a set of reliability attributes (e.g., severity levels).
4.34 Software release X This measure is a hodgepodge of undefined and
readiness vague primitives like “software operator
interface.” The implementation of the measure is
so impractical that it cannot be computed in the
real world.
4.35 Completeness X Covered by Requirements Compliance (982 #23)
(modified existing measure). Existing measure is
very complex to implement.
4.36 Test accuracy X This measure is much too complex for practical
application. Much of the data needed for its
computation cannot be collected. The test
coverage measure Gamma is not defined.
4.37 System performance X This measure is beyond the scope of the standard;
reliability it is not a software reliability measure. It is a
system performance measure, a function of
hardware and software.
4.38 Independent process X This measure is much too complex for practical
reliability application. Much of the data needed for its
computation cannot be collected. Assumes
programs have “no loops and no local
dependencies exist.” Assumes that a large
program is composed of “logically independent
modules which can be designed, implemented and
tested independently.”
4.39 Combined hardware X As stated in the Experience section of the
and software (system) standard: “Few hardware/software models have
operational availability been developed that can be applied to the
measurement of system operational availability.”
In addition, the equations for this measure are very
complex and difficult to use, and the data to
support the measure would be difficult to obtain.


Annex B
(informative)
Software Reliability Engineering Case Study
(Keller and Schneidewind [B11])

B.1 General

In broad terms, implementing a software reliability program is a two-phase process: (1) identifying the reliability goals and (2) testing the software to see whether it conforms to those goals. The reliability goals can be ideal (e.g., zero defects) but should have some basis in reality, reflecting tradeoffs between reliability and cost. The testing phase is more complex because it involves collecting raw defect data and using the data for assessment and prediction.

The goals of the American National Standards Institute/American Institute of Aeronautics and Astronautics
Recommended Practice on Software Reliability, R-013-1992 [B1] are as follows: Provide a common basis
for discussion among individuals within a project and across projects; remind practitioners about the many
aspects of the software reliability process that are important to consider; and advise them on how to achieve
good results and avoid bad practices. Next, the steps for implementing a Software Reliability Engineering (SRE) program are described, using the Space Shuttle as an example and keying the appropriate measures to the recommended practice.

B.2 Implementing a Software Reliability Engineering program

The following are major steps in SRE (not necessarily in chronological order):
a) State the safety criteria. This might be stated, for example, as “no failure that would result in
loss of life or mission.”
b) Collect fault and failure data. For each system, there should be a brief description of its
purpose and functions and the fault and failure data, as shown below. Days # could be hours,
minutes, as appropriate. Code the Problem Report Identification to indicate Software (S) failure,
Hardware (H) failure, or People (P) failure.
1) System Identification
2) Purpose
3) Functions
4) Days # (since start of test)
5) Problem Report Identification
6) Problem Severity
7) Failure Date
8) Module with Fault
9) Description of Problem
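To make the record structure concrete, the following is a minimal sketch (not part of the standard) of how such a fault and failure record might be captured. The Python representation, field names, and sample values are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class ProblemRecord:
    """Illustrative fault/failure record following the fields listed above."""
    system_id: str                 # 1) System Identification
    purpose: str                   # 2) Purpose
    functions: str                 # 3) Functions
    days_since_test_start: float   # 4) Days # (could be hours or minutes, as appropriate)
    problem_report_id: str         # 5) Coded S/H/P, e.g., "S-0042" for a Software failure
    severity: int                  # 6) Problem severity level (see step c)
    failure_date: str              # 7) Failure date
    faulty_module: str             # 8) Module with fault
    description: str               # 9) Description of problem

# Hypothetical example record:
record = ProblemRecord(
    system_id="OI-D", purpose="Flight software", functions="Ascent guidance",
    days_since_test_start=57, problem_report_id="S-0042", severity=2,
    failure_date="1997-11-04", faulty_module="GUID_ASC",
    description="Incorrect output under boundary input")
```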
c) Establish problem severity levels. Use a problem severity classification, such as the following:
1) Loss of life, loss of mission, abort mission
2) Degradation in performance
3) Operator annoyance


4) System OK, but documentation in error


5) Error in classifying a problem (i.e., no problem existed in the first place)

NOTE—Not all problems result in failures. 4

d) Implement the safety criteria. Two criteria for software reliability levels will be defined. Then
these criteria will be applied to the risk analysis of safety critical software. In the case of the
Shuttle example, the “risk” will represent the degree to which the occurrence of failures does
not meet required reliability levels. Three prediction quantities, remaining failures (3.4), time to
next failure (3.2), and total test time to achieve specified remaining failures (3.5), are applied in
item 1) and item 2) below. This is followed by a risk assessment based on the degree to which
the predictions satisfy the risk criteria. Finally, guidance is provided on how to interpret
reliability predictions.
If the safety goal is the reduction of failures of a specified severity to an acceptable level of risk
(Lyu [B12]), then for software to be ready to deploy, after having been tested for total time tt, it
would satisfy the following criteria:
1) Predicted remaining failures r(tt) < rc, where rc is a specified critical value.
2) Predicted time to next failure TF(tt) > tm, where tm is mission duration.
The predicted value of r(tt) would be obtained in accordance with section 3.3 Remaining
Failures of Schneidewind [B20].
The predicted value of TF(tt) would be obtained in accordance with section 3.1 Time to Next
Failure(s) of Schneidewind [B20].
For systems that are tested and operated continuously like the Shuttle, tt, TF(tt), and tm are
measured in execution time. Note that, as with any methodology for assuring software safety,
there is no guarantee that the expected level will be achieved. Rather, with these criteria, the
objective is to reduce the risk of deploying the software to a “desired” level.
Apply the remaining failures criterion. Criterion 1) sets the threshold on remaining failures that
would be satisfied to deploy the software (i.e., no more than a specified number of failures).
If we predict r(tt) ≥ rc, we would continue to test for a total time tt′ > tt that is predicted to
achieve r(tt′) < rc, using the assumption that more failures will be experienced and more faults
will be corrected so that the remaining failures will be reduced by the quantity r(tt) – r(tt′). If the
developer does not have the resources to satisfy the criterion or cannot satisfy the criterion
through additional testing, the risk of deploying the software prematurely should be assessed
(see the next section). It is known that it is impossible to demonstrate the absence of faults
(Dijkstra [B3]); however, the risk of failures occurring can be reduced to an acceptable level, as
represented by rc. This scenario is shown in Figure B.1. In case A, r(tt) < rc is predicted and the
mission begins at tt. In case B, r(tt) ≥ rc is predicted and the mission would be postponed until
the software is tested for total time tt′ when r(tt′) < rc is predicted. In both cases, criterion 2)
would also be required for the mission to begin.

4 Notes in text, tables, and figures are given for information only and do not contain requirements needed to implement the standard.


Figure B.1—Remaining failures criterion scenario

Apply the time to next failure criterion. Criterion 2 specifies that the software needs to survive
for a time greater than the duration of the mission. If TF(tt) ≤ tm is predicted, the software is
tested for a total time tt" > tt that is predicted to achieve TF(tt") > tm, using the assumption that
more failures will be experienced and faults corrected so that the time to next failure will be
increased by the quantity TF(tt") – TF(tt). Again, if it is infeasible for the developer to satisfy the
criterion for lack of resources or failure to achieve test objectives, the risk of deploying the
software prematurely should be assessed (see the next section). This scenario is shown in
Figure B.2. In case A, TF(tt) > tm is predicted and the mission begins at tt. In case B, TF(tt) ≤ tm is
predicted, and in this case, the mission would be postponed until the software is tested for total
time tt" when TF(tt") > tm is predicted. In both cases, criterion 1) would also be required for the
mission to begin. If neither criterion is satisfied, the software is tested for a time that is the
greater of tt' or tt".
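As a minimal illustration (not part of the standard) of how the two deployment criteria might be checked once the predictions are available, the following sketch assumes that r(tt) and TF(tt) have already been obtained from a reliability model such as that of Schneidewind [B20]; the function name and sample values are hypothetical.

```python
def ready_to_deploy(r_tt: float, tf_tt: float, r_c: float, t_m: float) -> bool:
    """Check both safety criteria from step d).

    r_tt  : predicted remaining failures r(tt) after total test time tt
    tf_tt : predicted time to next failure TF(tt), in the same units as t_m
    r_c   : specified critical value for remaining failures
    t_m   : mission duration
    """
    criterion_1 = r_tt < r_c    # predicted remaining failures below the threshold
    criterion_2 = tf_tt > t_m   # software predicted to survive the mission
    return criterion_1 and criterion_2

# Hypothetical values: 0.6 predicted remaining failures (rc = 1) and a predicted
# 12 days to next failure for an 8-day mission -> both criteria satisfied.
print(ready_to_deploy(r_tt=0.6, tf_tt=12.0, r_c=1.0, t_m=8.0))  # True
```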

Figure B.2—Time to Next Failure criterion scenario


e) Make a risk assessment. Safety Risk pertains to executing the software of a safety critical
system where there is the chance of injury (e.g., astronaut injury or fatality), damage (e.g.,
destruction of the Shuttle), or loss (e.g., loss of the mission), if a serious software failure occurs
during a mission.
The amount of total test time tt can be considered a measure of the degree to which software
reliability goals have been achieved. This is particularly the case for systems like the Space
Shuttle where the software is subjected to continuous and rigorous testing for several years in
multiple facilities, using a variety of operational and training scenarios (e.g., by the contractor in
Houston, by NASA in Houston for astronaut training, and by NASA at Cape Canaveral). Total
test time tt can be viewed as an input to a risk reduction process and r(tt) and TF(tt) as the
outputs, with rc and tm as “risk criteria levels” of reliability that control the process. Although total test time is not the only consideration in developing test strategies, and other important factors, such as the consequences for reliability and cost, enter into selecting test cases (Nikora et al. [B17]), total test time has nevertheless been found to be strongly positively correlated with reliability growth for the Space Shuttle (Musa [B14]).
Evaluate remaining failures risk. The mean value of the risk criterion metric (RCM) for
criterion 1 is formulated as in Equation (B.1):

RCM_r(tt) = (r(tt) − rc) / rc = (r(tt) / rc) − 1    (B.1)

Equation (B.1) is plotted in Figure B.3 as a function of tt for rc = 1, for OID, where positive,
zero, and negative values correspond to r(tt) > rc, r(tt) = rc, and r(tt) < rc, respectively. In
Figure B.3, these values correspond to the following regions: CRITICAL (i.e., above the X-axis,
predicted remaining failures are greater than the specified value); NEUTRAL (i.e., on the X-
axis, predicted remaining failures are equal to the specified value); and DESIRED (i.e., below
the X-axis, predicted remaining failures are less than the specified value, which could represent
a “safe” threshold or, in the Shuttle example, an “error-free” condition boundary). This graph is
for the Shuttle Operational Increment OID (with many years of shelf life): a software system
comprising modules and configured from a series of builds to meet Shuttle mission functional
requirements. In this example it can be seen that at approximately tt = 57, the risk transitions
from the CRITICAL region to the DESIRED region.
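A minimal sketch (illustrative only, not part of the standard) of how Equation (B.1) and its CRITICAL/NEUTRAL/DESIRED interpretation might be computed follows; the function names and the sample value are assumptions.

```python
def rcm_remaining_failures(r_tt: float, r_c: float) -> float:
    """Risk criterion metric for criterion 1, Equation (B.1):
    RCM_r(tt) = (r(tt) - rc) / rc = r(tt)/rc - 1."""
    return (r_tt - r_c) / r_c

def region(rcm: float) -> str:
    """Map the sign of an RCM value to the regions of Figure B.3 and Figure B.4."""
    if rcm > 0:
        return "CRITICAL"   # prediction worse than the specified value
    if rcm == 0:
        return "NEUTRAL"
    return "DESIRED"        # prediction better than the specified value

# Hypothetical example with rc = 1 and 0.4 predicted remaining failures:
rcm = rcm_remaining_failures(r_tt=0.4, r_c=1.0)
print(rcm, region(rcm))  # -0.6 DESIRED
```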

Figure B.3—RCM for Remaining Failures (rc = 1), OID


Evaluate Time to Next Failure Risk. Similarly the mean value of the risk criterion metric (RCM)
for criterion 2 is formulated as in Equation (B.2):

RCM_TF(tt) = (tm − TF(tt)) / tm = 1 − (TF(tt) / tm)    (B.2)

Equation (B.2) is plotted in Figure B.4 as a function of tt for tm = 8 days (a typical mission
duration), for OIC, where positive, zero, and negative risk corresponds to TF(tt) < tm, TF(tt) = tm,
and TF(tt) > tm, respectively. In Figure B.4, these values correspond to the following regions:
CRITICAL (i.e., above the X-axis, predicted time to next failure is less than the specified
value); NEUTRAL (i.e., on the X-axis, predicted time to next failure is equal to the specified
value); and DESIRED (i.e., below the X-axis, predicted time to next failure is greater than the
specified value). This graph is for the Shuttle operational increment OIC. In this example, the
RCM is in the DESIRED region at all values of tt.
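Similarly, a minimal sketch (illustrative only) of Equation (B.2); note the sign convention, in which positive values indicate that the predicted time to next failure falls short of the mission duration. The sample values are hypothetical.

```python
def rcm_time_to_next_failure(tf_tt: float, t_m: float) -> float:
    """Risk criterion metric for criterion 2, Equation (B.2):
    RCM_TF(tt) = (tm - TF(tt)) / tm = 1 - TF(tt)/tm."""
    return (t_m - tf_tt) / t_m

# Hypothetical example: an 8-day mission and a predicted 12 days to next failure.
print(rcm_time_to_next_failure(tf_tt=12.0, t_m=8.0))  # -0.5 -> DESIRED region
```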

Figure B.4—RCM for Time to Next Failure (tm = 8 days), OIC


f) Interpret Software Reliability Predictions: Successful use of statistical modeling in
predicting the reliability of a software system requires a thorough understanding of precisely
how the resulting predictions are to be interpreted and applied (Musa [B14]). The Shuttle
software (430 KLOC) is frequently modified, at the request of NASA, to add or change
capabilities using a constantly improving process. Each of these successive versions constitutes
an upgrade to the preceding software version. Each new version of the software (designated as
an Operational Increment, OI) contains software code that has been carried forward from each
of the previous versions (“previous-version subset”) as well as new code generated for that new
version (“new-version subset”). We have found that by applying a reliability model
independently to the code subsets we can obtain satisfactory composite predictions for the total
version.
It is essential to recognize that this approach requires an accurate code change history so that
every failure can be uniquely attributed to the version in which the defective line(s) of code
were first introduced. In this way, it is possible to build a separate failure history for the new
code in each release. To apply SRE to your software system, you should consider breaking your
systems and processes down into smaller elements to which a reliability model can be more
accurately applied. Using this approach, we have been successful in applying SRE to predict the
reliability of the Shuttle software for NASA.
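As a rough sketch of the version-attribution bookkeeping described above (an assumption about how the data could be organized, not the authors' actual tooling), each failure is attributed to the operational increment (OI) in which the defective code was first introduced, so that a separate failure history can be built for the new-code subset of each release. The version labels in the example are hypothetical.

```python
from collections import defaultdict

def failure_histories_by_version(failures):
    """Group failure records by the version that introduced the defective code.

    `failures` is an iterable of (failure_time, introducing_version) pairs,
    where introducing_version comes from the code change history.
    Returns {version: sorted list of failure times}, one history per new-code subset.
    """
    histories = defaultdict(list)
    for failure_time, introducing_version in failures:
        histories[introducing_version].append(failure_time)
    return {version: sorted(times) for version, times in histories.items()}

# Hypothetical data: failure times (execution days) attributed via the change history.
example = [(12.0, "OI-A"), (30.5, "OI-B"), (41.0, "OI-A"), (57.3, "OI-C")]
print(failure_histories_by_version(example))
```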


Annex C

(informative)

Glossary

For the purposes of this document, the following terms and definitions apply. These and other terms within
IEEE standards are found in The Authoritative Dictionary of IEEE Standards Terms [B8]. 5

availability: The degree to which a system or component is operational and accessible when required for
use. Often expressed as a probability. (adapted from IEEE Std 610.12TM-1990 [B10])

failure: The inability of a system or component to perform its required functions within specified
performance requirements. (adapted from IEEE Std 610.12-1990 [B10])

fault: An incorrect step, process, or data definition in a computer program. (adapted from
IEEE Std 610.12-1990 [B10])

reliability: The ability of a system or component to perform its required functions under stated conditions
for a specified period of time. (adapted from IEEE Std 610.12-1990 [B10])

5 IEEE publications are available from the Institute of Electrical and Electronics Engineers, Inc., 445 Hoes Lane, Piscataway, NJ 08854, USA (http://standards.ieee.org/).


Annex D

(informative)

Bibliography

[B1] ANSI/AIAA R-013-1992, Recommended Practice for Software Reliability. 6

[B2] Binder, R. V., Testing Object-Oriented Systems: Models, Patterns, and Tools. Reading, MA:
Addison-Wesley, 2000.

[B3] Dijkstra, E., “Structured programming,” In: Buxton, J.N., and Randell, B., eds. Software Engineering Techniques. Brussels, Belgium: NATO Scientific Affairs Division, April 1970, pp. 84–88.

[B4] Evangelist, W. M., “Software complexity metric sensitivity to program restructuring rules,” Journal of Systems and Software, vol. 3, pp. 231–243, 1983.

[B5] Fenton, N. E. and Pfleeger, S. L., Software Metrics: A Rigorous & Practical Approach, 2d ed.
Boston, MA: PWS Publishing Company, 1997.

[B6] Fischer, K. F. and Walker, M. G., “Improved software reliability through requirement verification,”
IEEE Transactions on Reliability, vol. 28, no. 3, pp. 233–239, August 1979.

[B7] Henninger, K., “Specifying software requirements for complex systems, new techniques and their
application,” IEEE Transactions on Software Engineering, vol. SE-6, no. 1, pp. 1–14, Jan. 1980.

[B8] IEEE 100, The Authoritative Dictionary of IEEE Standards Terms, Seventh Edition, New York,
Institute of Electrical and Electronics Engineers, Inc. 7

[B9] IEEE Std 1044TM-1993, IEEE Standard Classification for Software Anomalies. 8

[B10] IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology.

[B11] Keller, T. and Schneidewind, N. F., “A successful application of software reliability engineering for
the NASA Space Shuttle,” Software Reliability Engineering Case Studies, International Symposium on
Software Reliability Engineering, Albuquerque, NM, pp. 71–82, Nov. 4, 1997.

[B12] Lyu, M. R., Handbook of Software Reliability Engineering. New York: IEEE Computer Society
Press and McGraw-Hill, 1996.

[B13] McCabe, T. J., “A complexity measure,” IEEE Transactions on Software Engineering, vol. SE-2, no.
4, pp. 308–320, Dec. 1976.

[B14] Musa, J. D., Software Reliability Engineering: More Reliable Software, Faster Development and
Testing. New York: McGraw-Hill, 1999.

6 ANSI publications are available from the Sales Department, American National Standards Institute, 25 West 43rd Street, 4th Floor, New York, NY 10036, USA (http://www.ansi.org/).
7 IEEE publications are available from the Institute of Electrical and Electronics Engineers, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331, USA (http://standards.ieee.org/).
8 The IEEE standards or products referred to in this clause are trademarks of the Institute of Electrical and Electronics Engineers, Inc.


[B15] Musa, J. D., et al., Software Reliability: Measurement, Prediction, Application. New York:
McGraw-Hill, 1987.

[B16] Nikora, A. and Munson, J., “Determining fault insertion rates for evolving software systems,”
Proceedings of the Ninth International Symposium on Software Reliability Engineering, Paderborn,
Germany, pp. 306–315, Nov. 4–7, 1998.

[B17] Nikora, A., Schneidewind, N., and Munson, J., “IV&V issues in achieving high reliability and safety
in critical control software,” Final Report, Volume 1—Measuring and Evaluating the Software
Maintenance Process and Metrics-Based Software Quality Control, Volume 2—Measuring Defect Insertion
Rates and Risk of Exposure to Residual Defects in Evolving Software Systems, and Volume 3—
Appendices, Jet Propulsion Laboratory, National Aeronautics and Space Administration, Pasadena, CA,
Jan 19, 1998.

[B18] Schneidewind, N. F., “Application of program graphs and complexity analysis to software
development and testing,” IEEE Transactions on Reliability, vol. R-28, no. 3, pp. 192–198, Aug. 1979.

[B19] Schneidewind, N. F., “Software reliability engineering for client-server systems,” Proceedings of
The Seventh International Symposium on Software Reliability Engineering, White Plains, NY, pp. 226–235,
Oct. 30–Nov. 2, 1996.

[B20] Schneidewind, N. F., “Reliability modeling for safety critical software,” IEEE Transactions on
Reliability, vol. 46, no. 1, pp. 88–98, Mar. 1997.

[B21] Schneidewind, N. F., “Measuring and evaluating maintenance process using reliability, risk, and test
metrics,” IEEE Transactions on Software Engineering, vol. 25, no. 6, pp. 768–781, Nov./Dec. 1999.

[B22] Schneidewind, N. F., “Investigation of the risk to software reliability and maintainability of
requirements changes,” Proceedings of the International Conference on Software Maintenance, Florence,
Italy, pp. 127–136, Nov. 7–9, 2001.

[B23] Webster’s New Collegiate Dictionary. Springfield, MA: Merriam-Webster, Inc.

