Software Quality Metrics
“You can’t control what you can’t measure”
Tom DeMarco
Measurement, Measures, Metrics
Measurement
◦ is the act of obtaining a measure
Measure
◦ provides a quantitative indication of the size of some product or
process attribute, e.g., the number of errors
Metric
◦ is a quantitative measure of the degree to which a system,
component, or process possesses a given attribute (IEEE Software
Engineering Standards, 1993). Applied to software quality, e.g., the number of
errors found per person-hour expended
IEEE definitions of
software quality metrics
(1) A quantitative measure of the degree to
which an item possesses a given quality
attribute.
(2) A function whose inputs are software
data and whose output is a single
numerical value that can be interpreted as
the degree to which the software
possesses a given quality attribute.
Objectives of quality measurement
1. Facilitate management control, planning and
managerial intervention.
Based on:
· Deviations of actual from planned performance.
· Deviations of actual timetable and budget
performance from planned.
2. Identify situations for development or maintenance
process improvement (preventive or corrective
actions). Based on:
· Accumulation of metrics information regarding the
performance of teams, units, etc.
Software quality
metrics — Requirements
General requirements
Relevant
Valid
Reliable
Comprehensive
Mutually exclusive
Operative requirements
Easy and simple
Does not require independent data collection
Immune to biased interventions by interested parties
What to measure
• Process
Measure the efficacy of processes. What works, what
doesn't.
• Project
Assess the status of projects. Track risk. Identify problem
areas. Adjust work flow.
• Product
Measure predefined product attributes (generally related
to ISO9126 Software Characteristics)
Classification of software quality
metrics
Three kinds of software quality metrics:
◦ Product metrics – describe the characteristics of the product, such as
size, complexity, design features, performance, and quality level
◦ Process metrics – describe characteristics of the development and
maintenance process, e.g., the efficacy of error removal
◦ Project metrics – describe characteristics of the project and its
execution, e.g., schedule, cost, and productivity
Software size (volume) measures
KLOC — classic metric that measures the
size of software by thousands of code lines.
Number of function points (NFP) — a
measure of the development resources
(human resources) required to develop a
program, based on the functionality
specified for the software system
Process metrics
S/W development process metrics fall into
one of the following categories:
1-Software process quality metrics
Error density metrics
Error severity metrics
Error removal effectiveness metrics
2- Software process timetable metrics
3- Software process productivity metrics
Software process quality metrics
Software process quality metrics may be
classified into three classes:
Error density metrics
Error severity metrics
Error removal effectiveness metrics
Error density metrics
Calculation of error density metrics involves two
measures:
1- Software volume measures. Some density metrics use
the number of lines of code while others apply function
points.
2- Errors counted measures. Some relate to the number
of errors (NCE) and others to the weighted number of
errors (WCE). The weighted measure classifies the
detected errors into severity classes and then assigns
each class a relative weight.
Example
A software development department applies two alternative
measures, NCE and WCE, to the code errors detected in
its software development projects. Three classes of error
severity and their relative weights are also defined.
There were 42 low-severity errors, 17 medium-severity
errors, and 11 high-severity errors.
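The NCE and WCE calculation for this example can be sketched as follows. The severity weights used below (1 for low, 3 for medium, 9 for high) are an illustrative assumption, since the slides' weights table is not reproduced here:

```python
# Error counts by severity class, from the example above.
error_counts = {"low": 42, "medium": 17, "high": 11}

# Assumed relative weights per severity class (1/3/9 is illustrative,
# not taken from the original slides).
weights = {"low": 1, "medium": 3, "high": 9}

# NCE: plain (unweighted) number of code errors.
nce = sum(error_counts.values())

# WCE: weighted number of code errors.
wce = sum(weights[s] * n for s, n in error_counts.items())

print(nce)  # 70
print(wce)  # 192 under the assumed weights
```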
Calculation of NCE and WCE
NCE = 42 + 17 + 11 = 70. WCE is obtained by multiplying the error
count in each severity class by that class's relative weight and
summing the products.
Code | Name | Calculation formula
CED  | Code Error Density                           | CED  = NCE / KLOC
DED  | Development Error Density                    | DED  = NDE / KLOC
WCED | Weighted Code Error Density                  | WCED = WCE / KLOC
WDED | Weighted Development Error Density           | WDED = WDE / KLOC
WCEF | Weighted Code Errors per Function Point      | WCEF = WCE / NFP
WDEF | Weighted Development Errors per Function Point | WDEF = WDE / NFP
NCE = the number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in the development
process.
WCE = weighted total of code errors detected by code inspections and testing.
WDE = weighted total of development (design and code) errors detected in the development process.
Calculation of
CED and WCED — Example
•The unit determined the following indicators of unacceptable software
quality: CED > 2 and WCED > 4.
•The software system size is 40 KLOC. Calculation of the two metrics
resulted in the following:
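A sketch of this calculation, using NCE = 70 from the earlier example and assuming illustrative severity weights of 1/3/9 (the slides' weights table is not shown), which gives WCE = 42·1 + 17·3 + 11·9 = 192:

```python
KLOC = 40  # system size in thousands of lines of code

nce = 42 + 17 + 11            # number of code errors = 70
wce = 42*1 + 17*3 + 11*9      # weighted code errors = 192 (assumed weights)

ced = nce / KLOC              # code error density
wced = wce / KLOC             # weighted code error density

# Compare against the unit's indicators of unacceptable quality.
print(f"CED  = {ced:.2f}  (unacceptable if > 2)")   # 1.75 -> acceptable
print(f"WCED = {wced:.2f} (unacceptable if > 4)")   # 4.80 -> unacceptable
```

Note that the two metrics can disagree: the plain error density is acceptable while the weighted density, which emphasizes severe errors, is not.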
Error severity metrics
The metrics belonging to this group are
used to detect adverse situations of
increasing numbers of severe errors in
situations where errors and weighted
errors, as measured by error density
metrics, are generally decreasing.
Two error severity metrics are presented as
follows:
Code | Name | Calculation formula
ASCE | Average Severity of Code Errors        | ASCE = WCE / NCE
ASDE | Average Severity of Development Errors | ASDE = WDE / NDE
NCE = the number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in
the development process.
WCE = weighted total of code errors detected by code inspections and testing.
WDE = weighted total of development (design and code) errors detected in the
development process.
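Continuing the running example, the average severity of code errors can be sketched as follows (WCE = 192 again assumes the illustrative 1/3/9 severity weights):

```python
# From the earlier example: 70 code errors, weighted total of 192
# under assumed severity weights of 1/3/9.
nce = 70
wce = 192

asce = wce / nce  # average severity of code errors
print(round(asce, 2))  # 2.74
```

A rising ASCE over successive releases would signal the adverse situation described above: severe errors growing in proportion even while overall error density falls.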
Error removal
effectiveness metrics
• Software developers can measure the effectiveness of error
removal by the software quality assurance system after a
period of regular operation (usually 6 or 12 months) of the
system.
• The metrics combine the error records of the development
stage with the failure records compiled during the first year
(or any defined period) of regular operation. Two error
removal effectiveness metrics are as follows.
Code | Name | Calculation formula
DERE  | Development Errors Removal Effectiveness          | DERE  = NDE / (NDE + NYF)
DWERE | Development Weighted Errors Removal Effectiveness | DWERE = WDE / (WDE + WYF)
NDE = total number of development (design and code) errors detected in the
development process.
WDE = weighted total of development (design and code) errors detected in the
development process.
NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during a year of maintenance
service.
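A minimal sketch of the DERE calculation, using hypothetical figures (not taken from the slides):

```python
# Illustrative figures (hypothetical):
nde = 160   # development errors detected before release
nyf = 40    # failures detected during the first year of regular operation

# Fraction of all known errors that were removed during development.
dere = nde / (nde + nyf)
print(dere)  # 0.8 -> 80% of errors were caught before regular operation
```

DWERE follows the same shape with weighted counts (WDE and WYF), so it emphasizes how effectively the *severe* errors were removed.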
Software process
timetable metrics
• Software process timetable metrics may be based on
counts of success (completion of milestones per schedule)
in addition to failure events (non-completion per schedule).
• An alternative approach calculates the average delay in
completion of milestones.
• The TTO (Timetable Observance) and ADMC (Average Delay of
Milestone Completion) metrics are based on data for all
relevant milestones scheduled in the project plan. In other
words, only milestones that were designated for completion
in the project plan stage are considered in the metrics'
computation. Therefore, these metrics can be applied
throughout development and need not wait for the project's
completion.
Code | Name | Calculation formula
TTO  | Timetable Observance                  | TTO  = MSOT / MS
ADMC | Average Delay of Milestone Completion | ADMC = TCDAM / MS
MSOT = number of milestones completed on time.
MS = total number of relevant milestones.
TCDAM = total completion delays (days, weeks, etc.) for all milestones.
NHYNOT = Number of yearly HD calls completed on time during one year of service.
NHYC = the number of HD calls during a year of service.
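The timetable metrics above can be sketched with hypothetical project data (the figures below are illustrative, not from the slides):

```python
# Hypothetical milestone data for one project:
ms = 25      # total milestones designated in the project plan
msot = 20    # milestones completed on time
tcdam = 45   # total completion delay across all milestones, in days

tto = msot / ms      # timetable observance (fraction on time)
admc = tcdam / ms    # average delay of milestone completion, in days

print(tto)   # 0.8
print(admc)  # 1.8 days per milestone
```

Note that ADMC averages the delay over *all* milestones, including those completed on time, which keeps the two metrics on a common denominator.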
HD productivity and effectiveness
metrics
Productivity metrics relate to the total
resources invested during a specified
period, while effectiveness metrics relate to
the resources invested in responding to an
HD customer call.
HD productivity metrics
HD productivity metrics make use of the easy-to-
apply KLMC measure of the maintained software
system's size, or alternatively of a function point
evaluation of the software system. Two HD
productivity metrics are presented as follows.
HD effectiveness metrics
The metrics in this group refer to the resources
invested in responding to customers’ HD calls.
Code | Name | Calculation formula
HDP  | HD Productivity                | HDP  = HDYH / KLMC
FHDP | Function point HD Productivity | FHDP = HDYH / NMFP
HDYH = total yearly working hours invested in HD servicing of the software system.
KLMC = thousands of lines of maintained software code.
NMFP = number of function points to be maintained.
NHYC = the number of HD calls during a year of service.
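A sketch of the HDP calculation with hypothetical help desk figures (illustrative only, not from the slides):

```python
# Hypothetical HD (help desk) figures for one year of service:
hdyh = 2000   # total yearly working hours invested in HD servicing
klmc = 500    # thousands of lines of maintained code

hdp = hdyh / klmc  # HD productivity: hours per KLMC per year
print(hdp)  # 4.0
```

A lower HDP indicates higher productivity, since fewer service hours are needed per thousand lines of maintained code.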
Corrective maintenance quality
metrics
Software corrective maintenance metrics deal
with several aspects of the quality of maintenance
services. A distinction must be drawn between
software system failures treated by the
maintenance teams and failures of the
maintenance service itself, i.e., cases where the
maintenance team failed to provide a repair that meets
the designated standards or contract
requirements. Thus, software maintenance
metrics are classified as follows:
Corrective maintenance quality
metrics
■ Software system failures density metrics – deal with the extent of
demand for corrective maintenance, based on the records of
failures identified during regular operation of the software system.
■ Software system failures severity metrics – deal with the severity
of software system failures attended to by the corrective
maintenance team.
■ Failures of maintenance services metrics – deal with cases
where maintenance services were unable to complete the failure
correction on time or that the correction performed failed.
■ Software system availability metrics – deal with the extent of
disturbances caused to the customer as realized by periods of time
where the services of the software system are unavailable or only
partly available.
Code Name Calculation Formula