Software Quality Metrics Overview
Types of Software Metrics
Possible variations in counting LOC
Count only executable lines
Count executable lines plus data definitions
Count executable lines, data definitions, and comments
Count executable lines, data definitions, comments, and job control language
Count lines as physical lines on an input screen
Count lines as terminated by logical delimiters
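The counting variations above can be sketched in a short Python function. This is a simplified illustration only: the `//` comment marker and the semicolon statement terminator are assumptions borrowed from C-like languages, not part of any LOC standard.

```python
def count_loc(lines, include_comments=False, logical=False):
    """Count lines of code under different operational definitions.

    lines: list of physical source lines.
    include_comments: also count comment-only lines.
    logical: count statements terminated by ';' instead of physical lines.
    """
    code, comments, statements = 0, 0, 0
    for line in lines:
        stripped = line.strip()
        if not stripped:
            continue  # blank lines are never counted
        if stripped.startswith("//"):
            comments += 1
            continue
        code += 1
        statements += stripped.count(";")  # crude logical-line proxy
    if logical:
        return statements
    return code + comments if include_comments else code

source = [
    "int x = 1; int y = 2;",   # one physical line, two logical statements
    "// initialize counters",
    "x = x + y;",
]
print(count_loc(source))                         # physical code lines -> 2
print(count_loc(source, include_comments=True))  # -> 3
print(count_loc(source, logical=True))           # logical statements -> 3
```

The same three-line fragment yields 2, 3, or 3 lines depending on the definition chosen, which is exactly why comparisons across operational definitions break down.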
Lines of Code (Cont’d)
Other difficulties
LOC measures are language dependent
Comparisons cannot be made when
different languages or different
operational definitions of LOC are used
For productivity studies the problems in using
LOC are greater since LOC is negatively
correlated with design efficiency
Code enhancements and revisions complicate
the situation – the defect rate must be
calculated for new and changed lines of code only
Defect Rate for New and
Changed Lines of Code
Depends on the availability of LOC
counts for both the entire product
and the new and changed code
Depends on tracking defects to the
release origin (the portion of code that
contains the defects) and at what
release that code was added, changed,
or enhanced
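Given those two prerequisites, the metric itself is simple: defects traced to new and changed code divided by the size of that code. A minimal sketch with hypothetical numbers:

```python
def defect_rate_new_changed(defects_in_new_changed, new_changed_loc):
    """Defects per KLOC, counting only new and changed lines.

    defects_in_new_changed: defects traced (by release origin) to code
    added or changed in this release.
    new_changed_loc: size of that new and changed code in LOC.
    """
    kloc = new_changed_loc / 1000.0
    return defects_in_new_changed / kloc

# Hypothetical release: 60 defects traced to 40,000 new/changed LOC
print(defect_rate_new_changed(60, 40_000))  # -> 1.5 defects/KLOC
```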
Function Points
The 14 System Characteristics
1. Data Communications
2. Distributed functions
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End-user efficiency
The 14 System Characteristics
(Cont’d)
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitation of change
The 14 System Characteristics
(Cont’d)
VAF is 0.65 plus the sum of the 14
characteristic ratings divided by 100:
VAF = 0.65 + (sum of ratings) / 100
Notice that if an average rating (2.5 on
the 0-5 scale) is given to each of the 14
factors, their sum is 35 and therefore
VAF = 1
The final function point total is then the
function count multiplied by VAF
FP = FC x VAF
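The whole adjustment can be written out in a few lines; the function count of 100 below is a hypothetical example:

```python
def function_points(fc, characteristic_ratings):
    """FP = FC x VAF, where VAF = 0.65 + (sum of the 14 ratings) / 100.

    fc: unadjusted function count.
    characteristic_ratings: the 14 degree-of-influence ratings,
    each on a 0-5 scale.
    """
    assert len(characteristic_ratings) == 14
    vaf = 0.65 + sum(characteristic_ratings) / 100.0
    return fc * vaf

# Average rating of 2.5 on all 14 factors: sum = 35, so VAF = 1
print(function_points(100, [2.5] * 14))  # ~ 100.0
```

With VAF = 1 the adjusted total equals the raw function count; ratings above or below average scale it up or down by at most 35 percent in either direction.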
Customer Problems Metric
Customer problems are all the difficulties
customers encounter when using the product.
They include:
Valid defects
Usability problems
Unclear documentation or information
Duplicates of valid defects (problems already fixed
but not known to customer)
User errors
The problem metric is usually expressed in
terms of problems per user month (PUM)
Customer Problems Metric
(Cont’d)
PUM = (Total problems that customers
reported for a time period) / (Total
number of license-months of the
software during the period)
where
Number of license-months = Number
of installed licenses of the software
x Number of months in the
calculation period
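The formula translates directly into code; the problem count and license figures below are hypothetical:

```python
def pum(total_problems, installed_licenses, months):
    """Problems per user-month.

    PUM = total problems reported in the period /
          (installed licenses x months in the period)
    """
    license_months = installed_licenses * months
    return total_problems / license_months

# Hypothetical quarter: 250 problems, 1,000 installed licenses, 3 months
print(pum(250, 1000, 3))  # ~ 0.083 problems per user-month
```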
Approaches to Achieving a
Low PUM
Improve the development process and
reduce the product defects.
Reduce the non-defect-oriented
problems by improving all aspects of the
products (e.g., usability, documentation),
customer education, and support.
Increase the sale (number of installed
licenses) of the product.
Defect Rate and Customer
Problems Metrics
                          Defect Rate                    Problems per User-Month (PUM)
Numerator                 Valid and unique               All customer problems (defects and
                          product defects                nondefects, first time and repeated)
Denominator               Size of product                Customer usage of the product
                          (KLOC or function point)       (user-months)
Measurement perspective   Producer (software             Customer
                          development organization)
Scope                     Intrinsic product quality      Intrinsic product quality plus
                                                         other factors
Customer Satisfaction Metrics
Scope diagram: defects are a subset of
customer problems, which in turn are a
subset of customer satisfaction issues
Customer Satisfaction Metrics
(Cont’d)
Customer satisfaction is often measured
by customer survey data via the five-
point scale:
Very satisfied
Satisfied
Neutral
Dissatisfied
Very dissatisfied
IBM Parameters of Customer
Satisfaction
CUPRIMDSO
Capability (functionality)
Usability
Performance
Reliability
Installability
Maintainability
Documentation
Service
Overall
HP Parameters of Customer
Satisfaction
FURPS
Functionality
Usability
Reliability
Performance
Service
Example Metrics for
Customer Satisfaction
1. Percent of completely satisfied
customers
2. Percent of satisfied customers
(satisfied and completely satisfied)
3. Percent of dissatisfied customers
(dissatisfied and completely
dissatisfied)
4. Percent of nonsatisfied customers
(neutral, dissatisfied, and completely
dissatisfied)
In-Process Quality Metrics
Motorola
Follows the Goal/Question/Metric paradigm of
Basili and Weiss
Goals:
1. Improve project planning
2. Increase defect containment
3. Increase software reliability
4. Decrease software defect density
5. Improve customer service
6. Reduce the cost of nonconformance
7. Increase software productivity
Examples of Metrics Programs
(Cont’d)
Motorola (cont’d)
Measurement Areas
Delivered defects and delivered defects per
size
Total effectiveness throughout the process
Adherence to schedule
Accuracy of estimates
Number of open customer problems
Time that problems remain open
Cost of nonconformance
Software reliability
Examples of Metrics Programs
(Cont’d)
Motorola (cont’d)
For each goal the questions to be asked
and the corresponding metrics were
formulated:
Goal 1: Improve Project Planning
Question 1.1: What was the accuracy of
estimating the actual value of project
schedule?
Metric 1.1: Schedule Estimation Accuracy
(SEA)
SEA = (Actual project duration)/(Estimated
project duration)
Examples of Metrics Programs
(Cont’d)
Hewlett-Packard
The software metrics program includes
both primitive and computed metrics.
Primitive metrics are directly measurable
Computed metrics are mathematical
combinations of primitive metrics
(Average fixed defects)/(working day)
(Average engineering hours)/(fixed defect)
(Average reported defects)/(working day)
Bang – A quantitative indicator of net usable
function from the user’s point of view
Examples of Metrics Programs
(Cont’d)
Hewlett-Packard (cont’d)
Computed metrics are mathematical
combinations of primitive metrics (cont’d)
(Branches covered)/(total branches)
Defects/KNCSS (thousand noncomment source
statements)
Defects/LOD (lines of documentation not
included in program source code)
Defects/(testing time)
Design weight – sum of module weights
(function of token and decision counts) over the
set of all modules in the design
Examples of Metrics Programs
(Cont’d)
Hewlett-Packard (cont’d)
Computed metrics are mathematical
combinations of primitive metrics (cont’d)
NCSS/(engineering month)
Percent overtime – (average overtime)/(40
hours per week)
Phase – (engineering months)/(total
engineering months)
Examples of Metrics Programs
(Cont’d)
IBM Rochester
Selected quality metrics
Overall customer satisfaction
Postrelease defect rates
Customer problem calls per month
Fix response time
Number of defect fixes
Backlog management index
Postrelease arrival patterns for defects and
problems
Examples of Metrics Programs
(Cont’d)
IBM Rochester (cont’d)
Selected quality metrics (cont’d)
Defect removal model for the software
development process
Phase effectiveness
Inspection coverage and effort
Compile failures and build/integration defects
Weekly defect arrivals and backlog during
testing
Defect severity
Examples of Metrics Programs
(Cont’d)
IBM Rochester (cont’d)
Selected quality metrics (cont’d)
Defect cause and problem component analysis
Reliability (mean time to initial program loading
during testing)
Stress level of the system during testing
Number of system crashes and hangs during
stress testing and system testing
Various customer feedback metrics
S curves for project progress
Collecting Software
Engineering Data
The challenge is to collect the necessary
data without placing a significant burden
on development teams.
Limit the metrics collected to those
that are actually needed.
Automate the data collection whenever
possible.
Data Collection Methodology
(Basili and Weiss)
1. Establish the goal of the data
collection
2. Develop a list of questions of interest
3. Establish data categories
4. Design and test data collection forms
5. Collect and validate data
6. Analyze data
Reliability of Defect Data