Chapter 4
Using Metrics To Manage Software Risks
1. Introduction
2. Software Measurement Concepts
3. Case Study: Measuring Maintainability
4. Metrics and Quality
1. Introduction
Definition
Measurement is the process by which numbers or symbols
are assigned to attributes of entities in the real world so
as to describe them according to specified rules.
-There are two broad uses of measurement: assessment and prediction:
•Predictive measurement of some attribute A relies on a mathematical model relating A to
some existing measures of attributes A1, …, An.
•Assessment is more straightforward and relies on the current status of the attribute.
-There are three classes of software metrics: process, product, and project:
•Process metrics measure process effectiveness; example: defect removal effectiveness.
•Product metrics measure product characteristics such as size, cost, defect count, etc.
•Project metrics are used to keep track of project execution; examples: development time,
development effort, productivity, etc.
-Software metrics provide a quantitative vehicle for evaluating and
managing quality factors and risks related to a given software
product.
-The software artifacts to which metrics apply include analysis and design models, as
well as program code.
-Metrics can be used at early stages as leading quality indicators of
the software architecture design. They can also be used to drive an
iterative design process (such as the Rational Unified Process).
-Metrics may be collected either dynamically or statically.
-Dynamic metrics require execution of the software system, which restricts their
applicability to later phases of development.
-Static metrics, in contrast, can be collected and used at early stages of design.
2. Software Measurement Concepts
-Measurement always targets a specific software attribute or concept:
•Examples: complexity, cohesion, coupling, size, time, effort, maintainability, etc.
-In software measurement studies, a distinction is made between internal and external
attributes:
•Internal attributes are those which can be measured purely in terms of the product,
process, or project itself. Examples: size for a product and elapsed time for a process.
•External attributes are those which can only be measured with respect to how the
product, process, or project relates to other entities in its environment.
Examples: reliability for a product and productivity for a project (e.g., its people).
-Software managers and users would like to measure and predict external attributes.
•External attributes are easy to interpret but hard to measure directly, while
internal attributes are hard to interpret but relatively easy to collect directly.
-In practice, measurements of external attributes are derived indirectly from internal
measures, through correlation or statistical analysis such as regression or Bayesian
probabilistic models.
•Example:
Product Cost = f(effort, time); Effort (in person-months) = g(size)
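The indirect derivation above can be sketched in code. This is a minimal sketch, not a calibrated model: the effort function uses COCOMO-style constants (2.4, 1.05) purely for illustration, and the monthly labor rate is an assumed parameter.

```java
// Sketch of indirect (predictive) measurement: external attributes derived
// from internal ones via a mathematical model. Constants are illustrative.
public class CostModel {
    // Effort (person-months) as a function of size in KLOC, COCOMO-style:
    // effort = a * size^b. The constants 2.4 and 1.05 are assumed, not calibrated.
    static double effort(double kloc) {
        return 2.4 * Math.pow(kloc, 1.05);
    }

    // Product cost as a function of effort and an assumed monthly labor rate.
    static double cost(double effortPm, double ratePerMonth) {
        return effortPm * ratePerMonth;
    }

    public static void main(String[] args) {
        double e = effort(10.0); // estimated person-months for a 10 KLOC product
        System.out.printf("effort = %.1f pm, cost = %.0f%n", e, cost(e, 8000.0));
    }
}
```

The point is only the shape of the computation: an external attribute (cost) is never measured directly, but computed from internal measures (size) through intermediate models.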
[Figure: external attributes and the internal measures they relate to]

External attributes: Maintainability, Reliability, Usability, Performance
Internal measures: number of procedure parameters; cyclomatic complexity; program size
(in lines of code); number of error messages; length of user manual; number of daily
requests received; number of daily requests processed
3. Case Study: Measuring Maintainability
-Important aspects of maintainability include understandability, flexibility, reusability,
and testability.
•Complex code is difficult to understand, and thereby to maintain and evolve. Complex
code also increases the cost of testing, because the likelihood of faults is higher.
-Complexity is mastered by applying the principle of "divide and conquer", which
typically underlies another common design principle, namely modular design.
-Good modular design requires high cohesion within modules and low coupling between them:
•Low cohesion means more complexity.
•Strong coupling means reduced reusability.
-Several software product metrics have been proposed to evaluate the
complexity factors that affect the creation, comprehension,
modification, and maintenance of a piece of software.
Metric                                  Available at design  Constructs/Concepts
Cyclomatic complexity (CC)              N                    Method / Complexity
Lines of code (LOC)                     N                    Method / Size, complexity
Comment percentage (CP)                 N                    Method / Complexity
Weighted methods per class (WMC)        Y                    Class, Method / Complexity
Response for a class (RFC)              N                    Class, Method / Complexity
Lack of cohesion of methods (LCOM)      N                    Class / Cohesion
Coupling between object classes (CBO)   Y                    Class / Coupling
Depth of inheritance tree (DIT)         Y                    Inheritance / Complexity
Number of children (NOC)                Y                    Inheritance / Complexity
Cyclomatic Complexity (CC)
-Also called the McCabe complexity metric.
-Evaluates the complexity of the algorithms involved in a method.
-Gives a count of the number of test cases needed to test a method comprehensively.
-Uses a control flow graph (CFG) to describe the software module or piece of code under
study:
•Each node corresponds to a block of sequential code.
•Each edge corresponds to a path created by a decision.
-CC is defined as the number of edges minus the number of nodes plus 2:
CC = edges - nodes + 2
Example (a CFG with 8 edges and 7 nodes): CC = e - n + 2 = 8 - 7 + 2 = 3
Low CC means reduced testing effort and better understandability.
Primitive Operations of Structured Programming

sequence:     y=2+x;                        CC = 1 - 2 + 2 = 1
if/then/else: if (x>2) y=2x; else y=2;      CC = 4 - 4 + 2 = 2
while:        while (x>2) y=2x;             CC = 3 - 3 + 2 = 2
for loop:     for(int i=0;i<5;i++) x=x+i;   CC = 5 - 5 + 2 = 2
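Once the CFG's edge and node counts are known, the formula itself is a one-liner. A minimal sketch, checked against the constructs above:

```java
// Cyclomatic complexity from a control flow graph's edge and node counts,
// following the definition CC = edges - nodes + 2.
public class Cyclomatic {
    static int cc(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        System.out.println(cc(1, 2)); // sequence:     1 edge, 2 nodes -> 1
        System.out.println(cc(4, 4)); // if/then/else: 4 edges, 4 nodes -> 2
        System.out.println(cc(3, 3)); // while:        3 edges, 3 nodes -> 2
        System.out.println(cc(8, 7)); // CFG from the earlier example   -> 3
    }
}
```

Building the CFG itself (deciding what counts as a node and an edge) is the real work; the arithmetic is trivial.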
Size
-The size of a piece of code can be measured using different metrics:
•(Basic) lines of code (LOC) counts all lines, including comments.
•Non-comment non-blank (NCNB) counts all lines except comments and blanks.
•Executable statements (EXEC) counts the number of executable statements.
Large size decreases understandability, and therefore increases risk and faults.
Examples:

if x>2
then y=x+z;
LOC = 2, NCNB = 2, EXEC = 1

/*evaluates…*/
if x>2
then y=x+z;

x=2z;
LOC = 5, NCNB = 3, EXEC = 2
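The LOC and NCNB counts can be sketched over a list of source lines. This is a simplistic sketch: comment detection only handles whole-line /*…*/ or // comments, and EXEC is omitted because counting executable statements requires an actual parser.

```java
import java.util.List;

// Sketch of the LOC and NCNB size metrics over raw source lines.
// Comment detection is deliberately naive (whole-line comments only).
public class SizeMetrics {
    static int loc(List<String> lines) {
        return lines.size(); // basic LOC: every line counts, comments included
    }

    static int ncnb(List<String> lines) {
        int n = 0;
        for (String line : lines) {
            String t = line.trim();
            boolean blank = t.isEmpty();
            boolean comment = t.startsWith("//") || t.startsWith("/*");
            if (!blank && !comment) n++; // count only non-comment, non-blank lines
        }
        return n;
    }

    public static void main(String[] args) {
        List<String> code = List.of("/*evaluates...*/", "if x>2", "then y=x+z;",
                                    "", "x=2z;");
        System.out.println("LOC=" + loc(code) + " NCNB=" + ncnb(code)); // LOC=5 NCNB=3
    }
}
```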
Comment Percentage (CP)
-Obtained by dividing the total number of comment lines by the total number of lines of
code less the number of blank lines.
A higher comment percentage means better understandability and maintainability.
Example:

/*evaluates…*/
if x>2
then y=x+z;

x=2z;

/*computes…*/
z=x*x-y;

CP = 2/(8-2) = 33%
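The CP formula can be sketched with the same naive whole-line comment detection as before:

```java
import java.util.List;

// Comment percentage: comment lines divided by (total lines - blank lines),
// per the definition above. Whole-line comment detection only.
public class CommentPercentage {
    static double cp(List<String> lines) {
        int comments = 0, blanks = 0;
        for (String line : lines) {
            String t = line.trim();
            if (t.isEmpty()) blanks++;
            else if (t.startsWith("//") || t.startsWith("/*")) comments++;
        }
        return 100.0 * comments / (lines.size() - blanks);
    }

    public static void main(String[] args) {
        List<String> code = List.of("/*evaluates...*/", "if x>2", "then y=x+z;", "",
                                    "x=2z;", "", "/*computes...*/", "z=x*x-y;");
        // 2 comments / (8 lines - 2 blanks) = 33%
        System.out.printf("CP = %.0f%%%n", cp(code));
    }
}
```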
Weighted Methods per Class (WMC)
-Measured either by counting the number of methods associated with a class, or by
summing the complexities (CC) of the methods:
WMC = Σ c_i for i = 1..n, with c_i = CC_i
A high WMC value is a sign of high complexity and lower reusability.
Example: a class Person with attributes name: Name, employeeID: Integer, title: String
and methods getContactInformation(): ContactInformation and
getPersonalRecords(): PersonalRecords has WMC = 2 (each method counted with unit weight).
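WMC is just the sum of the per-method weights. A minimal sketch; the per-method CC values in the second call are hypothetical:

```java
// WMC as a weighted sum over a class's methods: with unit weights it is the
// method count, with CC weights it is the sum of method complexities.
public class Wmc {
    static int wmc(int[] methodWeights) {
        int sum = 0;
        for (int c : methodWeights) sum += c;
        return sum;
    }

    public static void main(String[] args) {
        // Person's two methods with unit weight: WMC = 2
        System.out.println(wmc(new int[] {1, 1}));
        // Same two methods with hypothetical CC weights 3 and 4: WMC = 7
        System.out.println(wmc(new int[] {3, 4}));
    }
}
```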
Response For a Class (RFC)
-Measures the number of methods that can be invoked in response to a message sent to an
object of the class, or by some method in the class; this includes all the methods
accessible within the class hierarchy.
A higher RFC value is a predictor of a larger number of communications with other
classes, and so of more complexity.
Example: StoreDepartments (attributes manager, employees; methods display(), credit())
has two subclasses: Clothing (attributes category, customer_gender, size_range; method
delivery()) and Appliances (methods service(), exchange(), parts_ordering()).
RFC(StoreDepartments) = 2 + 1 + 3 = 6
RFC(Clothing) = 1 + 2 = 3
RFC(Appliances) = 3 + 2 = 5
Lack of Cohesion of Methods (LCOM)
-Measures the cohesion (or lack thereof) of a class; evaluates the dissimilarity of the
methods in a class with respect to the instance variables or attributes they use.
-LCOM is measured by counting the number of pairs of methods that have no attributes in
common, minus the number of pairs that do. A negative difference corresponds to an LCOM
value of zero.
Low cohesion is a sign of high complexity, and shows that the class can be subdivided.
High cohesion indicates simplicity and high potential for reuse.
Example:

class Device {
    int reading, type;
    boolean mode = false;

    public int update(int a) { return a + reading; }
    public int compute(int x, int y) { return x*y*type - reading; }
    public void test(int t) { if (t == 1) mode = true; }
}

The pairs (update, test) and (compute, test) share no attributes, while update and
compute share reading, so LCOM(Device) = 2 - 1 = 1.
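The pairwise counting above can be sketched directly, representing each method by the set of attributes it uses:

```java
import java.util.List;
import java.util.Set;

// Pairwise LCOM: P = method pairs sharing no attribute, Q = pairs sharing at
// least one; LCOM = max(P - Q, 0), per the definition above.
public class Lcom {
    static int lcom(List<Set<String>> methodAttrs) {
        int p = 0, q = 0;
        for (int i = 0; i < methodAttrs.size(); i++) {
            for (int j = i + 1; j < methodAttrs.size(); j++) {
                boolean shared = methodAttrs.get(i).stream()
                        .anyMatch(methodAttrs.get(j)::contains);
                if (shared) q++; else p++;
            }
        }
        return Math.max(p - q, 0); // negative differences clamp to zero
    }

    public static void main(String[] args) {
        // Device: update uses {reading}, compute uses {type, reading}, test uses {mode}
        System.out.println(lcom(List.of(
                Set.of("reading"),
                Set.of("type", "reading"),
                Set.of("mode")))); // P=2, Q=1 -> LCOM = 1
    }
}
```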
Coupling Between Object Classes (CBO)
-Measures the number of classes to which a class is coupled.
-Class A is coupled to class B iff A uses B's methods or instance variables.
-Coupling is calculated by counting the number of distinct non-inheritance-related
class hierarchies on which a class depends.
High coupling means increased dependency among the classes, which restricts reusability;
the metric is therefore useful for determining reusability.
Example: a diagram with classes Supplier, Warehouse, and StoreDepartments (attributes
manager, employees; methods display(), credit()), the latter with subclasses
SlacksDepartment and JacketDepartment (attributes customer_type, size_range; methods
exchange(), purchase()), linked by the associations products and stock:
CBO(StoreDepartments) = 1
CBO(Warehouse) = 1
CBO(Supplier) = 0
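Given a dependency map, CBO is simply the size of each class's set of used classes. A minimal sketch; the concrete dependencies below (StoreDepartments and Warehouse each using Supplier) are an assumed reading of the example diagram, for illustration only:

```java
import java.util.Map;
import java.util.Set;

// CBO over a dependency map: each class maps to the set of non-inheritance-
// related classes whose methods or instance variables it uses.
public class Cbo {
    static int cbo(Map<String, Set<String>> uses, String cls) {
        return uses.getOrDefault(cls, Set.of()).size();
    }

    public static void main(String[] args) {
        // Assumed dependencies for the store example (illustrative only)
        Map<String, Set<String>> uses = Map.of(
                "StoreDepartments", Set.of("Supplier"),
                "Warehouse", Set.of("Supplier"),
                "Supplier", Set.of());
        System.out.println(cbo(uses, "StoreDepartments")); // 1
        System.out.println(cbo(uses, "Warehouse"));        // 1
        System.out.println(cbo(uses, "Supplier"));         // 0
    }
}
```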
Depth of Inheritance Tree (DIT)
-Measures the number of ancestor classes of a given class in an inheritance relation.
A greater DIT value means more methods to be inherited, and so increased complexity; but
at the same time it means increased reusability, so a trade-off must be made here.
Example: Department is the parent of StoreDepartments (attributes manager, employees;
methods display(), credit()), which in turn has the subclasses Appliances and Clothing.
DIT(Department) = 0
DIT(StoreDepartments) = 1
DIT(Appliances) = 2
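DIT amounts to walking parent links and counting ancestors. A minimal sketch over the Department hierarchy above:

```java
import java.util.HashMap;
import java.util.Map;

// DIT: count the ancestors of a class by following parent links in the
// inheritance hierarchy; a root class (no parent) has DIT = 0.
public class Dit {
    static int dit(Map<String, String> parent, String cls) {
        int depth = 0;
        for (String p = parent.get(cls); p != null; p = parent.get(p)) depth++;
        return depth;
    }

    public static void main(String[] args) {
        Map<String, String> parent = new HashMap<>();
        parent.put("StoreDepartments", "Department");
        parent.put("Appliances", "StoreDepartments");
        parent.put("Clothing", "StoreDepartments");
        System.out.println(dit(parent, "Department"));       // 0
        System.out.println(dit(parent, "StoreDepartments")); // 1
        System.out.println(dit(parent, "Appliances"));       // 2
    }
}
```

Inverting the same parent map (counting how many classes name a given class as parent) gives NOC, the metric discussed next.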
Number of Children (NOC)
-Measures the number of immediate subclasses of a class in an inheritance hierarchy.
High NOC means high reuse, but may also be a sign of improper abstraction or misuse of
inheritance, as well as of increased complexity; so a trade-off must be made for this
metric too.
Example: Division is the parent of StoreDepartments (attributes manager, employees;
methods display(), credit()), which has the subclasses Appliances and Clothing.
NOC(Division) = 1
NOC(StoreDepartments) = 2
NOC(Appliances) = 0
EXAMPLE: compute the relevant CK metrics

[Class diagram with classes Customer (name: string, address: string,
birth_date: DateTime, account_num: long), Bill (issue_date: DateTime,
payment_date: DateTime, tax_rate: float), Purchase (date: DateTime), and Message
(text: string) with subclasses WarningLtrs and PeriodicMsgs; operations include
create_customer(), customer(), price(), tax(), and list_purchases().]
Metric                         Bill  Purchase  WarningLtrs  PeriodicMsgs  Message  Customer
Weighted Methods/Class          4       2          0             0           0        1
Number of Children              0       0          0             0           2        0
Depth of Inheritance Tree       0       0          1             1           0        0
Response for a Class            -       -          -             -           -        -
Coupling Between Objects        1       1          2             1           1        0
Lack of Cohesion in Methods     -       -          -             -           -        -
4. Metrics and Quality
Metrics can be useful indicators of unhealthy code and design, pointing
out areas where problems are likely to occur, by focusing on specific
quality attributes.
Metric  Source       OO Construct    Objective        Quality Attributes
CC      Traditional  Method          Low              Testability, Understandability
LOC     Traditional  Method          Low              Understandability, Reusability,
                                                     Maintainability
CP      Traditional  Method          ~20-30%          Understandability, Maintainability
WMC     New OO       Class/Method    Low              Testability, Reusability
DIT     New OO       Inheritance     Low (trade-off)  Reuse, Understandability,
                                                     Maintainability
NOC     New OO       Inheritance     Low (trade-off)  Reusability, Testability
CBO     New OO       Coupling        Low              Usability, Maintainability,
                                                     Reusability
RFC     New OO       Class/Method    Low              Usability, Reusability, Testability
LCOM    New OO       Class/Cohesion  Low              Complexity, Reusability