
Security Engineering

Security engineering is a sub-field of the broader field of computer security. It encompasses tools,
techniques and methods to support the development and maintenance of systems that can resist
malicious attacks that are intended to damage a computer-based system or its data.

Dimensions of security:

 Confidentiality Information in a system may be disclosed or made accessible to people or
programs that are not authorized to have access to that information.
 Integrity Information in a system may be damaged or corrupted, making it unusable or
unreliable.
 Availability Access to a system or its data that is normally available may not be possible.

Three levels of security:

 Infrastructure security is concerned with maintaining the security of all systems and
networks that provide an infrastructure and a set of shared services to the organization.
 Application security is concerned with the security of individual application systems or
related groups of systems.
 Operational security is concerned with the secure operation and use of the
organization's systems.

Application security is a software engineering problem where the system is designed to resist
attacks. Infrastructure security is a systems management problem where the infrastructure is
configured to resist attacks.

System security management involves user and permission management (adding and
removing users from the system and setting up appropriate permissions for users), software
deployment and maintenance (installing application software and middleware and configuring
these systems so that vulnerabilities are avoided), and attack monitoring, detection and
recovery (monitoring the system for unauthorized access, designing strategies for resisting
attacks, and developing backup and recovery strategies).

Operational security is primarily a human and social issue, which is concerned with ensuring
that people do not take actions that may compromise system security. Users sometimes take insecure
actions to make it easier for them to do their jobs. There is therefore a trade-off between system
security and system effectiveness.

Security and dependability


The security of a system is a property that reflects the system's ability to protect itself from
accidental or deliberate external attack. Security is essential as most systems are networked so
that external access to the system through the Internet is possible. Security is an essential pre-
requisite for availability, reliability and safety.
Security terminology
Asset: Something of value which has to be protected. The asset may be the software system
itself or data used by that system.
Attack: An exploitation of a system's vulnerability. Generally, this is from outside the system
and is a deliberate attempt to cause some damage.
Control: A protective measure that reduces a system's vulnerability. Encryption is an example
of a control that reduces a vulnerability of a weak access control system.
Exposure: Possible loss or harm to a computing system. This can be loss or damage to data, or
can be a loss of time and effort if recovery is necessary after a security breach.
Threat: Circumstances that have potential to cause loss or harm. You can think of these as a
system vulnerability that is subjected to an attack.
Vulnerability: A weakness in a computer-based system that may be exploited to cause loss or
harm.

Four types of security threats:


 Interception threats that allow an attacker to gain access to an asset.
 Interruption threats that allow an attacker to make part of the system unavailable.
 Modification threats that allow an attacker to tamper with a system asset.
 Fabrication threats that allow an attacker to insert false information into a system.


 An interception means that some unauthorized party has gained access to an asset. The outside party
can be a person, a program, or a computing system. Examples of this type of failure are illicit copying of
program or data files, or wiretapping to obtain data in a network. Although a loss may be discovered fairly
quickly, a silent interceptor may leave no traces by which the interception can be readily detected.
 In an interruption, an asset of the system becomes lost, unavailable, or unusable. An example is
malicious destruction of a hardware device, erasure of a program or data file, or malfunction of an
operating system file manager so that it cannot find a particular disk file.
 If an unauthorized party not only accesses but tampers with an asset, the threat is a modification. For
example, someone might change the values in a database, alter a program so that it performs an
additional computation, or modify data being transmitted electronically. It is even possible to modify
hardware. Some cases of modification can be detected with simple measures, but other, more subtle,
changes may be almost impossible to detect.
 Finally, an unauthorized party might create a fabrication of counterfeit objects on a computing system.
The intruder may insert spurious transactions to a network communication system or add records to an
existing database. Sometimes these additions can be detected as forgeries, but if skillfully done, they are
virtually indistinguishable from the real thing.

Security assurance strategies:

Vulnerability avoidance : The system is designed so that vulnerabilities do not occur. For example,
if there is no external network connection then external attack is impossible.

Attack detection and elimination : The system is designed so that attacks on vulnerabilities are
detected and neutralised before they result in an exposure. For example, virus checkers find and
remove viruses before they infect a system.

Exposure limitation and recovery : The system is designed so that the adverse consequences of
a successful attack are minimised. For example, a backup policy allows damaged information to be
restored.

Security and attributes of dependability:

Security and reliability : If a system is attacked and the system or its data are corrupted as a
consequence of that attack, then this may induce system failures that compromise the reliability of
the system.

Security and availability : A common attack on a web-based system is a denial of service attack,
where a web server is flooded with service requests from a range of different sources. The aim of this
attack is to make the system unavailable.

Security and safety : An attack that corrupts the system or its data means that assumptions about
safety may not hold. Safety checks rely on analyzing the source code of safety critical software and
assume the executing code is a completely accurate translation of that source code. If this is not the
case, safety-related failures may be induced and the safety case made for the software is invalid.

Security and resilience : Resilience is a system characteristic that reflects its ability to resist and
recover from damaging events. The most probable damaging event on networked software systems
is a cyberattack of some kind so most of the work now done in resilience is aimed at deterring,
detecting and recovering from such attacks.

Security and organizations


Security is expensive and it is important that security decisions are made in a cost-effective way.
There is no point in spending more than the value of an asset to keep that asset secure.
Organizations use a risk-based approach to support security decision making and should have a
defined security policy based on security risk analysis. Security risk analysis is a business rather than
a technical process.
Security policies should set out general information access strategies that should apply across
the organization. The point of security policies is to inform everyone in an organization about security
so these should not be long and detailed technical documents. From a security engineering
perspective, the security policy defines, in broad terms, the security goals of the organization. The
security engineering process is concerned with implementing these goals.
Security policy principles:

The assets that must be protected-It is not cost-effective to apply stringent security procedures
to all organizational assets. Many assets are not confidential and can be made freely available.

The level of protection that is required for different types of asset-For sensitive personal
information, a high level of security is required; for other information, the consequences of loss may
be minor so a lower level of security is adequate.

The responsibilities of individual users, managers and the organization-The security policy
should set out what is expected of users e.g. strong passwords, log out of computers, office security,
etc.

Existing security procedures and technologies that should be maintained-For reasons of
practicality and cost, it may be essential to continue to use existing approaches to security even
where these have known limitations.

Risk assessment and management is concerned with assessing the possible losses that might
ensue from attacks on the system and balancing these losses against the costs of security
procedures that may reduce these losses. Risk management should be driven by an organizational
security policy. Risk management involves:

Preliminary risk assessment-The aim of this initial risk assessment is to identify generic risks that
are applicable to the system and to decide if an adequate level of security can be achieved at a
reasonable cost. The risk assessment should focus on the identification and analysis of high-level
risks to the system. The outcomes of the risk assessment process are used to help identify security
requirements.

Design risk assessment-This risk assessment takes place during the system development life cycle
and is informed by the technical system design and implementation decisions. The results of the
assessment may lead to changes to the security requirements and the addition of new requirements.
Known and potential vulnerabilities are identified, and this knowledge is used to inform decision
making about the system functionality and how it is to be implemented, tested, and deployed.

Operational risk assessment-This risk assessment process focuses on the use of the system and
the possible risks that can arise from human behavior. Operational risk assessment should continue
after a system has been installed to take account of how the system is used. Organizational changes
may mean that the system is used in different ways from those originally planned. These changes
lead to new security requirements that have to be implemented as the system evolves.

Security requirements
Security specification has something in common with safety requirements specification - in
both cases, your concern is to avoid something bad happening. Four major differences:
 Safety problems are accidental - the software is not operating in a hostile
environment. In security, you must assume that attackers have knowledge of system
weaknesses.
 When safety failures occur, you can look for the root cause or weakness that led to the
failure. When failure results from a deliberate attack, the attacker may conceal the
cause of the failure.
 Shutting down a system can avoid a safety-related failure. Causing a shut down may
be the aim of an attack.
 Safety-related events are not generated from an intelligent adversary. An attacker
can probe defenses over time to discover weaknesses.
Security requirement classification
 Risk avoidance requirements set out the risks that should be avoided by designing the
system so that these risks simply cannot arise.
 Risk detection requirements define mechanisms that identify the risk if it arises and
neutralize the risk before losses occur.
 Risk mitigation requirements set out how the system should be designed so that it can
recover from and restore system assets after some loss has occurred.
Security risk assessment (a brief sketch of these stages follows the list):
 Asset identification: identify the key system assets (or services) that have to be
protected.
 Asset value assessment: estimate the value of the identified assets.
 Exposure assessment: assess the potential losses associated with each asset.
 Threat identification: identify the most probable threats to the system assets.
 Attack assessment: decompose threats into possible attacks on the system and the
ways that these may occur.
 Control identification: propose the controls that may be put in place to protect an asset.
 Feasibility assessment: assess the technical feasibility and cost of the controls.
 Security requirements definition: define system security requirements. These can be
infrastructure or application system requirements.
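A minimal sketch of how these stages might be recorded and combined, assuming a simple
expected-loss model (asset value times estimated attack probability); the asset names, numbers,
and weighting below are invented for illustration and are not part of any standard:

from dataclasses import dataclass

@dataclass
class Risk:
    # Illustrative record tying one asset to one threat (all names hypothetical).
    asset: str
    asset_value: float          # asset value assessment, e.g. in currency units
    threat: str                 # threat identification
    attack_probability: float   # attack assessment: estimated likelihood in [0, 1]
    control: str                # control identification
    control_cost: float         # feasibility assessment: cost of the control

    def exposure(self) -> float:
        # Exposure assessment: expected loss if no control is applied.
        return self.asset_value * self.attack_probability

    def control_is_worthwhile(self) -> bool:
        # There is no point in spending more than the value at risk.
        return self.control_cost < self.exposure()

risk = Risk(asset="customer database", asset_value=500_000.0,
            threat="interception of unencrypted backups",
            attack_probability=0.02,
            control="encrypt backup media", control_cost=4_000.0)
print(risk.exposure())               # 10000.0
print(risk.control_is_worthwhile())  # True -> feeds security requirements definition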
Misuse cases are instances of threats to a system:
 Interception threats: attacker gains access to an asset.
 Interruption threats: attacker makes part of a system unavailable.
 Modification threats: a system asset is tampered with.
 Fabrication threats: false information is added to a system.

Secure systems design


Security should be designed into a system - it is very difficult to make an insecure system
secure after it has been designed or implemented.
Adding security features to a system to enhance its security affects other attributes of the
system:
 Performance: additional security checks slow down a system so its response time or
throughput may be affected.
 Usability: security measures may require users to remember information or require
additional interactions to complete a transaction. This makes the system less usable and
can frustrate system users.

Design risk assessment is done while the system is being developed and after it has been
deployed. More information is available - system platform, middleware and the system architecture
and data organization. Vulnerabilities that arise from design choices may therefore be identified.
During architectural design, two fundamental issues have to be considered when designing an
architecture for security:
Protection: how should the system be organized so that critical assets can be protected
against external attack?
Layered protection architecture:
Platform-level protection: top-level controls on the platform on which a system runs.
Application-level protection: specific protection mechanisms built into the application itself
e.g. additional password protection.
Record-level protection: protection that is invoked when access to specific information is
requested.
Distribution: how should system assets be distributed so that the effects of a successful
attack are minimized?
Distributing assets means that attacks on one system do not necessarily lead to complete loss
of system service. Each platform has separate protection features and may be different from
other platforms so that they do not share a common vulnerability. Distribution is particularly
important if the risk of denial of service attacks is high.
These are potentially conflicting. If assets are distributed, then they are more expensive to protect. If
assets are protected, then usability and performance requirements may be compromised.

Design guidelines for security engineering


Design guidelines encapsulate good practice in secure systems design. Design guidelines serve two
purposes: they raise awareness of security issues in a software engineering team, and they can be
used as the basis of a review checklist that is applied during the system validation process.
Design guidelines here are applicable during software specification and design.

Base decisions on an explicit security policy : Define a security policy for the organization that
sets out the fundamental security requirements that should apply to all organizational systems.

Avoid a single point of failure : Ensure that a security failure can only result when there is more
than one failure in security procedures. For example, have password and question-based
authentication.

Fail securely : When systems fail, for whatever reason, ensure that sensitive information cannot be
accessed by unauthorized users even although normal security procedures are unavailable.

Balance security and usability : Try to avoid security procedures that make the system difficult to
use. Sometimes you have to accept weaker security to make the system more usable.

Log user actions : Maintain a log of user actions that can be analyzed to discover who did what. If
users know about such a log, they are less likely to behave in an irresponsible way.

Use redundancy and diversity to reduce risk : Keep multiple copies of data and use diverse
infrastructure so that an infrastructure vulnerability cannot be the single point of failure.

Specify the format of all system inputs : If input formats are known then you can check that all
inputs are within range so that unexpected inputs don't cause problems (see the sketch after this
list of guidelines).

Compartmentalize your assets : Organize the system so that assets are in separate areas and
users only have access to the information that they need rather than all system information.

Design for deployment : Design the system to avoid deployment problems

Design for recoverability : Design the system to simplify recoverability after a successful attack.
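As a hedged illustration of the "Specify the format of all system inputs" guideline, the sketch
below validates inputs against explicit formats and ranges before they are used; the field names
and patterns are assumptions invented for this example, not taken from any particular system:

import re

# Hypothetical input specification: each field has an explicit format and range.
ACCOUNT_ID = re.compile(r"[A-Z]{2}\d{6}")   # e.g. "AB123456"

def parse_transfer_amount(raw):
    # Reject anything outside the specified format before converting it.
    if not re.fullmatch(r"\d{1,7}", raw):
        raise ValueError("amount must be 1-7 digits")
    amount = int(raw)
    if not 1 <= amount <= 1_000_000:
        raise ValueError("amount out of range")
    return amount

def validate_account_id(raw):
    if not ACCOUNT_ID.fullmatch(raw):
        raise ValueError("malformed account id")
    return raw

print(parse_transfer_amount("2500"))    # 2500
print(validate_account_id("AB123456"))  # AB123456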
Verification of Trustworthy Systems

Merriam-Webster's Dictionary defines trustworthy as "deserving of trust or confidence;
dependable; reliable." In a system context, dependability is "the trustworthiness of a computer
system such that reliance can justifiably be placed on the service"; the dependability of a
system can be defined by a set of attributes that include availability, reliability, safety,
security (confidentiality and integrity), and maintainability.

Software engineers have become competent at verification: we can build portions of systems to their
applicable specifications with relative success.

However, we still build systems that don't meet customers' expectations and requirements. This is
because people mistakenly combine V&V into one element, treating validation as the user's
operational evaluation of the system, resulting in the discovery of requirement errors late in the
development process, when it's costly, if not impossible, to fix those errors and produce the right
product.

Figure 1 illustrates how the validation process should be proactive and continuous—enacted prior to
development and verification, with closure at the end of each phase. Validation is required whenever
a requirements derivation process (that is, a translation of requirements from one domain to another)
occurs.

FIGURE 1. A continuous validation and verification process. Validation ensures the requirements
correctly capture the users’ and stakeholders’ expectations and should be performed whenever a
translation of requirements from one domain to another occurs.

Typically, the requirements-discovery process begins with constructing scenarios involving the
system and its environment. From these scenarios, analysts informally express their understanding of
the system’s expected behavior or properties using natural language and then translate them into a
specification.

Specifications based on natural language statements can be ambiguous. For example, consider the
following requirement for a project management system: the system shall generate a project status
report once every month. Will the system meet the customer’s expectation if it generates one report
each calendar month? Does it matter if the system generates one report in the last week of May and
another in the first week of June? What happens if a project lasts only 15 days? Does the system
have to generate at least one report for such a project? Because only the customer who supplied the
requirements can answer these questions, the analyst must validate his or her own cognitive
understanding of the requirements with the customer to ensure that the specification is correct.

Research has shown that formal specifications and methods help improve the clarity and precision of
requirements specifications (for example, see the work of Steve Easterbrook and his colleagues).[5]
However, formal specifications are useful only if they match the true intent of the customer’s
requirements. Let’s assume that the customer expects the system to generate a report at the end of
each calendar month, even if the project ends earlier.
A system that satisfies the formal specification always (project_active AND last_day_of_month
=> report_generation) might still fail to meet the customer's requirement regarding reports for
projects that end before the last day of a month.
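To see why this assertion is too weak, here is a minimal sketch assuming a toy trace checker;
the trace representation and predicate names are invented for illustration. The assertion holds
on a trace where the project ends mid-month and no report is ever generated, which is exactly
the behavior the customer did not want:

# Each step is one day: (project_active, last_day_of_month, report_generated).
def always_holds(trace):
    # Check: always(project_active and last_day_of_month -> report_generated).
    return all(report or not (active and last_day)
               for active, last_day, report in trace)

# Project runs days 1-15 of a 30-day month, then ends; no report is generated.
trace = [(day <= 15, day == 30, False) for day in range(1, 31)]

print(always_holds(trace))  # True - the assertion is satisfied,
# yet the customer expected a report for the month in which the project ended,
# so the formal specification does not capture the true requirement.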

Possible reasons for an incorrect assertion include


 incorrect translation of the natural language specification to a formal specification,
 incorrect translation of the requirement, as understood by the analyst, to natural language, and
 incorrect cognitive understanding of the requirement.

These situations typically occur when the requirement was driven from the use case’s main success
scenario, with insufficient investigation into other scenarios. Consequently, we propose the iterative
process for assertion validation shown in Figure 2. This process encodes requirements as Unified
Modeling Language (UML) statecharts augmented with Java action statements and validates the
assertions by executing a series of scenarios against the statechart generated executable code to
determine whether the specification captures the intended behavior. This approach helps provide
evidence for building assurance cases as proposed by Robin E. Bloomfield and his colleagues.[7]


FIGURE 2. Iterative process for assertion validation.[6] By encoding the system requirements as
statechart assertions and testing the generated code against different use case scenarios, analysts
can validate the formal specifications and make necessary modifications early in the development
process.
Software Measurement and Metrics

Software Measurement: A measurement is a manifestation of the size,
quantity, amount or dimension of a particular attribute of a product or process.
Software measurement is a quantified assessment of a characteristic of a software
product or the software process. It is a discipline within software engineering, and
the software measurement process is defined and governed by ISO standards.

Need for Software Measurement:


Software is measured to:
1. Assess the quality of the current product or process.
2. Anticipate future qualities of the product or process.
3. Enhance the quality of the product or process.
4. Regulate the state of the project in relation to budget and schedule.

Classification of Software Measurement:


There are 2 types of software measurement:
1. Direct Measurement:
In direct measurement the product, process or thing is measured directly
using a standard scale.
2. Indirect Measurement:
In indirect measurement the quantity or quality to be measured is
measured using a related parameter, i.e. by use of a reference.
Metrics:
A metric is a quantified measure of the degree to which a system, product,
or process possesses a given attribute. There are 4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of Software Metrics:

1. Quantitative: Metrics must possess a quantitative nature. It means
metrics can be expressed in numerical values.
2. Understandable: Metric computation should be easily understood, and the
method of computing the metric should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of
development of the software.
4. Repeatable: The metric values should be the same when measured
repeatedly, i.e. consistent in nature.
5. Economical: Computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any
programming language.

Classification of Software Metrics:

There are 3 types of software metrics:


1. Product Metrics:
Product metrics are used to evaluate the state of the product, tracing
risks and uncovering prospective problem areas. The ability of the team to
control quality is evaluated.
2. Process Metrics:
Process metrics pay particular attention to enhancing the long-term
process of the team or organisation.
3. Project Metrics:
Project metrics describe the project characteristics and execution
process, for example:
 Number of software developers
 Staffing pattern over the life cycle of software
 Cost and schedule
 Productivity

Product Metrics in Software Engineering

Product metrics are software product measures at any stage of their development, from
requirements to established systems. Product metrics are related to software features only.

Product metrics fall into two classes:


1. Dynamic metrics that are collected by measurements made from a program in execution.
2. Static metrics that are collected by measurements made from system representations such
as design, programs, or documentation.
Dynamic metrics help in assessing the efficiency and reliability of a program, while static
metrics help in understanding and maintaining the complexity of a software system.
Dynamic metrics are usually quite closely related to software quality attributes. It is relatively
easy to measure the execution time required for particular tasks and to estimate the time
required to start the system; these measures relate directly to the efficiency of the system.
Failures, and the type of each failure, can be logged and related directly to the reliability of
the software.
On the other hand, static metrics have an indirect relationship with quality attributes. A large
number of these metrics have been proposed to try to derive and validate the relationship
between complexity, understandability, and maintainability. Several static metrics that have
been used for assessing quality attributes are listed below; of these, program or component
length and control complexity seem to be the most reliable predictors of understandability,
system complexity, and maintainability.

Software Product Metrics :

(1) Fan-in/Fan-out: Fan-in is a measure of the number of functions that call some other
function (say X). Fan-out is the number of functions which are called by function X. A high
value for fan-in means that X is tightly coupled to the rest of the design and changes to X
will have extensive knock-on effects. A high value for fan-out suggests high overall
complexity of the control logic needed to coordinate the called components.

(2) Length of code: This is a measure of the size of a program. Generally, the larger the
size of the code of a program component, the more complex and error-prone that component
is likely to be.

(3) Cyclomatic complexity: This is a measure of the control complexity of a program. This
control complexity may be related to program understandability.

(4) Length of identifiers: This is a measure of the average length of distinct identifiers in
a program. The longer the identifiers, the more understandable the program.

(5) Depth of conditional nesting: This is a measure of the depth of nesting of if statements
in a program. Deeply nested if statements are hard to understand and are potentially
error-prone.

(6) Fog index: This is a measure of the average length of words and sentences in documents.
The higher the value of the Fog index, the more difficult the document may be to understand.
Software Engineering-Metrics for Analysis model

Technical work in software engineering begins with the creation of the analysis model. It is
at this stage that requirements are derived and that a foundation for design is established.
Therefore, technical metrics that provide insight into the quality of the analysis model are
desirable.

Although relatively few analysis and specification metrics have appeared in the literature, it
is possible to adapt metrics derived for project application for use in this context. These
metrics examine the analysis model with the intent of predicting the “size” of the resultant
system. It is likely that size and design complexity will be directly correlated.

Function-Based Metrics
The function point metric can be used effectively as a means for predicting the size of a
system that will be derived from the analysis model. To illustrate the use of the FP metric in
this context, we consider a simple analysis model representation, illustrated in figure.
Referring to the figure, a data flow diagram for a function within the SafeHome software is
represented. The function manages user interaction, accepting a user password to activate
or deactivate the system, and allows inquiries on the status of security zones and various
security sensors. The function displays a series of prompting messages and sends
appropriate control signals to various components of the security system.

The data flow diagram is evaluated to determine the key measures required for computation
of the function point metric :
• number of user inputs
• number of user outputs
• number of user inquiries
• number of files
• number of external interfaces

Three user inputs—password, panic button, and activate/deactivate—are shown in the figure
along with two inquires—zone inquiry and sensor inquiry. One file (system configuration file)
is shown. Two user outputs (messages and sensor status) and four external interfaces (test
sensor, zone setting, activate/deactivate, and alarm alert) are also present. These data,
along with the appropriate complexity, are shown in figure.

The count total shown in the figure must be adjusted using the equation:

FP = count total x [0.65 + 0.01 x Σ(Fi)]

where count total is the sum of all FP entries obtained from the first figure and Fi (i = 1 to
14) are "complexity adjustment values." For the purposes of this example, we assume that
Σ(Fi) is 46 (a moderately complex product). Therefore,

FP = 50 x [0.65 + (0.01 x 46)] = 55.5, rounded to 56

Based on the projected FP value derived from the analysis model, the project team can
estimate the overall implemented size of the SafeHome user interaction function. Assume
that past data indicates that one FP translates into 60 lines of code (an object-oriented
language is to be used) and that 12 FPs are produced for each person-month of effort. These
historical data provide the project manager with important planning information that is
based on the analysis model rather than preliminary estimates. Assume further that past
projects have found an average of three errors per function point during analysis and design
reviews and four errors per function point during unit and integration testing. These data
can help software engineers assess the completeness of their review and testing activities.
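A small sketch of this computation; the per-item complexity weights below are the standard
simple-complexity values, but treat them as assumptions for this example, since a real count
assigns a weight to each item from the published FP tables:

# Information domain counts from the SafeHome example above.
counts  = {"inputs": 3, "outputs": 2, "inquiries": 2, "files": 1, "interfaces": 4}
# Assumed weights (simple complexity): EI=3, EO=4, EQ=3, ILF=7, EIF=5.
weights = {"inputs": 3, "outputs": 4, "inquiries": 3, "files": 7, "interfaces": 5}

count_total = sum(counts[k] * weights[k] for k in counts)
sum_fi = 46  # complexity adjustment values, as assumed in the text

fp = count_total * (0.65 + 0.01 * sum_fi)
print(count_total)       # 50
print(round(fp, 2))      # 55.5, rounded to 56 in the text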

The Bang Metric


Like the function point metric, the bang metric can be used to develop an indication of the
size of the software to be implemented as a consequence of the analysis model. Developed
by DeMarco, the bang metric is “an implementation independent indication of system size.”
To compute the bang metric, the software engineer must first evaluate a set of primitives—
elements of the analysis model that are not further subdivided at the analysis level.
Primitives are determined by evaluating the analysis model and developing counts for the
following forms:

Functional primitives (FuP). The number of transformations (bubbles) that appear at the
lowest level of a data flow diagram.

Data elements (DE). The number of attributes of a data object, data elements are not
composite data and appear within the data dictionary.

Objects (OB). The number of data objects.

Relationships (RE). The number of connections between data objects.


States (ST). The number of user observable states in the state transition diagram.

Transitions (TR). The number of state transitions in the state transition diagram.

In addition to these six primitives, additional counts are determined for

Modified manual function primitives (FuPM). Functions that lie outside the system
boundary but must be modified to accommodate the new system.

Input data elements (DEI). Those data elements that are input to the system.

Output data elements. (DEO). Those data elements that are output from the system.

Retained data elements. (DER). Those data elements that are retained (stored) by the
system.

Data tokens (TCi). The data tokens (data items that are not subdivided within a functional
primitive) that exist at the boundary of the ith functional primitive (evaluated for each
primitive).

Relationship connections (REi). The relationships that connect the ith object in the data
model to other objects.

DeMarco suggests that most software can be allocated to one of two domains: function
strong or data strong, depending upon the ratio RE/FuP. Function-strong applications (often
encountered in engineering and scientific applications) emphasize the transformation of
data and do not generally have complex data structures. Data-strong applications (often
encountered in information systems applications) tend to have complex data models.

RE/FuP < 0.7 implies a function-strong application.
0.8 < RE/FuP < 1.4 implies a hybrid application.
RE/FuP > 1.5 implies a data-strong application.

Because different analysis models will partition the model to greater or lesser degrees of
refinement, DeMarco suggests that an average token count per primitive,

TCavg = Σ(TCi) / FuP

be used to control uniformity of partitioning across many different models within an
application domain.

To compute the bang metric for function-strong applications, the following algorithm is used:
set initial value of bang = 0;
do while functional primitives remain to be evaluated
Compute token-count around the boundary of primitive i
Compute corrected FuP increment (CFuPI)
Allocate primitive to class
Assess class and note assessed weight
Multiply CFuPI by the assessed weight
bang = bang + weighted CFuPI
enddo

The token-count is computed by determining how many separate tokens are “visible” within
the primitive. It is possible that the number of tokens and the number of data elements will
differ, if data elements can be moved from input to output without any internal
transformation. The corrected CFuPI is determined from a table published by DeMarco. A
much abbreviated version follows:

TCi CFuPI
2 1.0
5 5.8
10 16.6
15 29.3
20 43.2

The assessed weight noted in the algorithm is determined from 16 different classes of
functional primitives defined by DeMarco. A weight ranging from 0.6 (simple data routing) to
2.5 (data management functions) is assigned, depending on the class of the primitive.
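A hedged Python sketch of this function-strong algorithm; the CFuPI lookup linearly
interpolates the abbreviated table above, and the class weights in the example data are
illustrative stand-ins for DeMarco's 16 primitive classes:

# Abbreviated CFuPI table from the text: token count -> corrected FuP increment.
CFUPI_TABLE = [(2, 1.0), (5, 5.8), (10, 16.6), (15, 29.3), (20, 43.2)]

def cfupi(token_count):
    # Linear interpolation over the abbreviated table (an assumption here;
    # DeMarco's full published table would be used in practice).
    pts = CFUPI_TABLE
    if token_count <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if token_count <= x1:
            return y0 + (y1 - y0) * (token_count - x0) / (x1 - x0)
    return pts[-1][1]

def function_strong_bang(primitives):
    # primitives: list of (token_count, class_weight) pairs, where class_weight
    # is the assessed weight of the primitive's class (0.6 .. 2.5).
    bang = 0.0
    for token_count, weight in primitives:
        bang += cfupi(token_count) * weight
    return bang

# Three hypothetical functional primitives.
print(function_strong_bang([(5, 1.0), (10, 0.6), (12, 2.5)]))  # ~69.96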

For data-strong applications, the bang metric is computed using the following algorithm:

set initial value of bang = 0;
do while objects remain to be evaluated in the data model
compute count of relationships for object i
compute corrected OB increment (COBI)
bang = bang + COBI
enddo

The COBI is determined from a table published by DeMarco. An abbreviated version follows:

REi COBI
1 1.0
3 4.0
6 9.0

Once the bang metric has been computed, past history can be used to associate it with size
and effort. DeMarco suggests that an organization build its own versions of the CFuPI and
COBI tables using calibration information from completed software projects.

Metrics for Specification Quality


Davis and his colleagues propose a list of characteristics that can be used to assess the
quality of the analysis model and the corresponding requirements specification: specificity
(lack of ambiguity), completeness, correctness, understandability, verifiability, internal and
external consistency, achievability, concision, traceability, modifiability, precision, and
reusability. In addition, the authors note that high-quality specifications are electronically
stored, executable or at least interpretable, annotated by relative importance and stable,
versioned, organized, cross-referenced, and specified at the right level of detail.

Although many of these characteristics appear to be qualitative in nature, Davis et al.
suggest that each can be represented using one or more metrics. For example, we assume
that there are nr requirements in a specification, such that

nr = nf + nnf

where nf is the number of functional requirements and nnf is the number of nonfunctional
(e.g., performance) requirements.

To determine the specificity (lack of ambiguity) of requirements, Davis et al. suggest a
metric that is based on the consistency of the reviewers' interpretation of each requirement:

Q1 = nui/nr

where nui is the number of requirements for which all reviewers had identical
interpretations. The closer the value of Q1 is to 1, the lower is the ambiguity of the
specification.

The completeness of functional requirements can be determined by computing the ratio

Q2 = nu/[ni x ns]

where nu is the number of unique function requirements, ni is the number of inputs (stimuli)
defined or implied by the specification, and ns is the number of states specified. The Q2
ratio measures the percentage of necessary functions that have been specified for a system.
However, it does not address nonfunctional requirements. To incorporate these into an
overall metric for completeness, we must consider the degree to which requirements have
been validated:

Q3 = nc/[nc + nnv]

where nc is the number of requirements that have been validated as correct and nnv is the
number of requirements that have not yet been validated.
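A small sketch computing the three ratios just defined, with invented counts for a
hypothetical specification:

def specificity(n_ui, n_r):
    # Q1: fraction of requirements all reviewers interpreted identically.
    return n_ui / n_r

def completeness(n_u, n_i, n_s):
    # Q2: unique functional requirements over (inputs x states).
    return n_u / (n_i * n_s)

def validation_coverage(n_c, n_nv):
    # Q3: validated-correct requirements over all requirements examined so far.
    return n_c / (n_c + n_nv)

# Hypothetical specification: 40 requirements, 30 functional and 10 nonfunctional.
print(specificity(n_ui=34, n_r=40))          # 0.85  -> reasonably unambiguous
print(completeness(n_u=30, n_i=12, n_s=4))   # 0.625 -> some functions may be missing
print(validation_coverage(n_c=25, n_nv=15))  # 0.625 -> validation still incomplete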

Software Engineering-Metrics for Design model



It is inconceivable that the design of a new aircraft, a new computer chip, or a new office
building would be conducted without defining design measures, determining metrics for
various aspects of design quality, and using them to guide the manner in which the design
evolves. And yet, the design of complex software-based systems often proceeds with
virtually no measurement. The irony of this is that design metrics for software are available,
but the vast majority of software engineers continue to be unaware of their existence.

Design metrics for computer software, like all other software metrics, are not perfect.
Debate continues over their efficacy and the manner in which they should be applied. Many
experts argue that further experimentation is required before design measures can be used.
And yet, design without measurement is an unacceptable alternative.
We can examine some of the more common design metrics for computer software. Each can
provide the designer with improved insight and all can help the design to evolve to a higher
level of quality.

Architectural Design Metrics


Architectural design metrics focus on characteristics of the program architecture with an
emphasis on the architectural structure and the effectiveness of modules. These metrics are
black box in the sense that they do not require any knowledge of the inner workings of a
particular software component.

Card and Glass define three software design complexity measures: structural complexity,
data complexity, and system complexity.

Structural complexity of a module i is defined in the following manner:

S(i) = fout(i)^2

where fout(i) is the fan-out of module i.

Data complexity provides an indication of the complexity in the internal interface for a
module i and is defined as

D(i) = v(i) / [fout(i) + 1]

where v(i) is the number of input and output variables that are passed to and from module i.

Finally, system complexity is defined as the sum of structural and data complexity, specified
as

C(i) = S(i) + D(i)


As each of these complexity values increases, the overall architectural complexity of the
system also increases. This leads to a greater likelihood that integration and testing effort
will also increase.

An earlier high-level architectural design metric proposed by Henry and Kafura also makes
use of the fan-in and fan-out. The authors define a complexity metric (applicable to call and
return architectures) of the form

HKM = length(i) x [fin(i) + fout(i)]^2

where length(i) is the number of programming language statements in a module i and fin(i)
is the fan-in of a module i. Henry and Kafura extend the definitions of fan-in and fan-out
presented in this book to include not only the number of module control connections
(module calls) but also the number of data structures from which a module i retrieves (fan-
in) or updates (fan-out) data. To compute HKM during design, the procedural design may be
used to estimate the number of programming language statements for module i. Like the
Card and Glass metrics noted previously, an increase in the Henry-Kafura metric leads to a
greater likelihood that integration and testing effort will also increase for a module.
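A minimal sketch of the Card and Glass measures and the Henry-Kafura metric under the
definitions above; the module numbers are invented for illustration:

def structural_complexity(fan_out):
    return fan_out ** 2                      # S(i) = fout(i)^2

def data_complexity(io_vars, fan_out):
    return io_vars / (fan_out + 1)           # D(i) = v(i) / [fout(i) + 1]

def system_complexity(io_vars, fan_out):
    return structural_complexity(fan_out) + data_complexity(io_vars, fan_out)

def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in + fan_out) ** 2  # HKM = length(i) x [fin(i) + fout(i)]^2

# A hypothetical module: 120 statements, fan-in 3, fan-out 4, 10 I/O variables.
print(system_complexity(io_vars=10, fan_out=4))       # 16 + 2.0 = 18.0
print(henry_kafura(length=120, fan_in=3, fan_out=4))  # 120 * 49 = 5880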
Fenton suggests a number of simple morphology (i.e., shape) metrics that enable different
program architectures to be compared using a set of straightforward dimensions. Referring
to figure, the following metrics can be defined:
size = n + a
where n is the number of nodes and a is the number of arcs. For the architecture shown in
figure,

size = 17 + 18 = 35
depth = the longest path from the root (top) node to a leaf node. For the architecture shown
in the figure, depth = 4.
width = maximum number of nodes at any one level of the architecture. For the architecture
shown in figure, width = 6.
arc-to-node ratio, r = a/n,

which measures the connectivity density of the architecture and may provide a simple
indication of the coupling of the architecture. For the architecture shown in figure, r = 18/17
= 1.06.
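A short sketch computing these morphology metrics from a module hierarchy held as an
adjacency list; the small tree below is a hypothetical stand-in for the architecture in the
figure:

# Module hierarchy: parent -> children (a hypothetical architecture).
tree = {"root": ["a", "b", "c"], "a": ["d", "e"], "b": [], "c": ["f"],
        "d": [], "e": [], "f": []}

nodes = len(tree)
arcs = sum(len(children) for children in tree.values())

def depth(node="root"):
    children = tree[node]
    return 0 if not children else 1 + max(depth(c) for c in children)

def width():
    level, widest = ["root"], 1
    while level:
        widest = max(widest, len(level))
        level = [child for n in level for child in tree[n]]
    return widest

print("size =", nodes + arcs)                # 7 + 6 = 13
print("depth =", depth())                    # 2
print("width =", width())                    # 3
print("arc-to-node ratio =", arcs / nodes)   # 6/7, about 0.86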

The U.S. Air Force Systems Command has developed a number of software quality
indicators that are based on measurable design characteristics of a computer program.
Using concepts similar to those proposed in IEEE Std. 982.1-1988, the Air Force uses
information obtained from data and architectural design to derive a design structure quality
index (DSQI) that ranges from 0 to 1. The following values must be ascertained to compute
the DSQI :

S1 = the total number of modules defined in the program architecture.
S2 = the number of modules whose correct function depends on the source of data input or
that produce data to be used elsewhere (in general, control modules, among others, would
not be counted as part of S2).
S3 = the number of modules whose correct function depends on prior processing.
S4 = the number of database items (includes data objects and all attributes that define
objects).
S5 = the total number of unique database items.
S6 = the number of database segments (different records or individual objects).
S7 = the number of modules with a single entry and exit (exception processing is not
considered to be a multiple exit).

Once values S1 through S7 are determined for a computer program, the following
intermediate values can be computed:

Program structure: D1, where D1 is defined as follows: If the architectural design was
developed using a distinct method (e.g., data flow-oriented design or object-oriented
design), then D1 = 1; otherwise D1 = 0.
Module independence: D2 = 1 - (S2/S1)
Modules not dependent on prior processing: D3 = 1 - (S3/S1)
Database size: D4 = 1 - (S5/S4)
Database compartmentalization: D5 = 1 - (S6/S4)
Module entrance/exit characteristic: D6 = 1 - (S7/S1)

With these intermediate values determined, the DSQI is computed in the following manner:
DSQI = Σ(wi x Di)
where i = 1 to 6, wi is the relative weighting of the importance of each of the intermediate
values, and Σwi = 1 (if all Di are weighted equally, then wi = 0.167).

The value of DSQI for past designs can be determined and compared to a design that is
currently under development. If the DSQI is significantly lower than average, further design
work and review are indicated. Similarly, if major changes are to be made to an existing
design, the effect of those changes on DSQI can be calculated.
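A compact sketch of the DSQI computation from the S1..S7 counts, using equal weights and
invented counts for a hypothetical design:

def dsqi(s, weights=None):
    # s: dict of the counts S1..S7 defined above; weights w1..w6 must sum to 1.
    d = [
        1.0 if s["distinct_method"] else 0.0,  # D1: program structure
        1 - s["S2"] / s["S1"],                 # D2: module independence
        1 - s["S3"] / s["S1"],                 # D3: no prior-processing dependence
        1 - s["S5"] / s["S4"],                 # D4: database size
        1 - s["S6"] / s["S4"],                 # D5: database compartmentalization
        1 - s["S7"] / s["S1"],                 # D6: module entrance/exit characteristic
    ]
    weights = weights or [1 / 6] * 6           # equal weighting by default
    return sum(w * di for w, di in zip(weights, d))

# Hypothetical design: 50 modules, 10 input-dependent, 5 prior-processing-dependent,
# 200 database items (150 unique), 20 segments, 45 single-entry/exit modules.
counts = {"distinct_method": True, "S1": 50, "S2": 10, "S3": 5,
          "S4": 200, "S5": 150, "S6": 20, "S7": 45}
print(round(dsqi(counts), 3))  # 0.658; compare against DSQI values from past designs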
Process Metrics

To improve any process, it is necessary to measure its specified attributes, develop a set of
meaningful metrics based on these attributes, and then use these metrics to obtain indicators in
order to derive a strategy for process improvement.
Using software process metrics, software engineers are able to assess the efficiency of the software
process that is performed using the process as a framework. Process is placed at the centre of the
triangle connecting three factors (product, people, and technology), which have an important
influence on software quality and organization performance. The skill and motivation of the people,
the complexity of the product and the level of technology used in the software development have an
important influence on the quality and team performance. The process triangle exists within the
circle of environmental conditions, which includes development environment, business conditions,
and customer /user characteristics.

To measure the efficiency and effectiveness of the software process, a set of metrics is formulated
based on the outcomes derived from the process. These outcomes are listed below.

 Number of errors found before the software release
 Defects detected and reported by the user after delivery of the software
 Time spent in fixing errors
 Work products delivered
 Human effort used
 Time expended
 Conformity to schedule
 Wait time
 Number of contract modifications
 Estimated cost compared to actual cost.

Note that process metrics can also be derived using the characteristics of a particular software
engineering activity. For example, an organization may measure the effort and time spent
on user interface design.
It is observed that process metrics are of two types, namely, private and public. Private Metrics are
private to the individual and serve as an indicator only for the specified individual(s). Defect rates
for a software module and defect counts for an individual are examples of private process metrics. Note
that some process metrics are public to all team members but private to the project. These include
errors detected while performing formal technical reviews and defects reported about various
functions included in the software.
Public metrics assimilate information that originally was private to individuals and teams. Project-level defect
rates, effort and related data are collected, analyzed and assessed in order to obtain indicators that
help in improving the organizational process performance.
Process Metrics Etiquette

Process metrics can provide substantial benefits as the organization works to improve its process
maturity. However, these metrics can be misused and create problems for the organization. In order
to avoid this misuse, some guidelines have been defined, which can be used both by managers and
software engineers. These guidelines are listed below.
 Rational thinking and organizational sensitivity should be considered while analyzing metrics
data.
 Feedback should be provided on a regular basis to the individuals or teams involved in
collecting measures and metrics.
 Metrics should not appraise or threaten individuals.
 Since metrics are used to indicate a need for process improvement, any metric indicating this
problem should not be considered harmful.
 Use of single metrics should be avoided.
As an organization becomes familiar with process metrics, the derivation of simple indicators leads
to a more rigorous approach called Statistical Software Process Improvement (SSPI). SSPI uses software
failure analysis to collect information about all errors (detected before delivery of the software)
and defects (detected after the software is delivered to the user) encountered during the
development of a product or system.
Product Metrics

In software development process, a working product is developed at the end of each successful
phase. Each product can be measured at any stage of its development. Metrics are developed for
these products so that they can indicate whether a product is developed according to the user
requirements. If a product does not meet user requirements, then the necessary actions are taken in
the respective phase.
Product metrics help software engineers to detect and correct potential problems before they result in
catastrophic defects. In addition, product metrics assess the internal product attributes in order to
know the efficiency of the following.

 Analysis, design, and code models
 Potency of test cases
 Overall quality of the software under development.

Various metrics formulated for products in the development process are listed below.

 Metrics for analysis model: These address various aspects of the analysis model such as
system functionality, system size, and so on.
 Metrics for design model: These allow software engineers to assess the quality of design and
include architectural design metrics, component-level design metrics, and so on.
 Metrics for source code: These assess source code complexity, maintainability, and other
characteristics.
 Metrics for testing: These help to design efficient and effective test cases and also evaluate the
effectiveness of testing.
 Metrics for maintenance: These assess the stability of the software product.

Metrics for the Analysis Model

There are only a few metrics that have been proposed for the analysis model. However, it is possible
to use metrics for project estimation in the context of the analysis model. These metrics are used to
examine the analysis model with the objective of predicting the size of the resultant system. Size
acts as an indicator of increased coding, integration, and testing effort; sometimes it also acts as an
indicator of complexity involved in the software design. Function point and lines of code are the
commonly used methods for size estimation.
Function Point (FP) Metric

The function point metric, which was proposed by A.J. Albrecht, is used to measure the functionality
delivered by the system, estimate the effort, predict the number of errors, and estimate the number
of components in the system. Function point is derived by using a relationship between the
complexity of software and the information domain value. Information domain values used in
function point include the number of external inputs, external outputs, external inquires, internal
logical files, and the number of external interface files.
Lines of Code (LOC)

Lines of code (LOC) is one of the most widely used methods for size estimation. LOC can be
defined as the number of delivered lines of code, excluding comments and blank lines. It is highly
dependent on the programming language used as code writing varies from one programming
language to another. For example, lines of code written (for a large program) in assembly language
are more than lines of code written in C++.
From LOC, simple size-oriented metrics can be derived such as errors per KLOC (thousand lines of
code), defects per KLOC, cost per KLOC, and so on. LOC has also been used to predict program
complexity, development effort, programmer performance, and so on. For example, Halstead
proposed a number of metrics, which are used to calculate program length, program volume,
program difficulty, and development effort.
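A tiny sketch of the size-oriented metrics mentioned here, with invented project numbers:

def per_kloc(count, loc):
    # Normalize a count (errors, defects, cost, ...) per thousand lines of code.
    return count / (loc / 1000)

loc = 32_000  # delivered lines of code, excluding comments and blank lines
print(per_kloc(128, loc))      # errors per KLOC  -> 4.0
print(per_kloc(19, loc))       # defects per KLOC -> 0.59375
print(per_kloc(240_000, loc))  # cost per KLOC    -> 7500.0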
Metrics for Specification Quality

To evaluate the quality of the analysis model and requirements specification, a set of characteristics
has been proposed. These characteristics include specificity, completeness, correctness,
understandability, verifiability, internal and external consistency, achievability, concision,
traceability, modifiability, precision, and reusability.
Most of the characteristics listed above are qualitative in nature. However, each of these
characteristics can be represented by using one or more metrics. For example, if there are
nr requirements in a specification, then nr can be calculated by the following equation.
nr = nf + nnf
Where
nf = number of functional requirements
nnf = number of non-functional requirements.
In order to determine the specificity of requirements, a metric based on the consistency of the
reviewer’s understanding of each requirement has been proposed. This metric is represented by the
following equation.
Q1 = nui/nr
Where
nui = number of requirements for which reviewers have same understanding
Q1 = specificity.
Ambiguity of the specification depends on the value of Q1: the closer Q1 is to 1, the lower the
probability of ambiguity in the specification.
Completeness of the functional requirements can be calculated by the following equation.
Q2 = nu / [ni*ns]
Where
nu = number of unique function requirements
ni = number of inputs defined by the specification
ns = number of specified states.
Q2 in the above equation considers only functional requirements and ignores non-functional
requirements. In order to consider non-functional requirements, it is necessary to consider the
degree to which requirements have been validated. This can be represented by the following
equation.
Q3 = nc/ [nc + nnv]
Where
nc= number of requirements validated as correct
nnv= number of requirements, which are yet to be validated.
Metrics for Software Design

The success of a software project depends largely on the quality and effectiveness of the software
design. Hence, it is important to develop software metrics from which meaningful indicators can be
derived. With the help of these indicators, necessary steps are taken to design the software
according to the user requirements. Various design metrics such as architectural design metrics,
component-level design metrics, user-interface design metrics, and metrics for object-oriented
design are used to indicate the complexity, quality, and so on of the software design.
Architectural Design Metrics

These metrics focus on the features of the program architecture with stress on architectural structure
and effectiveness of components (or modules) within the architecture. In architectural design
metrics, three software design complexity measures are defined, namely, structural complexity, data
complexity, and system complexity.
In hierarchical architectures (call and return architecture), the structural complexity of a module 'j' is
calculated by the following equation.
S(j) = fout(j)^2
Where
fout(j) = fan-out of module 'j' [Here, fan-out means the number of modules directly subordinate to
module 'j'].
Complexity in the internal interface for a module ‘j’ is indicated with the help of data complexity,
which is calculated by the following equation.
D(j) = V(j) / [fout(j) + 1]
Where
V(j) = number of input and output variables passed to and from module ‘j’.
System complexity is the sum of structural complexity and data complexity and is calculated by the
following equation.
C(j) = S(j) + D(j)
The overall complexity of a system increases with increases in structural complexity and data
complexity, which in turn increases the integration and testing effort in the later stages.
In addition, various other metrics like simple morphology metrics are also used. These metrics allow
comparison of different program architectures using a set of straightforward dimensions. A metric can
be developed by referring to call and return architecture. This metric can be defined by the following
equation.
Size = n+a
Where
n = number of nodes
a= number of arcs.
For example, if there are 11 nodes and 10 arcs, Size can be calculated by the following
equation.
Size = n + a = 11 + 10 = 21.
Depth is defined as the longest path from the top node (root) to a leaf node, and width is defined as
the maximum number of nodes at any one level.
Coupling of the architecture is indicated by the arc-to-node ratio. This ratio also measures the
connectivity density of the architecture and is calculated by the following equation.
r = a / n
For the example above, r = 10/11 ≈ 0.91.
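Assuming the call-and-return hierarchy is represented as a mapping from each module to the
modules it calls, the morphology measures can be computed as in this sketch (the representation
and names are assumptions made for the example):

# Simple morphology metrics over a call-and-return hierarchy (illustrative).
def morphology(tree, root):
    """tree maps each module to the list of modules it calls."""
    arcs = sum(len(children) for children in tree.values())
    # Breadth-first walk; depth is counted in levels (root = level 1).
    depth, width, level = 0, 0, [root]
    while level:
        depth, width = depth + 1, max(width, len(level))
        level = [c for parent in level for c in tree.get(parent, [])]
    return {"size": len(tree) + arcs, "depth": depth,
            "width": width, "r": arcs / len(tree)}

example = {"a": ["b", "c"], "b": ["d", "e"], "c": [], "d": [], "e": []}
print(morphology(example, "a"))   # size 9, depth 3, width 2, r = 0.8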
Quality of software design also plays an important role in determining the overall quality of the
software. Many software quality indicators that are based on measurable design characteristics of
a computer program have been proposed. One of them is Design Structural Quality Index (DSQI),
which is derived from the information obtained from data and architectural design. To calculate
DSQI, a number of steps are followed, which are listed below.
1. To calculate DSQI, the following values must be determined.
 Number of components in the program architecture (S1)
 Number of components whose correct function depends on the source of input data (S2)
 Number of components whose correct function depends on previous processing (S3)
 Number of database items (S4)
 Number of different database items (S5)
 Number of database segments (S6)
 Number of components having a single entry and exit (S7).
2. Once all the values from S1 to S7 are known, some intermediate values are calculated, as listed
below.
Program structure (D1): If discrete methods are used for developing the architectural design, then
D1 = 1, else D1 = 0
Module independence (D2): D2 = 1 - (S2/S1)
Modules not dependent on prior processing (D3): D3 = 1 - (S3/S1)
Database size (D4): D4 = 1 - (S5/S4)
Database compartmentalization (D5): D5 = 1 - (S6/S4)
Module entrance/exit characteristic (D6): D6 = 1 - (S7/S1)
3. Once all the intermediate values are calculated, DSQI is calculated by the following equation.
DSQI = ∑WiDi
Where
i = 1 to 6
∑Wi = 1 (Wi is the weighting of the importance of intermediate values).
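A minimal sketch of the three-step computation, assuming equal weights Wi = 1/6 (any weights
summing to 1 may be substituted; the figures and the function name are invented):

# Design Structural Quality Index from S1..S7 (illustrative helper).
def dsqi(s1, s2, s3, s4, s5, s6, s7, discrete_method=True, weights=None):
    d = [
        1.0 if discrete_method else 0.0,  # D1: program structure
        1 - s2 / s1,                      # D2: module independence
        1 - s3 / s1,                      # D3: independence from prior processing
        1 - s5 / s4,                      # D4: database size
        1 - s6 / s4,                      # D5: database compartmentalization
        1 - s7 / s1,                      # D6: module entrance/exit characteristic
    ]
    weights = weights or [1 / 6] * 6      # Wi must sum to 1
    return sum(w * di for w, di in zip(weights, d))

print(dsqi(s1=50, s2=10, s3=5, s4=200, s5=40, s6=10, s7=45))  # ~0.758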
In conventional software, component-level design metrics focus on the internal characteristics of the
software components. The software engineer can judge the quality of a component-level design by
measuring module cohesion, coupling, and complexity. Component-level design metrics are applied
once the procedural design is final. Various metrics developed for component-level design are listed
below.
 Cohesion metrics: Cohesiveness of a module can be indicated by the definitions of the
following five concepts and measures.
 Data slice: Defined as a backward walk through a module that looks for values of data that
affect the state of the module when the walk starts
 Data tokens: Defined as the set of variables defined for a module
 Glue tokens: Defined as the set of data tokens that lie on one or more data slices
 Superglue tokens: Defined as the data tokens that are present in every data slice in the module
 Stickiness: Defined for each glue token as the number of data slices that it binds; the more
slices a glue token binds, the stickier it is.
 Coupling metrics: These metrics indicate the degree to which a module is connected to other
modules, global data, and the outside environment. A metric for module coupling has been
proposed, which includes data and control flow coupling, global coupling, and environmental
coupling.
o Measures defined for data and control flow coupling are listed below.
di = total number of input data parameters
ci = total number of input control parameters
do = total number of output data parameters
co = total number of output control parameters
o Measures defined for global coupling are listed below.
gd = number of global variables utilized as data
gc = number of global variables utilized as control
o Measures defined for environmental coupling are listed below.
w = number of modules called
r = number of modules calling the module under consideration
By using the above-mentioned measures, the module-coupling indicator (mc) is calculated by the
following equation.
mc = K/M
Where
K = proportionality constant
M = di + (a*ci) + do + (b*co) + gd + (c*gc) + w + r.
Note that K, a, b, and c are empirically derived constants. The values of mc and overall module
coupling are inversely proportional to each other: as the value of mc increases, the overall module
coupling decreases.
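The sketch below computes the indicator directly from the counts defined above; K, a, b, and c
default to 1 here purely as placeholders, since in practice they must be derived empirically:

# Module-coupling indicator mc = K / M (constants below are placeholders).
def module_coupling_indicator(di, ci, do, co, gd, gc, w, r,
                              K=1.0, a=1.0, b=1.0, c=1.0):
    M = di + a * ci + do + b * co + gd + c * gc + w + r
    return K / M

# A module with few parameters, globals, and callers scores higher (less coupled):
print(module_coupling_indicator(di=2, ci=1, do=1, co=0, gd=0, gc=0, w=1, r=1))  # ~0.167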
 Complexity metrics: Different types of software metrics can be calculated to ascertain the
complexity of program control flow. One of the most widely used among them is cyclomatic
complexity, which is computed from the program's flow graph, as sketched below.
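The document does not restate the formula, so this sketch uses the standard definition
V(G) = E - N + 2 for a single connected flow graph (the graph representation is assumed):

# Cyclomatic complexity V(G) = E - N + 2 for a single connected flow graph.
def cyclomatic_complexity(flow_graph):
    """flow_graph maps each node to the list of its successor nodes."""
    nodes = len(flow_graph)
    edges = sum(len(succs) for succs in flow_graph.values())
    return edges - nodes + 2

# An if/else between entry and exit: 4 nodes, 4 edges -> V(G) = 2 test paths.
graph = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}
print(cyclomatic_complexity(graph))  # 2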
Many metrics have been proposed for user interface design. However, the layout appropriateness
metric and the cohesion metric for user interface design are the most commonly used. Layout
Appropriateness (LA) is an important metric for user interface design. A typical Graphical
User Interface (GUI) uses many layout entities such as icons, text, menus, windows, and so on.
These layout entities help users complete their tasks easily. In order to complete a given task with
the help of the GUI, the user moves from one layout entity to another.
The appropriateness of the interface is indicated by the absolute and relative positions of the layout
entities, the frequency with which each layout entity is used, and the cost of transition from one
layout entity to another.
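One common formulation (assumed here, following the usual presentation of layout
appropriateness) scores a layout by the frequency-weighted cost of its transitions and normalizes a
proposed layout against an LA-optimal one:

# Layout appropriateness via frequency-weighted transition cost (assumed
# formulation; transition costs might be, e.g., on-screen distances).
def layout_cost(transitions):
    """transitions: list of (frequency, cost) pairs for each entity-to-entity move."""
    return sum(freq * cost for freq, cost in transitions)

def layout_appropriateness(optimal_transitions, proposed_transitions):
    """100 for the optimal layout; lower values indicate a worse proposed layout."""
    return 100 * layout_cost(optimal_transitions) / layout_cost(proposed_transitions)

proposed = [(10, 4.0), (3, 9.0)]   # the frequent move is expensive in this layout
optimal = [(10, 2.0), (3, 5.0)]
print(layout_appropriateness(optimal, proposed))  # ~52, so the layout can improve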
The cohesion metric for user interfaces measures the connection among on-screen content.
Cohesion for a user interface is high when the content presented on the screen comes from a single
major data object (defined in the analysis model). On the other hand, if the content presented on the
screen comes from different data objects, cohesion for the user interface is low.
In addition to these metrics, direct measures of user interface interaction focus on activities such as
the time required to complete a specific task, the time required to recover from an error condition,
counts of specific operations, text density, and text size. Once collected, these measures are
organized into meaningful user interface metrics, which can help in improving the quality of the user
interface.
Metrics for Object-oriented Design
In order to develop metrics for object-oriented (OO) design, nine distinct and measurable
characteristics of OO design are considered, which are listed below.
 Complexity: Determined by assessing how classes are related to each other
 Coupling: Defined as the physical connection between OO design elements
 Sufficiency: Defined as the degree to which an abstraction possesses the features required of it
from the point of view of the current application
 Completeness: Defined like sufficiency, but judged against the features required of the
abstraction from multiple points of view, which also indicates its potential for reuse
 Cohesion: Determined by analyzing the degree to which the set of properties that the class
possesses is part of the problem or design domain
 Primitiveness: Indicates the degree to which the operation is atomic
 Similarity: Indicates similarity between two or more classes in terms of their structure, function,
behavior, or purpose
 Volatility: Defined as the probability of occurrence of change in the OO design
 Size: Defined with the help of four different views, namely, population, volume, length, and
functionality. Population is measured by calculating the total number of OO entities, which can be
classes or operations. Volume is the same count collected dynamically at a given point in time.
Length is a measure of a chain of interconnected design elements, such as the depth of an
inheritance tree (see the sketch below). Functionality indicates the value rendered to the user by
the OO application.
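As an illustration of a length measure, the sketch below computes the depth of an inheritance tree
from a child-to-parent mapping (the representation is an assumption made for this example):

# Depth of inheritance tree (DIT): longest chain from a class up to its root.
def depth_of_inheritance(parents, cls):
    """parents maps each class name to its parent, or None for a root class."""
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

hierarchy = {"Base": None, "Middle": "Base", "Leaf": "Middle"}
print(depth_of_inheritance(hierarchy, "Leaf"))  # 2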
Metrics for Coding
Halstead proposed the first analytic “laws” for computer science by using a set of primitive measures, which can
be derived once the design phase is complete and code is generated. These measures are listed below.
n1 = number of distinct operators in a program
n2 = number of distinct operands in a program
N1 = total number of operators
N2 = total number of operands.
By using these measures, Halstead developed an expression for overall program length, program
volume, program difficulty, development effort, and so on.
Program length (N) can be calculated by using the following equation.
N = n1 log2 n1 + n2 log2 n2.
Program volume (V) can be calculated by using the following equation.
V = N log2 (n1+n2).
Note that program volume depends on the programming language used and represents the volume
of information (in bits) required to specify a program. Volume ratio (L)can be calculated by using the
following equation.
L = Volume of the most compact form of a program
Volume of the actual program
Where, value of L must be less than 1. Volume ratio can also be calculated by using the following
equation.
L = (2/n1)* (n2/N2).
Program difficulty level (D) and effort (E) can be calculated by using the following equations.
D = (n1/2)*(N2/n2).
E = D * V.
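The sketch below computes the Halstead measures from the four primitive counts; note that it uses
the observed length N1 + N2 in the volume formula, which matches the usual presentation (the
example counts are invented):

import math

# Halstead software-science measures from the four primitive counts.
def halstead(n1, n2, N1, N2):
    est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length N
    volume = (N1 + N2) * math.log2(n1 + n2)               # V, in bits
    ratio = (2 / n1) * (n2 / N2)                          # volume ratio L (< 1)
    difficulty = (n1 / 2) * (N2 / n2)                     # D = 1 / L
    effort = difficulty * volume                          # E = D * V
    return est_length, volume, ratio, difficulty, effort

# A toy program: 10 distinct operators, 8 distinct operands,
# 40 operator occurrences, 30 operand occurrences.
N, V, L, D, E = halstead(10, 8, 40, 30)
print(round(V, 1), round(D, 2), round(E, 1))  # 291.9 18.75 5473.0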
Metrics for Software Testing
The majority of the metrics used for testing focus on the testing process rather than the technical
characteristics of the tests. Generally, testers use the metrics for analysis, design, and coding to
guide them in the design and execution of test cases.
Function point can be effectively used to estimate testing effort. Various characteristics like errors
discovered, number of test cases needed, testing effort, and so on can be determined by estimating
the number of function points in the current project and comparing them with any previous project.
Metrics used for architectural design can be used to indicate how integration testing can be carried
out. In addition, cyclomatic complexity can be used effectively as a metric in the basis-path testing to
determine the number of test cases needed.
Halstead measures can be used to derive metrics for testing effort. By using program volume (V)
and program level (PL), Halstead effort (e) can be calculated by the following equations.
e = V / PL
Where
PL = 1 / [(n1/2) * (N2/n2)] … (1)
For a particular module (z), the percentage of overall testing effort allocated can be calculated by the
following equation.
Percentage of testing effort (z) = e(z)/∑e(i)
Where e(z) is calculated for module z with the help of equation (1). The summation in the
denominator is the sum of Halstead effort (e) across all the modules of the system.
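A small sketch of this allocation, assuming the primitive counts for each module are already known
(the module names and counts are made up):

import math

# Allocate testing effort across modules using Halstead effort e = V / PL.
def halstead_effort(n1, n2, N1, N2):
    volume = (N1 + N2) * math.log2(n1 + n2)
    level = 1 / ((n1 / 2) * (N2 / n2))      # PL from equation (1)
    return volume / level

modules = {
    "parser":  halstead_effort(12, 15, 60, 50),
    "planner": halstead_effort(8, 10, 30, 25),
}
total = sum(modules.values())
for name, e in modules.items():
    print(f"{name}: {100 * e / total:.1f}% of testing effort")  # 82.0% / 18.0%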
For developing metrics for object-oriented (OO) testing, the design metrics that have a direct impact
on the testability of an object-oriented system are considered. While developing metrics for OO
testing, inheritance and encapsulation are also considered. A set of metrics proposed for OO testing
is listed below.
 Lack of cohesion in methods (LCOM): This indicates the number of states to be tested. LCOM
indicates the number of methods that access one or more of the same attributes; its value is 0 if
no methods access the same attributes. As the value of LCOM increases, more states need
to be tested (see the sketch after this list).
 Percent public and protected (PAP): This shows the percentage of class attributes that are
public or protected. The probability of adverse effects among classes increases with the value of
PAP, as public and protected attributes lead to potentially higher coupling.
 Public access to data members (PAD): This shows the number of classes that can access
attributes of another class. Adverse effects among classes increase as the value of PAD
increases.
 Number of root classes (NOR): This specifies the number of different class hierarchies, which
are described in the design model. Testing effort increases with increase in NOR.
 Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than 1, it indicates
that the class inherits its attributes and operations from multiple root classes. Note that this
situation (FIN > 1) should be avoided.
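Several LCOM variants exist in the literature; the sketch below implements one simple variant that
compares method pairs by shared attribute access (the class representation is an assumption):

# One common LCOM variant: pairs of methods sharing no attributes minus
# pairs sharing at least one, floored at zero.
from itertools import combinations

def lcom(method_attrs):
    """method_attrs maps each method name to the set of attributes it accesses."""
    disjoint = shared = 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:
            shared += 1
        else:
            disjoint += 1
    return max(disjoint - shared, 0)

cls = {"open": {"path", "mode"}, "read": {"path", "buffer"}, "close": {"handle"}}
print(lcom(cls))  # 2 disjoint pairs vs 1 sharing pair -> LCOM = 1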
Metrics for Software Maintenance
Metrics have been designed explicitly for maintenance activities. The IEEE has proposed the
Software Maturity Index (SMI), which provides an indication of the stability of a software product. For
calculating SMI, the following parameters are considered.
 Number of modules in the current release (MT)
 Number of modules that have been changed in the current release (Fc)
 Number of modules that have been added in the current release (Fa)
 Number of modules that have been deleted from the current release (Fd)
Once all the parameters are known, SMI can be calculated by using the following equation.
SMI = [MT – (Fa + Fc + Fd)]/MT.
Note that a product begins to stabilize as SMI approaches 1.0. SMI can also be used as a metric for
planning software maintenance activities by developing empirical models in order to estimate the
effort required for maintenance.
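A worked sketch of the computation (the release figures are invented):

# Software Maturity Index for one release.
def smi(total, added, changed, deleted):
    return (total - (added + changed + deleted)) / total

# Release with 120 modules: 4 added, 6 changed, 2 deleted.
print(smi(total=120, added=4, changed=6, deleted=2))  # 0.9 -- approaching stability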
Project Metrics
Project metrics enable project managers to assess current projects, track potential risks, identify
problem areas, adjust the workflow, and evaluate the project team’s ability to control the quality of
work products. Note that project metrics serve tactical purposes, whereas process metrics serve
strategic purposes.
Project metrics serve two purposes. First, they help to minimize the development schedule by
making the adjustments necessary to avoid delays and mitigate potential risks and problems.
Second, these metrics are used to assess product quality on a regular basis and, where required, to
modify the technical approach. As the quality of the project improves, the number of errors and
defects is reduced, which in turn leads to a decrease in the overall cost of the software project.
Often, the first application of project metrics occurs during estimation. Here, metrics collected from
previous projects act as a base from which effort and time estimates for the current project are
calculated. As the project proceeds, original estimates of effort and time are compared with the new
measures of effort and time. This comparison helps the project manager to monitor (supervise) and
control the progress of the project.
As the process of development proceeds, project metrics are used to track the errors detected
during each development phase. For example, as software evolves from design to coding, project
metrics are collected to assess quality of the design and obtain indicators that in turn affect the
approach chosen for coding and testing. Also, project metrics are used to measure production rate,
which is measured in terms of models developed, function points, and delivered lines of code.