SOFTWARE QUALITY ASSURANCE

Subject code: BIS714B


Prepared by: Ganesh Roddanavar (M. Tech)
Assistant Professor, Dept. of ISE
MODULE 4
PROJECT PROGRESS CONTROL
Months of delay in completing project phases and budget overruns of tens of percent are "red flags" for project management. These events, which are mainly failures of management itself, are caused by situations such as:
■ Overly or even blindly optimistic scheduling and budgeting (often beginning earlier, during the proposal development stage).
■ Unprofessional software risk management, expressed as tardy or inappropriate reactions to software risks.
■ Belated identification of schedule and budget difficulties and/or underestimation of their extent.
The components of project progress control and their implementation are discussed in this chapter. Special attention is given to the difficulties entailed in controlling external participants and internal projects. Another section deals with tools for project progress control.
After completing this chapter, you will be able to:
1. Explain the components of project progress control.
2. Explain the implementation issues involved in project progress control.
20.1 The components of project progress control
Project progress control (CMM uses the term "software project tracking") has one immediate objective: early detection of irregular events. Detection promotes the timely initiation of problem-solving responses. The accumulated information on progress control, together with records of successes and extreme failures, also serves a long-term objective: the initiation of corrective actions.
Main Components of Project Progress Control
1. Control of risk management activities
2. Project schedule control
3. Project resource control
4. Project budget control
1. Control of Risk Management Activities
 Involves monitoring identified software development risk items, such as:
o Risks found during the pre-project stage
o Risks noted in contract reviews and project plans
o New risks discovered during project execution
 The project team should perform systematic risk management, including:
o Regular (periodic) assessments of current risk items
o Evaluation of the effectiveness of risk responses already applied
o Updating risk mitigation plans as necessary
 Project managers must step in when certain risks escalate or become critical.
 The process is guided by established standards and references such as:
o IEEE (2001) standards on software risk management
o Jones (1994) — a reference on software project risks

2. Project Schedule Control
Definition:
Project schedule control ensures that the project adheres to its approved and contracted timelines. It focuses
on monitoring progress against the schedule to detect and address delays early.

Key Elements:
1. Milestones as Control Points
o Project progress is tracked through milestones, which mark the completion of key activities or
deliverables.
o Milestones are deliberately placed to make it easier to identify delays or deviations from the
plan.
o Contractual milestones (e.g., product delivery dates, completion of development phases) are
especially important and receive extra attention.
2. Focus on Critical Delays
o Minor delays are expected in most projects, but management focuses primarily on critical
delays — those that can significantly impact the final delivery date or overall project
success.
3. Monitoring and Reporting
o Information about progress is gathered through:
 Milestone reports
 Periodic progress reports
o These reports provide the data management needs to evaluate schedule compliance.
4. Management Actions
o Based on these reports, management may:
 Allocate additional resources (e.g., staff, tools, or time) to get the project back on
track.
 Renegotiate the schedule with the customer if delays are unavoidable or justified.

3. Project Resource Control


Definition:
Project resource control focuses on monitoring and managing the use of project resources, primarily human
resources, but also other critical assets such as development and testing facilities (especially in real-time
systems or firmware projects).
Key Elements:
1. Focus on Human and Technical Resources
o While professional human resources (e.g., developers, analysts, testers) are the main concern,
resource control also includes hardware, software, and testing environments.
o These assets often require precise monitoring to ensure efficiency and prevent bottlenecks.
2. Periodic Reporting and Comparison
o Control is based on regular reports that compare actual resource usage against planned
utilization.
o The extent of deviation can only be fully understood in relation to project progress:
 A project may appear to have only minor deviations (e.g., 5%) in resource use at a
given time.
 However, if the project is significantly delayed, the cumulative deviation could be
much higher (e.g., 25%) — signaling a serious issue.

3. Composition and Allocation Control
o Resource control also looks at how resources are distributed internally:
 For example, even if total man-months for system analysts match the plan, the
allocation between junior and senior analysts might differ.
 Spending more on senior analysts (e.g., 50% instead of the planned 25%) could strain
the budget later on.
4. Early Detection and Action
o While budget control can eventually reveal such deviations, resource control identifies them
earlier, allowing timely intervention.
o If deviations are justified, management may:
 Increase total resources (e.g., hire more staff or add equipment).
 Reallocate existing resources (e.g., reorganize teams, revise the project plan).

4. Project Budget Control


Definition:
Project budget control involves comparing actual expenditures with planned (scheduled) expenditures to
monitor financial performance and detect budget deviations early.
Key Elements:
1. Comparison of Actual vs. Planned Spending
o The main goal is to ensure spending aligns with the approved budget plan.
o As with resource control, accurate assessment of budget deviations requires considering
activity delays — since delays can distort how actual spending compares to planned progress.
2. Main Budget Categories Under Control
The budget typically covers several major cost items that must be closely monitored:
o Human resources (e.g., salaries, contractor fees)
o Development and testing facilities
o Purchase of COTS (Commercial Off-The-Shelf) software
o Purchase of hardware
o Payments to subcontractors
3. Monitoring and Reporting
o Budget control relies on:
 Milestones (points where budget status is evaluated)
 Periodic financial reports
o These tools help in the early detection of budget overruns or spending irregularities.
4. Management Intervention
o For internal deviations (within the organization), management can:
 Reallocate funds or resources
 Revise plans or adjust priorities
o For external deviations (e.g., subcontractors), management may also use:
 Legal measures
 Contractual enforcement actions
5. Importance and Risks of Overemphasis
o Budget control is a top priority for management due to its direct impact on project
profitability.
o However, focusing too narrowly on budget control can lead to neglect of other crucial areas
(risk, schedule, and resource control).

o This is problematic because:
 Other controls can identify issues (risks, delays, overuse of resources) earlier.
 Relying only on budget control means problems are detected later, potentially making
them costlier to fix.

20.2 Progress Control of Internal Projects and External Participants


Purpose
Project progress control aims to give management a complete view of all software development
activities within the organization.
However, in practice, control over internal projects and external participants is often incomplete
or flawed for different reasons.
1. Progress Control of Internal Projects
Definition:
Internal projects are those conducted for other departments within the same organization or for
developing software products for the general market (without a specific external customer).
Typical Problems:
 Lower management priority: Internal projects often receive less attention because they don’t
involve paying external customers.
 Weak internal customer follow-up: Internal stakeholders may not enforce deadlines or budget limits
as strictly.
 Poor control and delayed detection: This lax approach leads to:
o Late identification of delays
o Severe budget overruns
o Limited corrective action
Recommended Solution:
 Apply the full range of project progress control measures (risk, schedule, resource, and budget
control) to internal projects, just as rigorously as for external contracts.
2. Progress Control of External Participants
Definition:
External participants include:
 Subcontractors
 Suppliers of COTS (Commercial Off-The-Shelf) software
 Providers of reused software modules
 In some cases, even the customer (when they play an active role in development)
Characteristics and Challenges:
 Large or complex projects often involve multiple external participants.
 External involvement arises for economic, technical, or staffing reasons.
 Complex contractual relationships make communication and coordination difficult.
 As the number of participants grows, management control becomes more demanding.
Control Focus Areas:
 Project schedule control: Ensuring external parties meet deadlines and deliverables.
 Risk management: Monitoring risks related to external dependencies and performance.
Management Implication:
 Stronger coordination and more intensive monitoring efforts are needed to maintain acceptable
control levels when multiple external contributors are involved.

20.3 Implementation of Project Progress Control Regimes
Purpose
The implementation of project progress control ensures that project activities are systematically
monitored and reported, allowing management to maintain an accurate and timely view of project
performance.
Key Components of Implementation
Project progress control relies on formal procedures that define:

1. Allocation of Responsibility
Responsibilities must be clearly assigned according to the project’s characteristics (e.g., size, complexity,
organizational structure).
These include:
 Who is responsible for performing progress control tasks (e.g., project leader, project manager,
department head).
 Frequency of reporting required from each unit and management level.
 Conditions requiring immediate reporting by:
o Project leaders to management (e.g., major risks, critical delays, severe deviations).
o Lower management to upper management (e.g., when corrective actions exceed their
authority).
This ensures accountability and prompt communication of critical issues.

2. Management Audits of Project Progress


Management audits evaluate the effectiveness of the progress control system by examining:
1. The quality and timeliness of progress reports transmitted through the management chain (from
project leaders up to top management).
2. The adequacy of management’s control actions — whether appropriate measures are being initiated
in response to identified issues.
These audits serve as a feedback mechanism to improve the reliability and responsiveness of the control
process.

3. Multi-Level Management Coordination


In large software organizations, project progress control operates across several management levels, such
as:
 Software Department Management
 Software Division Management
 Top Management
Each level:
 Defines its own control regime, including procedures, indicators, and reporting requirements suited
to its role and perspective.
 Must ensure coordination among levels so that information flows smoothly and consistently
throughout the organization.
Coordination is essential — without it, fragmented reporting can lead to gaps in control and delayed decision-
making.
4. Information Flow and Reporting Chain
The reporting chain begins at the lowest managerial level — typically the project leader — and moves
upward through successive layers of management.

Project Leader’s Role:
 Prepares periodic project progress reports summarizing:
1. Project risks (status and changes)
2. Project schedule (milestone progress, delays)
3. Resource utilization (human and technical resources)
 Bases these reports on information collected from team leaders and other direct subordinates.
These reports form the foundation for higher-level management assessments and decisions.
(Figure 20.1, referenced in the text, likely shows an example of such a progress report.)

Overall Summary: Implementing project progress control requires a structured reporting framework,
clear roles and responsibilities, and strong coordination across all management levels. The goal is to ensure
that accurate, timely, and actionable information about project risks, schedules, and resources flows upward,
enabling effective decision-making and corrective actions.

20.4 Computerized Tools for Project Progress Control


Purpose
The growing size and complexity of software projects make computerized project control tools essential.
These tools:
 Help track, analyze, and manage project performance automatically.
 Increase efficiency, accuracy, and timeliness of project monitoring.
 Support all major aspects of project control, including risk, schedule, resource, and budget
management.
Most of these tools are based on PERT (Program Evaluation and Review Technique) and CPM (Critical
Path Method), which:
 Identify dependencies between activities, and
 Highlight critical tasks that directly affect project completion time.
They are also highly customizable, making them suitable for different project types and organizational needs.
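
As an illustration of the scheduling logic these tools build on, the following Python sketch computes earliest and latest finish times for a small activity network and reports the critical path. It is a toy demonstration of the CPM idea, not the algorithm of any particular product; all activity names and durations are invented.

from collections import defaultdict

# activity -> (duration in days, list of predecessor activities)  -- hypothetical data
activities = {
    "design":   (10, []),
    "coding":   (15, ["design"]),
    "test_env": (5,  ["design"]),
    "testing":  (8,  ["coding", "test_env"]),
    "delivery": (2,  ["testing"]),
}

# Forward pass: earliest finish for every activity.
earliest_finish = {}
def ef(act):
    if act not in earliest_finish:
        dur, preds = activities[act]
        earliest_finish[act] = dur + max((ef(p) for p in preds), default=0)
    return earliest_finish[act]

project_length = max(ef(a) for a in activities)

# Backward pass: latest finish; slack = latest finish - earliest finish.
successors = defaultdict(list)
for act, (_, preds) in activities.items():
    for p in preds:
        successors[p].append(act)

latest_finish = {}
def lf(act):
    if act not in latest_finish:
        succs = successors[act]
        if not succs:
            latest_finish[act] = project_length
        else:
            latest_finish[act] = min(lf(s) - activities[s][0] for s in succs)
    return latest_finish[act]

critical = [a for a in activities if lf(a) - ef(a) == 0]
print("project length:", project_length, "days; critical activities:", critical)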

Examples of Services Provided by Computerized Tools


1. Control of Risk Management Activities
Computerized tools support risk control by generating detailed reports that help identify and manage threats
to project success.
Functions include:
 Producing lists of software risk items, categorized by type (technical, managerial, etc.), with their
planned resolution dates.
 Generating exception lists showing overdue or unresolved risks, especially those that could delay
project completion.
✅ Benefit: Enables proactive identification and resolution of risks before they escalate.

2. Project Schedule Control


Tools assist in continuously tracking progress against the planned schedule.
Functions include:
 Classified lists of delayed activities, organized by cause or severity.
 Lists of delayed critical activities, where delays could affect overall completion.
 Automatically updated activity schedules, adjusted according to latest progress reports and
corrective measures — for teams, development units, or the entire project.
 Classified lists of delayed milestones, showing which key deliverables are behind schedule.
 Updated milestone schedules incorporating new timelines and applied corrections.
✅ Benefit: Keeps schedules accurate and dynamic, ensuring managers respond quickly to emerging delays.

3. Project Resource Control


These tools help manage and monitor how human and technical resources are allocated and used.
Functions include:
 Generating resource allocation plans for specific activities, software modules, teams, or time periods.
 Tracking actual resource utilization — by period or cumulatively — for comparison with plans.
 Identifying exceptions in resource utilization, such as overuse or underuse in specific areas or time
frames.
 Producing updated resource allocation plans based on latest progress reports and corrective
measures.
✅ Benefit: Improves efficiency and prevents bottlenecks or underutilization of key resources.

4. Project Budget Control


Computerized tools play a vital role in tracking project financial performance.
Functions include:
 Creating budget plans by activity, software module, team, unit, or time period.
 Providing budget utilization reports, showing actual spending over time or cumulatively.
 Detecting budget utilization deviations — both short-term (periodic) and long-term (cumulative).
 Producing updated budget plans that reflect progress reports and financial corrections.
✅ Benefit: Allows continuous financial monitoring and early detection of overruns or inefficiencies.

Summary Table
Control Component | Functions of Computerized Tools | Primary Benefit
Risk Management | Lists of risk items, overdue risk reports | Early detection and mitigation of risks
Schedule Control | Delay tracking, critical activity and milestone updates | Real-time visibility of schedule health
Resource Control | Allocation plans, utilization tracking, exception reports | Efficient and balanced resource usage
Budget Control | Budget plans, spending reports, deviation tracking | Financial control and prevention of overruns

Overall Summary
Computerized project progress control tools are indispensable in modern software project management.
By automating data collection, analysis, and reporting, they:
 Enhance visibility and coordination,
 Support timely corrective action, and
 Ensure integrated control across risk, schedule, resource, and budget dimensions.
These tools make project management data-driven, responsive, and far more efficient than traditional
manual methods.

21.1 Objectives of quality measurement
Software quality specialists and other software engineers have formulated the main objectives for software quality metrics; these are presented below.

Main Objectives of Software Quality Metrics


1. Management Control and Decision Support
Software quality metrics help management plan, monitor, and control software projects effectively.
They provide a factual basis for identifying where intervention is needed by measuring:
 Deviations in quality performance — the gap between actual and planned functional or quality
outcomes.
 Deviations in schedule and budget performance — differences between actual progress and the
planned timeline or costs.
✅ Purpose: To support informed decision-making, corrective actions, and ensure project alignment with
quality goals.
2. Process Improvement
Metrics are also used to identify opportunities for improvement in software development and maintenance
processes.
This is achieved through:
 Collecting and analyzing performance data from teams, units, and projects.
✅ Purpose: To detect patterns or problem areas that indicate where preventive or corrective actions can
enhance overall process effectiveness and product quality.
Comparison provides the practical basis for management's application of metrics and for SQA improvement in general. Metrics are used to compare performance data with indicators, that is, quantitative values such as:
■ Defined software quality standards
■ Quality targets set for organizations or individuals
■ The previous year's quality achievements
■ The previous project's quality achievements
■ Average quality levels achieved by other teams applying the same development tools in similar development environments
■ Average quality achievements of the organization
■ Industry practices for meeting quality requirements.
For the selected quality metrics to be applicable and successful, they must satisfy both general and operative requirements, as presented below.

Requirements for Software Quality Metrics


To ensure that software quality metrics are useful, accurate, and applicable, they must meet certain general
and operative requirements.
A. General Requirements
Requirement | Explanation
Relevant | The metric must relate to an attribute of substantial importance to software quality or project goals.
Valid | The metric must accurately measure the intended attribute.
Reliable | When applied under similar conditions, the metric should produce consistent results.
Comprehensive | The metric should be applicable across a wide variety of implementations, projects, and situations.
Mutually Exclusive | Each metric should measure a distinct attribute, to avoid overlap or duplication with other metrics.
B. Operative Requirements
Requirement | Explanation
Easy and Simple | Data collection for the metric should be straightforward and require minimal resources.
Does Not Require Independent Data Collection | Metrics should be integrated with existing project data collection systems (e.g., attendance, wages, cost accounting), improving efficiency and coordination across organizational systems.
Immune to Biased Interventions | Metrics should be designed to minimize manipulation by individuals attempting to influence results. This is achieved by careful metric selection and by establishing appropriate data collection procedures.

21.2 Classification of Software Quality Metrics


Software quality metrics can be classified according to two main categories, forming a two-level
classification system.

1. First-Level Classification – Based on the Software Life Cycle Phase


 Process Metrics These metrics relate to the software development process and evaluate the
efficiency and effectiveness of the development activities. (Discussed in Section 21.3.)
 Product Metrics These metrics relate to the software product itself, particularly during the
maintenance phase, and measure the quality and performance of the delivered software.
(Discussed in Section 21.4.)
2. Second-Level Classification – Based on the Subject of Measurement
Software quality metrics can also be categorized according to what they measure, such as:
 Quality – Assessing the degree to which the software meets specified requirements and user
expectations.
 Timetable – Measuring adherence to schedules and delivery deadlines.
 Effectiveness – Evaluating how efficiently errors are detected, removed, and maintenance tasks are
performed.
 Productivity – Measuring the efficiency of resource usage (e.g., manpower, time) during development
and maintenance.
Each of these aspects is addressed in separate sections for detailed discussion.
3. Measures of System Size
Many software quality metrics depend on the size of the software system, which can be expressed using one
of two common measures:
 KLOC (Thousands of Lines of Code):
A traditional metric measuring software size by the number of code lines.
⚠️ Limitation: Since programming languages and tools differ in how much code they require for the same task, KLOC is only valid for systems developed in the same programming language or tool environment.
 Function Points:
A language-independent measure estimating the amount of functionality delivered by the software
and the resources required to develop it.
(Further details in Appendix 21A.)
4. Exclusions
Metrics related to customer satisfaction are not included in this discussion, as they are extensively covered
in marketing literature rather than software engineering.

21.3 Process Metrics


Software development process metrics are used to evaluate and control various aspects of the software
development process.
They fall into four main categories:
1. Software Process Quality Metrics – Measure the quality of the development process.
2. Software Process Timetable Metrics – Measure adherence to project schedules.
3. Error Removal Effectiveness Metrics – Measure how efficiently errors are detected and corrected.
4. Software Process Productivity Metrics – Measure the efficiency of resource use in development
activities.
21.3.1 Software Process Quality Metrics
These metrics assess the quality of the software development process and can be classified into two main
types, along with an additional indirect metric:
 Error Density Metrics
 Error Severity Metrics
 (Indirectly related: McCabe’s Cyclomatic Complexity Metric) – a measure of software complexity that
indirectly reflects process quality (see Section 9.4.4).
A. Error Density Metrics
Definition:
Error density metrics measure the number of errors relative to the software volume, providing an indication
of process quality and stability.
Components of Calculation:
1. Software Volume Measures
o Can be expressed in:
 Lines of Code (LOC or KLOC)
 Function Points (FP)
o (See Section 21.2 for a comparison between these two measures.)
2. Errors Counted Measures
o Simple Count: Total number of detected errors.
o Weighted Count: Adjusts for error severity, giving more importance to serious defects.
 Errors are classified into severity levels (e.g., using the five-level system from MIL-STD-498).
 A weighted error measure is then calculated by multiplying the number of errors in
each class by its severity weight, and summing these values.
Purpose:
Weighted error density metrics are considered more accurate indicators of software quality issues than
simple unweighted counts.
They can also lead to different management decisions, since they emphasize the impact of critical errors
rather than just their number.

Example 1 – Calculation of Error Density Metrics


This example demonstrates how to calculate two key measures of software quality based on detected code
errors:
 NCE (Number of Code Errors)
 WCE (Weighted Code Errors)
Error Severity Classes and Relative Weights
Error severity class Relative weight
Low severity 1
Medium severity 3
High severity 9
Error Summary for the “Atlantis” Project
Error severity class (a) | Number of errors (b) | Relative weight (c) | Weighted errors (d = b × c)
Low severity | 42 | 1 | 42
Medium severity | 17 | 3 | 51
High severity | 11 | 9 | 99
Total | 70 | - | 192
NCE (total of column b) = 70; WCE (total of column d) = 192
Results
 NCE (Number of Code Errors) = 70
 WCE (Weighted Code Errors) = 192
This means that while 70 errors were found in total, when severity is considered, the weighted total impact
of those errors is 192, highlighting that high-severity errors have a much larger effect on software quality.
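
A minimal Python sketch of this calculation, using the severity weights and the Atlantis error counts from the tables above:

# Weighted error count (WCE) versus simple count (NCE) for the Atlantis project.
severity_weights = {"low": 1, "medium": 3, "high": 9}
errors_found     = {"low": 42, "medium": 17, "high": 11}

NCE = sum(errors_found.values())                                        # simple count
WCE = sum(errors_found[s] * severity_weights[s] for s in errors_found)  # weighted count

print(NCE, WCE)   # 70, 192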
Table 21.1 – Error Density Metrics (Key Measures and Definitions)
Code | Name | Calculation formula
CED | Code Error Density | CED = NCE / KLOC
DED | Development Error Density | DED = NDE / KLOC
WCED | Weighted Code Error Density | WCED = WCE / KLOC
WDED | Weighted Development Error Density | WDED = WDE / KLOC
WCEF | Weighted Code Errors per Function point | WCEF = WCE / NFP
WDEF | Weighted Development Errors per Function point | WDEF = WDE / NFP
Key:
1. NCE = number of code errors detected in the software code by code inspections and testing. Data for this
measure are culled from code inspection and testing reports.
2. KLOC = thousands of lines of code.
3. NDE = total number of development (design and code) errors detected in the software development process. Data
for this measure are found in the various design and code reviews and testing reports conducted.
4. WCE = weighted code errors detected. The sources of data for this metric are the same as those for NCE.
5. WDE = total weighted development (design and code) errors detected in development of the software. The sources of data for this metric are the same as those for NDE.
6. NFP = number of function points required for development of the software. Sources for the number of
function points are professional surveys of the relevant software.

Example 2. This example follows Example 1 and introduces the factor of weighted measures so as to demonstrate the implications of their use. A software development department applies two alternative metrics for calculation of code error density: CED and WCED. The unit determined the following indicators for unacceptable software quality: CED > 2 and WCED > 4. For our calculations we apply the three severity classes and their relative weights and the code error summary for the Atlantis project mentioned in Example 1. The software system size is 40 KLOC. Calculation of the two metrics resulted in the following:
Measures and metrics | Calculation of CED (Code Error Density) | Calculation of WCED (Weighted Code Error Density)
NCE | 70 | -
WCE | - | 192
KLOC | 40 | 40
CED (NCE/KLOC) | 1.75 | -
WCED (WCE/KLOC) | - | 4.8

The conclusions reached after applying the unweighted versus the weighted metric are different. While the CED value (1.75) does not indicate quality below the acceptable level, the WCED value (4.8) exceeds the unit's indicator, meaning that the weighted error density is too high and the code's quality is not acceptable, a result that calls for management intervention.
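
The same comparison can be reproduced with a few lines of Python; the error counts, system size, and the indicators CED > 2 and WCED > 4 are taken from the example above:

# Unweighted (CED) vs. weighted (WCED) code error density, checked against the unit's indicators.
NCE, WCE, KLOC = 70, 192, 40
CED  = NCE / KLOC    # 1.75
WCED = WCE / KLOC    # 4.8

for name, value, limit in [("CED", CED, 2), ("WCED", WCED, 4)]:
    status = "unacceptable" if value > limit else "acceptable"
    print(f"{name} = {value:.2f} (limit {limit}) -> {status}")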
Error severity metrics
The metrics belonging to this group are used to detect adverse situations in which the number of severe errors increases, even when errors and weighted errors, as measured by the error density metrics, are generally decreasing. Two error severity metrics are presented in Table 21.2.

21.3.2 Software process timetable metrics


Software process timetable metrics may be based on accounts of success (completion of milestones per schedule) in addition to failure events (non-completion per schedule). An alternative approach calculates the average delay in completion of milestones. The metrics presented here are based on the two approaches illustrated in Table 21.3.
The TTO and ADMC metrics are based on data for all relevant milestones scheduled in the project plan. In other words, only milestones that were designated for completion in the project plan stage are considered in the metrics' computation. Therefore, these metrics can be applied throughout development and need not wait for the project's completion.
21.3.3 Error removal effectiveness metrics
Software developers can measure the effectiveness of error removal by the software quality assurance
system after a period of regular operation (usual- ly 6 or 12 months) of the system. The metrics combine
the error records of the development stage with the failures records compiled during the first year (or any
defined period) of regular operation. Two error removal effectiveness metrics are presented in Table
21.4.

Key:
 MSOT = milestones completed on time.
 MS = total number of milestones.
 TCDAM = Total Completion Delays (days, weeks, etc.) for All Milestones.
To calculate this measure, the delays reported for all relevant milestones are summed. Milestones completed on time or before schedule are counted as "0" delays. Some professionals treat completion of milestones before schedule as "minus" delays, allowing them to offset the recorded delays (which we might call "plus" delays). In these cases, the value of ADMC may be lower than the value obtained with the metric as originally suggested.
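
The sketch below illustrates how TTO and ADMC could be computed from a list of milestone delays. Because Table 21.3 itself is not reproduced in these notes, the formulas TTO = MSOT / MS and ADMC = TCDAM / MS are assumed from the variable definitions above, and the milestone data are invented.

# Assumed formulas (Table 21.3 not reproduced here): TTO = MSOT / MS, ADMC = TCDAM / MS.
milestone_delays_days = [0, 0, 3, 0, 7, 0, 0, 12]   # 0 = completed on or before schedule

MS    = len(milestone_delays_days)                     # total milestones
MSOT  = sum(1 for d in milestone_delays_days if d <= 0)   # milestones completed on time
TCDAM = sum(max(d, 0) for d in milestone_delays_days)     # early completions counted as "0" delays

TTO  = MSOT / MS     # fraction of milestones completed on time
ADMC = TCDAM / MS    # average delay per milestone (days)
print(f"TTO = {TTO:.2f}, ADMC = {ADMC:.2f} days")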

Key:
 NYF = number of software failures detected during a year of maintenance service.
 WYF = weighted number of software failures detected during a year of maintenance service.

21.3.4 Software process productivity metrics


This group of metrics includes “direct” metrics that deal with a project’s human resources productivity
as well as “indirect” metrics that focus on the extent of software reuse. Software reuse substantially affects
productivity and effectiveness.
An additional term – “benchmarking software development productivity” – has recently entered the list
of metrics used to measure software process productivity (see Maxwell, 2001; Symons, 2001).
Four process productivity metrics, direct and indirect, are presented in Table 21.5.
21.4 Product Metrics
Definition: Product metrics focus on the operational phase of a software system, that is, the period when the software is in regular use by customers (either internal or external). These metrics assess the quality and productivity of customer support and maintenance activities once the software has been released.
Key (for Table 21.5, the process productivity metrics of Section 21.3.4):
■ DevH = total working hours invested in the development of the software system.
■ ReKLOC = number of thousands of reused lines of code.
■ ReDoc = number of reused pages of documentation.
■ NDoc = number of pages of documentation.
In most cases, the software developer is required to provide customer service during the software's operational phase. Customer services are of two main types:
■ Help desk services (HD) – software support by instructing customers regarding the method of application of the software and the solution of customer implementation problems. Demand for these services depends to a great extent on the quality of the user interface (its "user friendliness") as well as the quality of the user manual and integrated help menus.
■ Corrective maintenance services – correction of software failures identified by customers/users or detected by the customer service team prior to their discovery by customers. The number of software failures and their density are directly related to software development quality. For completeness of information and better control of failure correction, it is recommended that all software failures detected by the customer service team be recorded as corrective maintenance calls.

The array of software product metrics presented here is classified as follows:


■ HD quality metrics
■ HD productivity and effectiveness metrics
■ Corrective maintenance quality metrics
■ Corrective maintenance productivity and effectiveness metrics.
It should be remembered that software maintenance activities include:
■ Corrective maintenance – correction of software failures detected during regular operation of the
software.
■ Adaptive maintenance – adaptation of existing software to new customers or new requirements.
■ Functional improvement maintenance – addition of new functions to the existing software,
improvement of reliability, etc.
In the metrics presented here we limit our selection to those that deal with corrective maintenance. For
other components of software maintenance, the metrics suggested for the software development process
(process metrics) can be used as is or with minor adaptations.

21.4.1 HD quality metrics


The types of HD quality metrics discussed here deal with:

1. HD calls density metrics – the extent of customer requests for HD services as measured by the number of calls.
2. Metrics of the severity of the HD issues raised.
3. HD success metrics – the level of success in responding to these calls. A success is achieved by
completing the required service within the time determined in the service contract.
HD calls density metrics
This section describes six different types of metrics. Some relate to the number of errors and others to a weighted number of errors. As for size/volume measures of the software, some use the number of lines of code while others apply function points. The sources of data for these and the other metrics in this group are HD reports. Three HD calls density metrics for HD performance are presented in Table 21.6.
Key:
■ NHYC = number of HD calls during a year of service.
■ KLMC = thousands of lines of maintained software code.
■ WHYC = weighted HD calls received during one year of service.
■ NMFP = number of function points to be maintained.
Severity of HD calls metrics
The metrics belonging to this group aim at detecting one type of adverse situation: increasingly severe HD calls. The computed results may contribute to improvements in all or parts of the user interface (its "user friendliness") as well as the user manual and integrated help menus. We have selected one metric from this group to demonstrate how the entire category is employed. This metric, the Average Severity of HD Calls (ASHC), refers to the calls received during a period of one year (or any portion thereof, as appropriate):
ASHC = WHYC / NHYC
where WHYC and NHYC are defined as in Table 21.6.
Success of the HD services
The most common metric for the success of HD services is the capacity to solve problems raised by
customer calls within the time determined in the service contract (availability). Thus, the metric for
success of HD services compares the actual with the designated time for provision of these services. For
example, the availability of help desk (HD) services for an inventory management software package is
defined as follows:
■ The HD service undertakes to solve any HD call within one hour.
■ The probability that HD call solution time exceeds one hour will not exceed 2%.
■ The probability that HD call solution time exceeds four working hours will not exceed 0.5%.
One metric of this group is suggested here, HD Service Success (HDS):

HDS = NHYOT / NHYC
where NHYOT = number of HD calls per year completed on time during one year of service.
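
A short sketch combining the two HD metrics just defined; the yearly figures are invented:

# Average severity of HD calls (ASHC) and HD service success (HDS) for one hypothetical year.
NHYC  = 1200   # HD calls received during the year
WHYC  = 2900   # the same calls weighted by severity
NHYOT = 1150   # calls whose solution met the contractual time limit

ASHC = WHYC / NHYC     # average severity per call
HDS  = NHYOT / NHYC    # share of calls solved on time
print(f"ASHC = {ASHC:.2f}, HDS = {HDS:.1%}")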

21.4.2 Help Desk (HD) Productivity and Effectiveness Metrics


Purpose
Help Desk (HD) metrics evaluate both:
 Productivity – how efficiently support resources are used overall.
 Effectiveness – how efficiently resources are used to handle each customer call.
A. HD Productivity Metrics
Definition:
Measure the total resources (e.g., working hours) invested in providing HD services over a defined period
(usually one year), in relation to the size of the maintained software system.
System size can be expressed as either:
 KLMC (Thousands of Lines of Maintained Code), or
 NMFP (Number of Maintained Function Points).

Two productivity metrics are defined (Table 21.7):
Metric | Full Name | Formula
HDP | Help Desk Productivity | HDP = HDYH / KLMC
FHDP | Function-Point Help Desk Productivity | FHDP = HDYH / NMFP
Key Variables:
 HDYH – Total yearly working hours invested in HD servicing.
 KLMC – Thousands of lines of maintained code.
 NMFP – Number of function points for the maintained software.
Interpretation:
 A lower HDP or FHDP indicates higher productivity, as fewer hours are required per software unit.
 These metrics can compare productivity across years or different software systems.
B. HD Effectiveness Metrics
Definition:
Effectiveness metrics relate to the average amount of effort invested per customer HD call.
Common Metric:
Metric | Formula
HDE (Help Desk Effectiveness) | HDE = HDYH / NHYC
Key Variables:
 HDYH – Total yearly HD working hours.
 NHYC – Number of yearly HD customer calls.
Interpretation:
 A lower HDE value means greater effectiveness — the help desk resolves customer calls more
efficiently.
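
A small sketch computing HDP, FHDP, and HDE from the formulas above; the yearly figures for the maintained system are hypothetical:

# HD productivity and effectiveness for one hypothetical maintained system.
HDYH = 4000    # yearly HD working hours
KLMC = 250     # thousands of lines of maintained code
NMFP = 1800    # maintained function points
NHYC = 1200    # yearly HD calls

HDP  = HDYH / KLMC   # hours per KLMC (lower = more productive)
FHDP = HDYH / NMFP   # hours per function point
HDE  = HDYH / NHYC   # hours per HD call (lower = more effective)
print(HDP, round(FHDP, 2), round(HDE, 2))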

21.4.3 Corrective Maintenance Quality Metrics


Purpose
Corrective maintenance metrics assess the quality and reliability of software maintenance services.
They distinguish between:
1. Software system failures (defects in the software itself), and
2. Maintenance service failures (ineffective or delayed fixes).
A. Classification of Maintenance Quality Metrics
Category | Focus | Purpose
Software System Failures Density Metrics | Frequency of failures during operation | Shows how often corrective maintenance is needed
Software System Failures Severity Metrics | Seriousness of failures | Detects whether failure impact is increasing
Failures of Maintenance Services Metrics | Cases where maintenance corrections fail or are delayed | Measures service reliability
Software System Availability Metrics | Downtime experienced by users | Reflects how system unavailability affects customers

B. Software System Failures Density Metrics (Table 21.8)
These metrics measure how many failures occur in a defined period (usually one year), relative to software
size.
Metric | Full Name | Formula
SSFD | Software System Failure Density | SSFD = NYF / KLMC
WSSFD | Weighted Software System Failure Density | WSSFD = WYF / KLMC
WSSFF | Weighted Software System Failures per Function Point | WSSFF = WYF / NMFP
Key Variables:
 NYF – Number of software failures detected per year.
 WYF – Weighted number of yearly failures (adjusted for severity).
 KLMC – Thousands of maintained lines of code.
 NMFP – Maintained software’s function points.
Interpretation:
 Higher SSFD/WSSFD/WSSFF values indicate poorer quality (more failures per unit of software).
 Weighted metrics better reflect the true impact of failures.
C. Software System Failures Severity Metric
To detect trends toward more severe failures, the following metric is used:
Metric | Full Name | Formula
ASSSF | Average Severity of Software System Failures | ASSSF = WYF / NYF
Interpretation:
 Higher ASSSF = More severe failures on average.
 Useful for identifying adverse quality trends, such as fewer but more critical failures.
 Can trigger retesting or reinspection of the affected modules.

Failures of Maintenance Services Metrics


Purpose
These metrics assess how often maintenance services themselves fail, rather than the software system.
A maintenance failure occurs when:
 The correction was not completed on time, or
 The correction failed, leading to a repeat repair for the same issue.
To focus on relevant cases, most organizations define a time window (commonly 3 months) for counting
repeat failures.
Metric: Maintenance Repeated Repair Failure (MRepF)
MRepF = RepYF / NYF
Where:
 RepYF = Number of repeated software failure calls (maintenance service failures).
 NYF = Total number of software failures detected during the year.
Interpretation:
 Represents the proportion of repairs that had to be repeated due to unsuccessful maintenance.
 Lower MRepF → Higher service quality (fewer repeated corrections).
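
The corrective maintenance quality metrics defined above can be computed together, as in this sketch with hypothetical yearly data:

# Failure density, average severity, and repeated-repair ratio for one hypothetical year.
NYF   = 80     # failures detected during the year
WYF   = 210    # the same failures weighted by severity
RepYF = 6      # repairs that had to be repeated
KLMC  = 250    # thousands of lines of maintained code
NMFP  = 1800   # maintained function points

SSFD  = NYF / KLMC     # failures per KLMC
WSSFD = WYF / KLMC     # weighted failures per KLMC
WSSFF = WYF / NMFP     # weighted failures per function point
ASSSF = WYF / NYF      # average failure severity
MRepF = RepYF / NYF    # share of unsuccessful repairs
print(SSFD, WSSFD, round(WSSFF, 3), round(ASSSF, 2), MRepF)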
2. Software System Availability Metrics
Purpose
Measure how much time the software is operational and available to users versus how much time it
is partially or fully unavailable due to failures.
Three distinct types of availability are recognized:
1. Full Availability (FA) – all functions operate properly.
2. Vital Availability (VitA) – all vital (critical) functions work, though non-vital ones may fail.
3. Total Unavailability (TUA) – complete system failure (no functions available).
Formulas (Table 21.9)
Metric | Formula
Full Availability (FA) | FA = (NYSerH - NYFH) / NYSerH
Vital Availability (VitA) | VitA = (NYSerH - NYVitFH) / NYSerH
Total Unavailability (TUA) | TUA = NYTFH / NYSerH
Key Variables:
 NYSerH = Total service hours per year.
o e.g. Office software: 50 hr/week × 52 weeks = 2600 hr/year.
o Real-time software: 24 hr/day × 365 days = 8760 hr/year.
 NYFH = Hours when at least one function failed (partial or total).
 NYVitFH = Hours when at least one vital function failed.
 NYTFH = Hours when total system failure occurred.
Relationships:
NYFH ≥ NYVitFH ≥ NYTFH
1 - TUA ≥ VitA ≥ FA
Interpretation:
 High FA / VitA = system is stable and reliable.
 High TUA = poor system availability (frequent total failures).
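
A sketch of the availability calculation, using the office-software service pattern from the key (50 hr/week × 52 weeks) and invented failure hours:

# Availability metrics of Table 21.9 for one hypothetical year of an office system.
NYSerH  = 50 * 52    # 2600 yearly service hours
NYFH    = 60         # hours with at least one function down
NYVitFH = 18         # hours with at least one vital function down
NYTFH   = 4          # hours of total system failure

FA   = (NYSerH - NYFH)    / NYSerH
VitA = (NYSerH - NYVitFH) / NYSerH
TUA  = NYTFH / NYSerH
print(f"FA = {FA:.3f}, VitA = {VitA:.3f}, TUA = {TUA:.4f}")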
3. Software Corrective Maintenance Productivity and Effectiveness Metrics
Purpose
Assess both:
 Productivity – efficiency of the entire corrective maintenance operation.
 Effectiveness – efficiency in fixing each individual failure.
A. Corrective Maintenance Productivity Metrics
Metric | Formula
CMaiP | CMaiP = CMaiYH / KLMC
FCMP | FCMP = CMaiYH / NMFP
Key Variables:
 CMaiYH = Total yearly working hours spent on corrective maintenance.
 KLMC = Thousands of lines of maintained code.
 NMFP = Number of maintained function points.
Interpretation:

 Lower CMaiP / FCMP → Higher productivity, since fewer hours are needed per software unit.
B. Corrective Maintenance Effectiveness Metric
Metric | Formula
CMaiE | CMaiE = CMaiYH / NYF
Key Variables:
 CMaiYH = Total yearly maintenance hours.
 NYF = Number of failures corrected during the year.
Interpretation:
 Lower CMaiE → Higher effectiveness (less effort required per fix).
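
A sketch of the three corrective maintenance productivity and effectiveness metrics, again with hypothetical figures:

# Corrective maintenance productivity (CMaiP, FCMP) and effectiveness (CMaiE).
CMaiYH = 1500   # yearly corrective maintenance hours
KLMC   = 250    # thousands of lines of maintained code
NMFP   = 1800   # maintained function points
NYF    = 80     # failures corrected during the year

CMaiP = CMaiYH / KLMC   # hours per KLMC
FCMP  = CMaiYH / NMFP   # hours per function point
CMaiE = CMaiYH / NYF    # hours per corrected failure
print(CMaiP, round(FCMP, 2), round(CMaiE, 1))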
21.5 Implementation of Software Quality Metrics
To ensure software quality metrics are useful and effective, an organization must do more than just define
them — it must systematically implement, monitor, and refine their use over time.
Key Steps in Implementation
1. Definition of relevant software quality metrics: Metrics must be relevant and adequate for the
needs of teams, departments, and projects.
2. Regular application: Metrics should be collected and analyzed on a routine basis (per project, per
quarter, etc.).
3. Statistical analysis: Collected metrics data should be statistically analyzed to reveal patterns,
deviations, and opportunities for improvement.
4. Follow-up and corrective actions: Based on the analysis, the organization may:
o Adjust organizational methods or procedures in software development or maintenance.
o Modify metrics or data collection processes.
o Use findings to plan and execute corrective or preventive actions across relevant units.
Example: Nokia’s experience (Kilpi, 2001) demonstrated the technical side of applying metrics but did not
elaborate on their use in managerial decision-making (e.g., productivity, effectiveness).
21.5.1 Definition of New Software Quality Metrics
Defining new (or improved) software quality metrics involves a four-stage process:
Stage 1: Define Attributes to be Measured
 Identify what aspects need to be measured, such as:
o Software quality
o Development team productivity
o Maintenance effectiveness, etc.
Stage 2: Define the Metric and Validate Its Adequacy
 Design metrics that accurately measure the chosen attributes.
 Confirm that each metric meets the general and operative requirements (see Frame 21.2):
o Relevant, valid, reliable, comprehensive, and mutually exclusive
o Easy to apply, unbiased, and integrated with existing systems
Stage 3: Determine Comparative Target Values
 Set benchmark or target values for each metric, based on:
o Industry standards
o Previous year’s achievements
o Past project performance
o Organizational goals
 These targets act as reference indicators to evaluate compliance or improvement.
Stage 4: Define the Metrics Application Process
 Specify how metrics will be reported and collected, including:
o Reporting method (who reports, when, how frequently)
o Data collection method (automatic tools, manual records, integrated systems)
 Metrics should be updated as the organization evolves and as data analysis suggests refinements.

21.5.2 Application of the Metrics – Managerial Aspects


The introduction and use of metrics resemble the adoption of any new management or quality assurance
procedure. It requires both organizational commitment and continuous oversight.
Key Managerial Activities:
1. Assign Responsibilities: Define who is responsible for:
 Metrics data collection
 Report preparation and submission
 Oversight of data accuracy
2. Train and Instruct Teams: Provide clear guidance on:
 The purpose of each metric
 How to collect and report data correctly
 How metrics contribute to quality improvement
3. Follow-Up and Support:Management should ensure ongoing support by:
 Helping resolve implementation issues
 Offering clarification or training as needed
 Monitoring completeness and accuracy of data reporting
4. Update and Refine Metrics: Periodically review metrics and modify them based on:
 Lessons learned from past projects
 Changes in organizational structure or tools
 Evolving software engineering practices

An interesting application of software quality metrics for comparison of national software industries
is presented in the following example.

Example – Comparison of US and Japanese software industries Cusumano (1991) makes use of three
metrics in a comparison of the US and Japanese software industries:

 Mean productivity
 Failure density (based on measurements during the first 12 months after system delivery)
 Code reuse.
These metrics are presented in Table 21.11, and Cusumano's results are presented in Table 21.12.

21.5.3 Statistical Analysis of Metrics Data


Purpose
The statistical analysis of software quality metrics allows organizations to compare performance, identify
trends, and evaluate improvements across projects, teams, and time periods. It turns raw data into
actionable insights for Software Quality Assurance (SQA) management.
1. Uses of Metrics Data Analysis
Metrics analysis supports:
 Comparisons against predefined indicators (e.g., targets, benchmarks, standards)
 Comparisons between:
o Different projects or releases
o Different teams or departments
o Different time periods within the same team
o Different tools, methods, or organizational changes
These comparisons help answer practical managerial questions, such as:
 Are there significant differences between help desk (HD) teams’ service quality?
 Does introducing a new development tool improve software quality?
 Did a recent reorganization improve productivity?
2. Example – Industry Comparison
Tables 21.11 and 21.12 (based on Cusumano, 1991) illustrate how metrics can be used to compare software
industries across countries — in this case, U.S. vs. Japanese software companies.
Metric | Formula | Interpretation
Mean Productivity | KNLOC / WorkY | Lines of code produced per work-year
Failure Density | NYF / KNLOC | Number of failures per thousand lines of code
Code Reuse | ReKNLOC / KNLOC | Percentage of reused code lines
Table 21.12 – Comparison Results
Metric | U.S. | Japan
Mean Productivity | 7,290 | 12,447
Failure Density | 4.44 | 1.96
Code Reuse | 9.71% | 18.25%
Number of Companies | 20 | 11

Interpretation:
Japanese companies demonstrated higher productivity, lower failure density, and greater code reuse —
indicating better software quality and efficiency practices overall.
3. Types of Statistical Analysis
Metrics data can be analyzed using two major approaches:
A. Descriptive Statistics
Used for summarizing and visualizing data to reveal trends, patterns, and anomalies.
Common Tools & Techniques:
 Mean, median, mode
 Histograms
 Cumulative distribution graphs
 Pie charts
 Control charts (often showing indicator or target values)
Purpose:
 Quickly identify trends (e.g., improvement or degradation in quality)
 Detect deviations from target values
 Flag situations that may require corrective or preventive actions
Limitations:
 Descriptive statistics do not test significance — i.e., they don’t tell whether trends are due to actual
improvement or just random variation.
B. Analytical (Inferential) Statistics
Used to test the significance of observed differences or changes in metrics data — determining whether
results reflect real changes or random fluctuations.
Common Analytical Tools:
 T-test – compares two averages (e.g., before and after process change)
 Chi-square test – tests relationships between categorical variables
 Regression analysis – examines how one factor influences another (e.g., tool adoption vs. error rates)
 Analysis of variance (ANOVA) – compares means among multiple groups or projects
Purpose:
 Validate that observed trends are statistically significant
 Support data-driven decisions about process improvements
Challenge:
 Applying analytical statistics to software performance metrics can be difficult due to:
o The complexity of software systems
o The many interrelated factors influencing quality (tools, teams, design, etc.)
For deeper understanding, further reading in statistical analysis and SQA research is recommended.
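
As an illustration of the analytical approach, the following hedged sketch applies a two-sample t-test (scipy.stats.ttest_ind) to invented WCED values from projects completed before and after a tool change; a p-value below 0.05 would suggest the observed improvement is unlikely to be random variation.

from scipy import stats

# Hypothetical weighted code error density (WCED) values per project.
wced_before = [4.8, 5.1, 4.2, 5.6, 4.9, 5.3]
wced_after  = [3.9, 4.1, 3.6, 4.4, 3.8, 4.0]

t_stat, p_value = stats.ttest_ind(wced_before, wced_after)
if p_value < 0.05:
    print(f"difference is statistically significant (p = {p_value:.4f})")
else:
    print(f"no significant difference detected (p = {p_value:.4f})")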
4. Overall Importance
Statistical analysis transforms metrics from mere numbers into decision-making tools.
When applied correctly, it helps organizations:
 Validate the effectiveness of process changes
 Identify areas needing improvement
 Ensure objective performance assessment
 Support a culture of continuous quality improvement

21.5.4 Taking Action in Response to Metrics Analysis Results
Purpose
Once metrics data has been analyzed, organizations must take practical actions to address findings,
improve processes, and maintain quality performance.
Types of Actions
Metrics-driven actions can be classified into two main types:
1. Direct Actions
Initiated by project or team management based on metrics results from their own unit.
Examples include:
 Reorganization of teams or processes
 Changes in software development or maintenance methods (e.g., adopting new tools, refining
testing approaches)
 Revision of metrics themselves to improve their relevance or accuracy
These actions are typically local and immediate, aimed at addressing identified weaknesses or reinforcing
effective practices.
2. Indirect Actions
Initiated by the Corrective Action Board (CAB) — a central quality oversight body.
 CAB actions are based on aggregated analysis of metrics data from multiple projects or departments.
 They typically lead to organization-wide process changes, such as new standards, training programs,
or updates to quality procedures.
 Detailed discussion of CAB’s role is provided in Chapter 17.
Summary:
Metrics → Analysis → Direct or Indirect Actions → Continuous Improvement Cycle

21.6 Limitations of Software Metrics


Overview
Although software quality metrics are valuable tools, their application is challenging due to both general
organizational barriers and software-specific issues.
1. General Obstacles
These are common across most industries applying performance metrics:
Category | Description
Budget constraints | Insufficient resources (manpower, funds, tools) to build and sustain a metrics system
Human factors | Employee resistance to evaluation; fear of surveillance or unfair judgment
Data validity issues | Incomplete, inaccurate, or biased data due to inconsistent reporting practices
These factors can undermine the credibility and usefulness of metrics-based decisions.
2. Unique Software-Related Limitations
Unlike manufacturing or service industries, software development presents unique challenges due to the
nature of its products and processes.
Most software metrics suffer from low validity (they don’t measure what they intend to) and limited
comprehensiveness (they don’t capture the full picture).
Examples of affected metrics:
 Process metrics: KLOC (thousands of lines of code), NCE (number of code errors), NDE (number of development errors)
 Product/maintenance metrics: KLMC (thousands of lines of maintained code), NHYC (number of yearly HD calls), NYF (number of yearly failures)
3. Factors Affecting Development Process Metrics
Factor | Impact on Metrics
(1) Programming style | "Verbose" or inefficient code inflates KLOC without increasing functionality
(2) Documentation comments | Large volumes of comments increase code size, distorting KLOC-based metrics
(3) Software complexity | Complex modules take more time and contain more defects (affects KLOC, NCE)
(4) Code reuse percentage | More reuse means higher productivity and fewer defects (affects NDE, NCE)
(5) Professionalism of QA teams | Influences how many defects are actually detected
(6) Reporting style | Teams differ in how they record findings; concise vs. detailed reports cause inconsistency in NCE/NDE values
Result:
Two projects of equal quality may appear very different in metrics results due to coding or reporting variations.
4. Factors Affecting Maintenance (Product) Metrics
Factor | Impact on Metrics
(1) Quality of installed software and documentation | Poor initial quality leads to more failures (↑ NYF, ↑ NHYC)
(2) Programming style and code documentation | Wasteful coding inflates the maintenance workload (↑ KLMC)
(3) Software complexity | Complex modules require more maintenance effort per line of code (↑ NYF)
(4) Code reuse | Higher reuse means fewer defects and fewer help desk calls (↓ NYF)
(5) User population and installations | More users or installations generate more defect reports (↑ NHYC, ↑ NYF)
These variations distort how maintenance productivity or quality appear when measured numerically.

5. Consequences
Because these factors distort metrics results:
 Many metrics fail to reflect true quality or productivity
 Comparison between teams or systems can be misleading
 Decision-making based solely on metrics can lead to incorrect conclusions
Thus, while metrics are valuable, they must always be interpreted contextually, not mechanically.
6. Future Directions and Improvements
Substantial research and innovation are needed to design better software-specific metrics.
One major improvement is the Function Point method, which:
 Measures functionality delivered to the user, not just code volume.
 Is less dependent on programming language or style.
 Offers a more reliable and consistent measure of development effort.
(A detailed discussion of the Function Point method appears in Appendix 21A.)
