SQL 4-1
of their extent.
The components of project progress control and their implementation are discussed in this
chapter. Special attention is given to the difficulties involved in controlling external participants
and internal projects. Another section deals with tools for project progress control.
After completing this chapter, you will be able to:
1. Explain the components of project progress control.
2. Explain the implementation issues involved in project progress control.
20.1 The components of project progress control
Project progress control (CMM uses the term “software project tracking”) has one immediate
objective: early detection of irregular events. Detection promotes the timely initiation of problem-
solving responses. The information accumulated by progress control, together with records of successes and extreme
failures, also serves a long-term objective: the initiation of corrective actions.
Main Components of Project Progress Control
1. Control of risk management activities
2. Project schedule control
3. Project resource control
4. Project budget control
1. Control of Risk Management Activities
Involves monitoring identified software development risk items, such as:
o Risks found during the pre-project stage
o Risks noted in contract reviews and project plans
o New risks discovered during project execution
The project team should perform systematic risk management, including:
o Regular (periodic) assessments of current risk items
o Evaluation of the effectiveness of risk responses already applied
o Updating risk mitigation plans as necessary
Project managers must step in when certain risks escalate or become critical.
The process is guided by established standards and references such as:
o IEEE (2001) standards on software risk management
o Jones (1994) — a reference on software project risks
2. Project Schedule Control
Definition:
Project schedule control ensures that the project adheres to its approved and contracted timelines. It focuses
on monitoring progress against the schedule to detect and address delays early.
Key Elements:
1. Milestones as Control Points
o Project progress is tracked through milestones, which mark the completion of key activities or
deliverables.
o Milestones are deliberately placed to make it easier to identify delays or deviations from the
plan.
o Contractual milestones (e.g., product delivery dates, completion of development phases) are
especially important and receive extra attention.
2. Focus on Critical Delays
o Minor delays are expected in most projects, but management focuses primarily on critical
delays — those that can significantly impact the final delivery date or overall project
success.
3. Monitoring and Reporting
o Information about progress is gathered through:
Milestone reports
Periodic progress reports
o These reports provide the data management needs to evaluate schedule compliance.
4. Management Actions
o Based on these reports, management may:
Allocate additional resources (e.g., staff, tools, or time) to get the project back on
track.
Renegotiate the schedule with the customer if delays are unavoidable or justified.
3. Composition and Allocation Control
o Resource control also looks at how resources are distributed internally:
For example, even if total man-months for system analysts match the plan, the
allocation between junior and senior analysts might differ.
Spending more on senior analysts (e.g., 50% instead of the planned 25%) could strain
the budget later on; the sketch after this list illustrates this kind of check.
4. Early Detection and Action
o While budget control can eventually reveal such deviations, resource control identifies them
earlier, allowing timely intervention.
o If deviations are justified, management may:
Increase total resources (e.g., hire more staff or add equipment).
Reallocate existing resources (e.g., reorganize teams, revise the project plan).
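To make the composition check concrete, here is a minimal sketch in Python (hypothetical figures and grade names; the 25%/50% senior-analyst split mirrors the example above) that flags allocation deviations even when the total man-months match the plan:

```python
# Illustrative sketch: compare planned vs. actual man-month allocation per staff grade.
# All names and figures are hypothetical; totals match the plan, only the split differs.

PLANNED = {"senior_analyst": 25, "junior_analyst": 75}   # man-months
ACTUAL  = {"senior_analyst": 50, "junior_analyst": 50}   # man-months

def allocation_deviations(planned, actual, tolerance_pct=10.0):
    """Return grades whose actual allocation deviates from plan by more than tolerance_pct."""
    deviations = {}
    for grade, plan_mm in planned.items():
        actual_mm = actual.get(grade, 0)
        if plan_mm == 0:
            continue
        pct = 100.0 * (actual_mm - plan_mm) / plan_mm
        if abs(pct) > tolerance_pct:
            deviations[grade] = pct
    return deviations

for grade, pct in allocation_deviations(PLANNED, ACTUAL).items():
    print(f"{grade}: {pct:+.0f}% vs. plan -- review budget impact")
```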
o Relying on budget control as the sole progress control tool is problematic because:
Other controls can identify issues (risks, delays, overuse of resources) earlier.
Relying only on budget control means problems are detected later, potentially making
them costlier to fix.
20.3 Implementation of Project Progress Control Regimes
Purpose
The implementation of project progress control ensures that project activities are systematically
monitored and reported, allowing management to maintain an accurate and timely view of project
performance.
Key Components of Implementation
Project progress control relies on formal procedures that define:
1. Allocation of Responsibility
Responsibilities must be clearly assigned according to the project’s characteristics (e.g., size, complexity,
organizational structure).
These include:
Who is responsible for performing progress control tasks (e.g., project leader, project manager,
department head).
Frequency of reporting required from each unit and management level.
Conditions requiring immediate reporting by:
o Project leaders to management (e.g., major risks, critical delays, severe deviations).
o Lower management to upper management (e.g., when corrective actions exceed their
authority).
This ensures accountability and prompt communication of critical issues.
Project Leader’s Role:
Prepares periodic project progress reports summarizing:
1. Project risks (status and changes)
2. Project schedule (milestone progress, delays)
3. Resource utilization (human and technical resources)
Bases these reports on information collected from team leaders and other direct subordinates.
These reports form the foundation for higher-level management assessments and decisions.
(Figure 20.1, referenced in the text, likely shows an example of such a progress report.)
Overall Summary: Implementing project progress control requires a structured reporting framework,
clear roles and responsibilities, and strong coordination across all management levels. The goal is to ensure
that accurate, timely, and actionable information about project risks, schedules, and resources flows upward,
enabling effective decision-making and corrective actions.
Summary Table
Control Component | Functions of Computerized Tools | Primary Benefit
Risk Management | Lists of risk items, overdue risk reports | Early detection and mitigation of risks
Schedule Control | Delay tracking, critical activity and milestone updates | Real-time visibility of schedule health
Resource Control | Allocation plans, utilization tracking, exception reports | Efficient and balanced resource usage
Budget Control | Budget plans, spending reports, deviation tracking | Financial control and prevention of overruns
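As an illustration of the exception-report functions listed above, the following minimal sketch (hypothetical records and field names, not from the chapter) flags overdue risk reviews and delayed milestones in one pass:

```python
from datetime import date

# Hypothetical records; in a real tool these would come from the project database.
TODAY = date(2024, 6, 1)

risk_items = [
    {"name": "Key subcontractor delay", "next_review": date(2024, 5, 10), "reviewed": False},
    {"name": "Unstable middleware",     "next_review": date(2024, 6, 20), "reviewed": False},
]
milestones = [
    {"name": "Design review",      "due": date(2024, 5, 15), "done": None},
    {"name": "Integration tests",  "due": date(2024, 8, 1),  "done": None},
]

# Exception report: overdue risk reviews and delayed milestones.
for r in risk_items:
    if not r["reviewed"] and r["next_review"] < TODAY:
        print(f"RISK REVIEW OVERDUE: {r['name']} (planned {r['next_review']})")
for m in milestones:
    if m["done"] is None and m["due"] < TODAY:
        print(f"MILESTONE DELAYED: {m['name']} (due {m['due']})")
```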
Overall Summary
Computerized project progress control tools are indispensable in modern software project management.
By automating data collection, analysis, and reporting, they:
Enhance visibility and coordination,
Support timely corrective action, and
Ensure integrated control across risk, schedule, resource, and budget dimensions.
These tools make project management data-driven, responsive, and far more efficient than traditional
manual methods.
21.1 Objectives of quality measurement
Software quality professionals and other software engineers have formulated the main objectives for software quality
metrics, which are presented below.
Requirement | Explanation
Comprehensive | The metric should be applicable across a wide variety of implementations, projects, and situations.
Mutually Exclusive | Each metric should measure a distinct attribute to avoid overlap or duplication with other metrics.
B. Operative Requirements
Requirement | Explanation
Easy and Simple | Data collection for the metric should be straightforward and require minimal resources.
Does Not Require Independent Data Collection | Metrics should be integrated with existing project data collection systems (e.g., attendance, wages, cost accounting), improving efficiency and coordination across organizational systems.
Immune to Biased Interventions | Metrics should be designed to minimize manipulation by individuals attempting to influence results. This is achieved by careful metric selection and by establishing appropriate data collection procedures.
Because different programming languages require different numbers of code lines to perform the same task, KLOC is
only valid for systems developed in the same programming language or tool environment.
Function Points:
A language-independent measure estimating the amount of functionality delivered by the software
and the resources required to develop it.
(Further details in Appendix 21A.)
4. Exclusions
Metrics related to customer satisfaction are not included in this discussion, as they are extensively covered
in marketing literature rather than software engineering.
Low severity 42 1 42
Medium severity 17 3 51
High severity 11 9 99
Total 70 — 192
NCE 70 — —
WCE — — 192
Results
NCE (Number of Code Errors) = 70
WCE (Weighted Code Errors) = 192
This means that while 70 errors were found in total, when severity is considered, the weighted total impact
of those errors is 192, highlighting that high-severity errors have a much larger effect on software quality.
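A short sketch reproducing this calculation, using the severity classes and weights of the example above:

```python
# Reproduces the example above: 70 code errors weighted by severity give WCE = 192.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 9}
ERROR_COUNTS     = {"low": 42, "medium": 17, "high": 11}

NCE = sum(ERROR_COUNTS.values())                                     # unweighted error count
WCE = sum(SEVERITY_WEIGHTS[s] * n for s, n in ERROR_COUNTS.items())  # severity-weighted count

print(NCE, WCE)   # 70 192
```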
Table 21.1 – Error Density Metrics (Key Measures and Definitions)
Code | Name | Calculation formula
Key:
1. NCE = number of code errors detected in the software code by code inspections and testing. Data for this
measure are culled from code inspection and testing reports.
2. KLOC = thousands of lines of code.
3. NDE = total number of development (design and code) errors detected in the software development process. Data
for this measure are found in the various design and code reviews and testing reports conducted.
4. WCE= weighted code errors detected. The sources of data for this metric are the same as those for NCE.
5. WDE = total weighted development (design and code) errors detected in development of the software.
The sources of data for this metric are the same as those for NDE.
6. NFP = number of function points required for development of the software. Sources for the number of
function points are professional surveys of the relevant software.
Example 2. This example follows Example 1 and introduces the factor of weighted measures so as
to demonstrate the implications of their use. A software development department applies two alternative
metrics for calculation of code error density: CED and WCED. The unit determined the following
indicators for unacceptable software quality: CED > 2 and WCED > 4. For our calculations we apply
the three classes of quality and their relative weights and the code error summary for the Atlantis project
mentioned in Example 1. The software system size is 40 KLOC. Calculation of the two metrics resulted
in the following:
Measures and metrics | Calculation of CED (Code Error Density) | Calculation of WCED (Weighted Code Error Density)
NCE | 70 | —
WCE | — | 192
KLOC | 40 | 40
Error density metric | CED = 70 / 40 = 1.75 | WCED = 192 / 40 = 4.8
The conclusions reached after application of the unweighted versus weighted metrics are different.
While the CED does not indicate quality below the acceptable level, the WCED metric does indicate
quality below the acceptable level (in other words, if the error density is too high, the unit’s quality
is not acceptable), a result that calls for management intervention.
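A short sketch reproducing Example 2, including the unacceptable-quality indicators CED > 2 and WCED > 4:

```python
# Example 2 in code: error density metrics for the 40 KLOC Atlantis project.
NCE, WCE, KLOC = 70, 192, 40

CED  = NCE / KLOC     # 1.75 -- below the 2.0 indicator, so no alarm
WCED = WCE / KLOC     # 4.8  -- above the 4.0 indicator, so quality is unacceptable

print(f"CED  = {CED:.2f} -> {'unacceptable' if CED > 2 else 'acceptable'}")
print(f"WCED = {WCED:.2f} -> {'unacceptable' if WCED > 4 else 'acceptable'}")
```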
Error severity metrics
The metrics belonging to this group are used to detect adverse situations of increasing numbers of severe
errors in situations where errors and weighted errors, as measured by error density metrics, are generally
decreasing. Two error severity metrics are presented in Table 21.2.
Key:
MSOT = milestones completed on time.
MS = total number of milestones.
TCDAM = Total Completion Delays (days, weeks, etc.) for All Milestones.
To calculate this measure, delays reported for all relevant milestones are summed up. Milestones
completed on time or before schedule are considered “0” delays. Some professionals refer to completion of
milestones before schedule as “minus” delays. These are considered to balance the effect of accounted-for delays
(we might call the latter “plus” delays). In these cases, the value of the ADMC may be lower than the value
obtained according to the metric originally suggested.
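The timetable metric formulas themselves are not reproduced in these notes; a common formulation consistent with the key above (treated here as an assumption) is TMSOT = MSOT / MS and ADMC = TCDAM / MS. A minimal sketch:

```python
# Sketch under the assumption TMSOT = MSOT / MS and ADMC = TCDAM / MS (formulas not shown above).
# delays: completion delay per milestone; early completions appear as negative "minus" delays.
delays = [0, 3, 0, 7, -2, 0]     # hypothetical per-milestone delays, in days

MS    = len(delays)                          # total number of milestones
MSOT  = sum(1 for d in delays if d <= 0)     # milestones completed on time (or early)
TCDAM = sum(max(d, 0) for d in delays)       # total completion delays; "minus" delays counted as 0

TMSOT = MSOT / MS      # fraction of milestones completed on time
ADMC  = TCDAM / MS     # average delay per milestone

print(f"TMSOT = {TMSOT:.2f}, ADMC = {ADMC:.2f} days")
```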
Key:
NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during a year of maintenance service.
Key:
■ DevH = total working hours invested in the development of the software system.
■ ReKLOC = number of thousands of reused lines of code.
■ ReDoc = number of reused pages of documentation.
■ NDoc = number of pages of documentation.
attracted for its development. In most cases, the software developer is required to provide customer
service during the software’s operational phase. Customer services are of two main types:
■ Help desk services (HD) – software support by instructing customers regarding the method of
application of the software and solution of customer implementation problems. Demand for these
services depends to a great extent on the quality of the user interface (its “user friendliness”) as well
as the quality of the user manual and integrated help menus.
■ Corrective maintenance services – correction of software failures identified by customers/users or
detected by the customer service team prior to their discovery by customers. The number of software
failures and their density are directly related to software development quality. For completeness of
information and better control of failure correction, it is recommended that all software failures
detected by the customer service team be recorded as corrective maintenance calls.
1. HD calls density metrics – the extent of customer requests for HD services as measured by the number
of calls.
2. Metrics of the severity of the HD issues raised.
3. HD success metrics – the level of success in responding to these calls. A success is achieved by
completing the required service within the time determined in the service contract.
HD calls density metrics
This section describes six different types of metrics. Some relate to the number of calls and
others to a weighted number of calls. As for size/volume measures of the software, some use number of
lines of code while others apply function points. The sources of data for these and the other metrics in this
group are HD reports. Three HD calls density metrics for HD performance are presented in Table 21.6.
Severity of HD calls metrics
The metrics belonging to this group of measures aim at detecting one type of adverse situation:
increasingly severe HD calls. The computed results may contribute to improvements in all or parts of
the user interface (its “user friendliness”) as well as the user manual and integrated help menus.
Key:
■ NHYC = number of HD calls during a year of service.
■ KLMC = thousands of lines of maintained software code.
■ WHYC = weighted HD calls received during one year of service.
■ NMFP = number of function points to be maintained.
We have selected one metric from this group to demonstrate how the entire category is employed. This
metric, the Average Severity of HD Calls (ASHC), refers to calls received during a period of one
year (or any portion thereof, as appropriate):
ASHC = WHYC / NHYC
where WHYC and NHYC are defined as in Table 21.6.
Success of the HD services
The most common metric for the success of HD services is the capacity to solve problems raised by
customer calls within the time determined in the service contract (availability). Thus, the metric for
success of HD services compares the actual with the designated time for provision of these services. For
example, the availability of help desk (HD) services for an inventory management software package is
defined as follows:
■ The HD service undertakes to solve any HD call within one hour.
■ The probability that HD call solution time exceeds one hour will not exceed 2%.
■ The probability that HD call solution time exceeds four working hours will not exceed 0.5%.
One metric of this group is suggested here, HD Service Success (HDS):
HDS = NHYOT / NHYC
where NHYOT = number of HD calls per year completed on time during one year of service.
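A minimal sketch computing ASHC and HDS from hypothetical yearly help-desk figures:

```python
# Hypothetical yearly help-desk figures for illustration.
NHYC  = 500   # HD calls received during the year
WHYC  = 920   # the same calls, weighted by severity
NHYOT = 480   # calls resolved within the contractual time

ASHC = WHYC / NHYC     # average severity of HD calls
HDS  = NHYOT / NHYC    # share of calls solved on time (service success)

print(f"ASHC = {ASHC:.2f}, HDS = {HDS:.1%}")
```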
Two productivity metrics are defined (Table 21.7):
Metric | Full Name | Formula
HDP | Help Desk Productivity | HDP = HDYH / KLMC
FHDP | Function-Point Help Desk Productivity | FHDP = HDYH / NMFP
Key Variables:
HDYH – Total yearly working hours invested in HD servicing.
KLMC – Thousands of lines of maintained code.
NMFP – Number of function points for the maintained software.
Interpretation:
A lower HDP or FHDP indicates higher productivity, as fewer hours are required per software unit.
These metrics can compare productivity across years or different software systems.
B. HD Effectiveness Metrics
Definition:
Effectiveness metrics relate to the average amount of effort invested per customer HD call.
Common Metric:
Metric | Formula
HDE (Help Desk Effectiveness) | HDE = HDYH / NHYC
Key Variables:
HDYH – Total yearly HD working hours.
NHYC – Number of yearly HD customer calls.
Interpretation:
A lower HDE value means greater effectiveness — the help desk resolves customer calls more
efficiently.
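A minimal sketch of the HD productivity and effectiveness metrics, using hypothetical yearly figures:

```python
# Hypothetical yearly help-desk workload figures for illustration.
HDYH = 1200    # yearly working hours invested in HD servicing
KLMC = 300     # thousands of lines of maintained code
NMFP = 1500    # function points of the maintained software
NHYC = 500     # yearly HD customer calls

HDP  = HDYH / KLMC    # hours per KLOC of maintained code (lower = more productive)
FHDP = HDYH / NMFP    # hours per function point
HDE  = HDYH / NHYC    # hours per customer call (lower = more effective)

print(f"HDP = {HDP:.2f}, FHDP = {FHDP:.2f}, HDE = {HDE:.2f}")
```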
B. Software System Failures Density Metrics (Table 21.8)
These metrics measure how many failures occur in a defined period (usually one year), relative to software
size.
Metric | Full Name | Formula
SSFD | Software System Failure Density | SSFD = NYF / KLMC
WSSFD | Weighted Software System Failure Density | WSSFD = WYF / KLMC
WSSFF | Weighted Software System Failures per Function Point | WSSFF = WYF / NMFP
Key Variables:
NYF – Number of software failures detected per year.
WYF – Weighted number of yearly failures (adjusted for severity).
KLMC – Thousands of maintained lines of code.
NMFP – Maintained software’s function points.
Interpretation:
Higher SSFD/WSSFD/WSSFF values indicate poorer quality (more failures per unit of software).
Weighted metrics better reflect the true impact of failures.
C. Software System Failures Severity Metric
To detect trends toward more severe failures, the following metric is used:
Metric | Full Name | Formula
ASSSF | Average Severity of Software System Failures | ASSSF = WYF / NYF
Interpretation:
Higher ASSSF = More severe failures on average.
Useful for identifying adverse quality trends, such as fewer but more critical failures.
Can trigger retesting or reinspection of the affected modules.
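A minimal sketch computing the failure density and severity metrics from hypothetical yearly figures:

```python
# Hypothetical yearly failure figures for a maintained system, for illustration.
NYF  = 24      # software failures detected during the year
WYF  = 75      # the same failures, weighted by severity
KLMC = 300     # thousands of lines of maintained code
NMFP = 1500    # function points of the maintained software

SSFD  = NYF / KLMC    # failures per KLOC
WSSFD = WYF / KLMC    # weighted failures per KLOC
WSSFF = WYF / NMFP    # weighted failures per function point
ASSSF = WYF / NYF     # average severity per failure

print(f"SSFD = {SSFD:.3f}, WSSFD = {WSSFD:.3f}, WSSFF = {WSSFF:.3f}, ASSSF = {ASSSF:.2f}")
```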
Lower CMaiP / FCMP → Higher productivity, since fewer hours are needed per software unit.
B. Corrective Maintenance Effectiveness Metric
Metric | Formula
CMaiE | CMaiE = CMaiYH / NYF
Key Variables:
CMaiYH = Total yearly maintenance hours.
NYF = Number of failures corrected during the year.
Interpretation:
Lower CMaiE → Higher effectiveness (less effort required per fix).
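A minimal sketch of the corrective maintenance metrics; CMaiE follows the formula above, while the productivity formulas CMaiP = CMaiYH / KLMC and FCMP = CMaiYH / NMFP are assumptions (their table is not reproduced in these notes):

```python
# CMaiE follows the table above; CMaiP and FCMP are assumed formulas. Figures are hypothetical.
CMaiYH = 800     # yearly corrective-maintenance working hours
NYF    = 24      # failures corrected during the year
KLMC   = 300     # thousands of lines of maintained code
NMFP   = 1500    # function points of the maintained software

CMaiE = CMaiYH / NYF     # hours per corrected failure (lower = more effective)
CMaiP = CMaiYH / KLMC    # hours per KLOC maintained (assumed formula)
FCMP  = CMaiYH / NMFP    # hours per function point (assumed formula)

print(f"CMaiE = {CMaiE:.1f}, CMaiP = {CMaiP:.2f}, FCMP = {FCMP:.2f}")
```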
21.5 Implementation of Software Quality Metrics
To ensure software quality metrics are useful and effective, an organization must do more than just define
them — it must systematically implement, monitor, and refine their use over time.
Key Steps in Implementation
1. Definition of relevant software quality metrics: Metrics must be relevant and adequate for the
needs of teams, departments, and projects.
2. Regular application: Metrics should be collected and analyzed on a routine basis (per project, per
quarter, etc.).
3. Statistical analysis: Collected metrics data should be statistically analyzed to reveal patterns,
deviations, and opportunities for improvement.
4. Follow-up and corrective actions: Based on the analysis, the organization may:
o Adjust organizational methods or procedures in software development or maintenance.
o Modify metrics or data collection processes.
o Use findings to plan and execute corrective or preventive actions across relevant units.
Example: Nokia’s experience (Kilpi, 2001) demonstrated the technical side of applying metrics but did not
elaborate on their use in managerial decision-making (e.g., productivity, effectiveness).
21.5.1 Definition of New Software Quality Metrics
Defining new (or improved) software quality metrics involves a four-stage process:
Stage 1: Define Attributes to be Measured
Identify what aspects need to be measured, such as:
o Software quality
o Development team productivity
o Maintenance effectiveness, etc.
Stage 2: Define the Metric and Validate Its Adequacy
Design metrics that accurately measure the chosen attributes.
Confirm that each metric meets the general and operative requirements (see Frame 21.2):
o Relevant, valid, reliable, comprehensive, and mutually exclusive
o Easy to apply, unbiased, and integrated with existing systems
Stage 3: Determine Comparative Target Values
Set benchmark or target values for each metric, based on:
o Industry standards
o Previous year’s achievements
o Past project performance
o Organizational goals
These targets act as reference indicators to evaluate compliance or improvement.
Stage 4: Define the Metrics Application Process
Specify how metrics will be reported and collected, including:
o Reporting method (who reports, when, how frequently)
o Data collection method (automatic tools, manual records, integrated systems)
Metrics should be updated as the organization evolves and as data analysis suggests refinements.
An interesting application of software quality metrics for comparison of national software industries
is presented in the following example.
Example – Comparison of US and Japanese software industries Cusumano (1991) makes use of three
metrics in a comparison of the US and Japanese software industries:
Mean productivity
Failure density (based on measurements during the first 12 months after system delivery)
Code reuse.
These metrics are presented in Table 21.11, and Cusumano’s results are presented in Table 21.12.
Interpretation:
Japanese companies demonstrated higher productivity, lower failure density, and greater code reuse —
indicating better software quality and efficiency practices overall.
3. Types of Statistical Analysis
Metrics data can be analyzed using two major approaches:
A. Descriptive Statistics
Used for summarizing and visualizing data to reveal trends, patterns, and anomalies.
Common Tools & Techniques:
Mean, median, mode
Histograms
Cumulative distribution graphs
Pie charts
Control charts (often showing indicator or target values)
Purpose:
Quickly identify trends (e.g., improvement or degradation in quality)
Detect deviations from target values
Flag situations that may require corrective or preventive actions
Limitations:
Descriptive statistics do not test significance — i.e., they don’t tell whether trends are due to actual
improvement or just random variation.
B. Analytical (Inferential) Statistics
Used to test the significance of observed differences or changes in metrics data — determining whether
results reflect real changes or random fluctuations.
Common Analytical Tools:
T-test – compares two averages (e.g., before and after process change)
Chi-square test – tests relationships between categorical variables
Regression analysis – examines how one factor influences another (e.g., tool adoption vs. error rates)
Analysis of variance (ANOVA) – compares means among multiple groups or projects
Purpose:
Validate that observed trends are statistically significant
Support data-driven decisions about process improvements
Challenge:
Applying analytical statistics to software performance metrics can be difficult due to:
o The complexity of software systems
o The many interrelated factors influencing quality (tools, teams, design, etc.)
For deeper understanding, further reading in statistical analysis and SQA research is recommended.
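As an illustration of the inferential approach, the following sketch applies a two-sample t-test (via scipy.stats) to hypothetical error-density samples taken before and after a process change:

```python
# Hypothetical error-density samples (errors/KLOC) from projects before and after a process change.
from scipy import stats

before = [2.1, 1.8, 2.4, 2.0, 2.3, 1.9]
after  = [1.6, 1.5, 1.9, 1.4, 1.7, 1.6]

# Two-sample t-test: is the drop in mean error density statistically significant?
t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The improvement is unlikely to be random variation.")
else:
    print("The observed difference may be random variation.")
```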
4. Overall Importance
Statistical analysis transforms metrics from mere numbers into decision-making tools.
When applied correctly, it helps organizations:
Validate the effectiveness of process changes
Identify areas needing improvement
Ensure objective performance assessment
Support a culture of continuous quality improvement
21.5.4 Taking Action in Response to Metrics Analysis Results
Purpose
Once metrics data has been analyzed, organizations must take practical actions to address findings,
improve processes, and maintain quality performance.
Types of Actions
Metrics-driven actions can be classified into two main types:
1. Direct Actions
Initiated by project or team management based on metrics results from their own unit.
Examples include:
Reorganization of teams or processes
Changes in software development or maintenance methods (e.g., adopting new tools, refining
testing approaches)
Revision of metrics themselves to improve their relevance or accuracy
These actions are typically local and immediate, aimed at addressing identified weaknesses or reinforcing
effective practices.
2. Indirect Actions
Initiated by the Corrective Action Board (CAB) — a central quality oversight body.
CAB actions are based on aggregated analysis of metrics data from multiple projects or departments.
They typically lead to organization-wide process changes, such as new standards, training programs,
or updates to quality procedures.
Detailed discussion of CAB’s role is provided in Chapter 17.
Summary:
Metrics → Analysis → Direct or Indirect Actions → Continuous Improvement Cycle
(4) Code reuse: higher reuse → fewer defects, fewer help desk calls (↓ NYF)
5. Consequences
Because these factors distort metrics results:
Many metrics fail to reflect true quality or productivity
Comparison between teams or systems can be misleading
Decision-making based solely on metrics can lead to incorrect conclusions
Thus, while metrics are valuable, they must always be interpreted contextually, not mechanically.
6. Future Directions and Improvements
Substantial research and innovation are needed to design better software-specific metrics.
One major improvement is the Function Point method, which:
Measures functionality delivered to the user, not just code volume.
Is less dependent on programming language or style.
Offers a more reliable and consistent measure of development effort.
(A detailed discussion of the Function Point method appears in Appendix 21A.)