
Software Metrics

1. What are software metrics?


Software process and project metrics are quantitative measures that enable you to
gain insight into the efficacy of the software process and the projects that are
conducted using the process as a framework. Basic quality and productivity data are
collected. These data are then analyzed, compared against past averages, and assessed
to determine whether quality and productivity improvements have occurred. Metrics
are also used to pinpoint problem areas so that remedies can be developed and the
software process can be improved.
2. Measure
A measure is a direct, quantifiable observation of a specific attribute; it is the raw
data point. For example:
• Number of lines of code written
• Time taken to complete a task
• Number of defects found in testing
• Memory usage of the application
3. Metrics
A metric is a derived value or calculation based on one or more measures. It provides
context and meaning to the raw data. For example:
• Defect density: Number of defects found per 1000 lines of code (measures: defects,
lines of code)
• Memory efficiency: Memory usage per user or transaction (measures: memory
usage, users/transactions)
4. Indicators
An indicator is a higher-level interpretation of one or more metrics. It provides insight
into progress toward a specific goal or objective. For example:
• Software quality: An indicator of software quality might be based on a combination
of metrics like defect density, test coverage, and customer satisfaction.
• System performance: An indicator of system performance could be based on metrics
like response time, throughput, and error rates.
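As an illustrative sketch of the measure-metric-indicator chain (all values, the threshold, and the variable names below are invented for the example), the relationships can be expressed directly in code:

# Measures: raw, directly observed data points (hypothetical values).
defects_found = 42          # number of defects found in testing
lines_of_code = 12_500      # size of the code base
memory_bytes = 512_000_000  # memory usage of the application
active_users = 2_000        # users served

# Metrics: derived values that give the raw measures context.
defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
memory_per_user = memory_bytes / active_users             # bytes per user

# Indicator: a higher-level interpretation against an assumed quality goal.
QUALITY_THRESHOLD = 5.0  # illustrative target: fewer than 5 defects per KLOC
quality_indicator = "acceptable" if defect_density < QUALITY_THRESHOLD else "needs attention"

print(f"Defect density: {defect_density:.2f} defects/KLOC -> {quality_indicator}")
print(f"Memory efficiency: {memory_per_user / 1e6:.2f} MB per user")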
5. Characteristics of Software Metrics
➢ Quantifiable: Software metrics provide numerical values that can be measured and
analyzed.
➢ Objective: They are based on factual data and avoid subjective interpretation.
➢ Meaningful: Metrics should be relevant to the goals and objectives of the
software project.
➢ Actionable: They should provide insights that can be used to improve the
software development process or product.
6. Types of Software Metrics
Product Metrics: These metrics assess the characteristics of the software product itself,
such as size, complexity, quality, and performance. Examples include lines of code,
defect density, and response time.
Process Metrics: These metrics evaluate the effectiveness and efficiency of the
software development process, such as effort, time, and cost. Examples include
development time, cost per feature, and team velocity.
Project Metrics: These metrics track the progress and performance of the software
project as a whole, such as schedule adherence, budget variance, and milestone
completion.
7. Examples of Software Metrics
Lines of Code (LOC): Measures the size of the software program.
Cyclomatic Complexity: Measures the complexity of the code's control flow.
Function Points: Estimates the functionality of the software based on user
requirements.
Team Velocity: Amount of work completed by the team in a sprint.
Customer Satisfaction: Measures user happiness with the software product.
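To make one of these examples concrete, cyclomatic complexity for a single routine is commonly computed as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control flow graph. A minimal Python sketch (the graph figures are invented):

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    # V(G) = E - N + 2P; P = 1 for a single routine's control flow graph.
    return edges - nodes + 2 * components

# Hypothetical routine with 9 edges and 7 nodes -> V(G) = 4,
# i.e., four linearly independent paths that tests should cover.
print(cyclomatic_complexity(edges=9, nodes=7))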
8. Size-Oriented Metrics
Size-oriented software metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced. In order to
develop metrics that can be assimilated with similar metrics from other projects, you
can choose lines of code as a normalization value. From rudimentary data recorded for
each project (such as errors, defects, cost, pages of documentation, and lines of code),
a set of simple size-oriented metrics can be developed:
• Errors per KLOC (thousand lines of code)
• Defects per KLOC
• Cost per KLOC
• Pages of documentation per KLOC
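A minimal sketch of how these size-oriented metrics fall out of the raw project measures (the project figures below are made up for illustration):

# Hypothetical raw measures collected for one completed project.
loc = 24_000       # lines of code delivered
errors = 96        # errors found before release
defects = 30       # defects reported after release
cost = 168_000     # total project cost (currency units)
doc_pages = 720    # pages of documentation produced

kloc = loc / 1000
print(f"Errors per KLOC:              {errors / kloc:.2f}")
print(f"Defects per KLOC:             {defects / kloc:.2f}")
print(f"Cost per KLOC:                {cost / kloc:.2f}")
print(f"Documentation pages per KLOC: {doc_pages / kloc:.2f}")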
9. Function-Oriented Metrics
Function-oriented software metrics use a measure of the functionality delivered by
the application as a normalization value. The most widely used function-oriented
metric is the function point (FP). Computation of the function point is based on
characteristics of the software’s information domain and complexity.
Proponents claim that FP is programming language independent, making it ideal for
applications using conventional and nonprocedural languages, and that it is based on
data that are more likely to be known early in the evolution of a project, making FP
more attractive as an estimation approach. Opponents claim that the method requires
some “sleight of hand” in that computation is based on subjective rather than
objective data, that counts of the information domain (and other dimensions) can be
difficult to collect after the fact, and that FP has no direct physical meaning—it’s just a
number.
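The usual adjusted function point computation combines a weighted count of the information domain values with fourteen value adjustment factors. The sketch below follows the commonly cited formula FP = count_total * (0.65 + 0.01 * sum(Fi)); every input number is invented for illustration:

def function_points(count_total: float, adjustment_factors: list[int]) -> float:
    # FP = count_total * (0.65 + 0.01 * sum(Fi)), where count_total is the
    # weighted sum of the information domain values (inputs, outputs,
    # inquiries, internal files, external interface files) and each Fi is
    # rated from 0 (no influence) to 5 (essential).
    return count_total * (0.65 + 0.01 * sum(adjustment_factors))

# Hypothetical case: weighted domain count of 320, fourteen factors rated 3.
fi = [3] * 14
print(f"Adjusted FP: {function_points(320, fi):.1f}")  # 320 * 1.07 = 342.4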
10. Object-Oriented Metrics
Number of scenario scripts: A scenario script (analogous to a use case) is a detailed
sequence of steps that describes the interaction between the user and the application.
Number of key classes: Key classes are the “highly independent components” that are
defined early in object-oriented analysis. Because key classes are central to the
problem domain, the number of such classes is an indication of the amount of effort
required to develop the software and also an indication of the potential amount of
reuse to be applied during system development.
Number of support classes: Support classes are required to implement the system but
are not immediately related to the problem domain. Examples might be user interface
(GUI) classes, database access and manipulation classes, and computation classes. In
addition, support classes can be developed for each of the key classes. Support classes
are defined iteratively throughout an evolutionary process. The number of support
classes is an indication of the amount of effort required to develop the software and
also an indication of the potential amount of reuse to be applied during system
development.
Average number of support classes per key class: In general, key classes are known
early in the project. Support classes are defined throughout. If the average number of
support classes per key class were known for a given problem domain, estimating
(based on the total number of classes) would be greatly simplified.
Number of subsystems: A subsystem is an aggregation of classes that support a
function that is visible to the end user of a system. Once subsystems are identified, it
is easier to lay out a reasonable schedule in which work on subsystems is partitioned
among project staff.
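If the average number of support classes per key class is known for a problem domain, an early size and effort estimate follows almost mechanically. The sketch below uses an invented ratio and an assumed effort-per-class figure purely to illustrate the arithmetic:

def estimate_total_classes(key_classes: int, support_per_key: float) -> float:
    # Total classes = key classes plus the support classes implied by the
    # historical support-per-key-class ratio for this problem domain.
    return key_classes * (1 + support_per_key)

# Hypothetical history: 2.5 support classes per key class, and an assumed
# average of 20 person-days of effort per class.
total = estimate_total_classes(key_classes=16, support_per_key=2.5)
print(f"Estimated classes: {total:.0f}, estimated effort: {total * 20:.0f} person-days")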
11. WebApp Project Metrics
Number of static Web pages: Web pages with static content (i.e., the end user has no
control over the content displayed on the page) are the most common of all WebApp
features. These pages represent low relative complexity and generally require less
effort to construct than dynamic pages. This measure provides an indication of the
overall size of the application and the effort required to develop it.
Number of dynamic Web pages: Web pages with dynamic content (i.e., end-user
actions or other external factors result in customized content displayed on the page)
are essential in all e-commerce applications, search engines, financial applications, and
many other WebApp categories. These pages represent higher relative complexity and
require more effort to construct than static pages. This measure provides an indication
of the overall size of the application and the effort required to develop it.
Number of internal page links: Internal page links are pointers that provide a hyperlink
to some other Web page within the WebApp. This measure provides an indication of
the degree of architectural coupling within the WebApp. As the number of page links
increases, the effort expended on navigational design and construction also increases.
Number of persistent data objects: One or more persistent data objects (e.g., a
database or data file) may be accessed by a WebApp. As the number of persistent data
objects grows, the complexity of the WebApp also grows and the effort to implement
it increases proportionally.
Number of external systems interfaced: WebApps must often interface with
“backroom” business applications. As the requirement for interfacing grows, system
complexity and development effort also increase.
Number of static content objects: Static content objects encompass static text-based,
graphical, video, animation, and audio information that are incorporated within the
WebApp. Multiple content objects may appear on a single Web page.
Number of dynamic content objects: Dynamic content objects are generated based on
end-user actions and encompass internally generated text-based, graphical, video,
animation, and audio information that are incorporated within the WebApp. Multiple
content objects may appear on a single Web page.
Number of executable functions: An executable function (e.g., a script or applet)
provides some computational service to the end user. As the number of executable
functions increases, the modeling and construction effort also increases.
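These measures are easy to gather into a per-project record. The sketch below is one way to hold them; the crude weighted size score is purely this example's own assumption, not a standard WebApp sizing model:

from dataclasses import dataclass

@dataclass
class WebAppMeasures:
    static_pages: int
    dynamic_pages: int
    internal_links: int
    persistent_data_objects: int
    external_interfaces: int
    static_content_objects: int
    dynamic_content_objects: int
    executable_functions: int

    def rough_size_score(self) -> float:
        # Illustrative weights only: dynamic elements and interfaces are
        # assumed to contribute more to size and effort than static ones.
        return (self.static_pages
                + 2.0 * self.dynamic_pages
                + 0.5 * self.internal_links
                + 3.0 * self.persistent_data_objects
                + 4.0 * self.external_interfaces
                + 0.5 * self.static_content_objects
                + 2.0 * self.dynamic_content_objects
                + 3.0 * self.executable_functions)

app = WebAppMeasures(40, 25, 180, 3, 2, 120, 30, 15)
print(f"Rough size score: {app.rough_size_score():.1f}")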
12. Software Quality
Most software developers will agree that high-quality software is an important goal.
But how do we define software quality? In the most general sense, software quality
can be defined as: An effective software process applied in a manner that creates a
useful product that provides measurable value for those who produce it and those
who use it.
13. McCall’s Quality Factors
McCall, Richards, and Walters propose a useful categorization of factors that affect
software quality. These software quality factors focus on three important aspects of a
software product: its operational characteristics, its ability to undergo change, and its
adaptability to new environments.
McCall and his colleagues provide the following descriptions of these factors:
Correctness: The extent to which a program satisfies its specification and fulfills the
customer’s mission objectives.
Reliability: The extent to which a program can be expected to perform its intended
function with required precision.
Efficiency: The amount of computing resources and code required by a program to
perform its function.
Integrity: Extent to which access to software or data by unauthorized persons can be
controlled.
Usability: Effort required to learn, operate, prepare input for, and interpret output of
a program.
Maintainability: Effort required to locate and fix an error in a program.
Flexibility: Effort required to modify an operational program.
Testability: Effort required to test a program to ensure that it performs its intended
function.
Portability: Effort required to transfer the program from one hardware and/or software
system environment to another.
Reusability: Extent to which a program [or parts of a program] can be reused in other
applications—related to the packaging and scope of the functions that the program
performs.
Interoperability: Effort required to couple one system to another.
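In practice each quality factor is typically graded as a weighted combination of lower-level metrics, Fq = sum(ci * mi). The weights and metric scores in the sketch below are illustrative assumptions, not values taken from McCall's work:

def quality_factor(weights: dict[str, float], scores: dict[str, float]) -> float:
    # Fq = sum(c_i * m_i): combine normalized metric scores (0..1) using
    # weights that reflect how strongly each metric influences the factor.
    return sum(weights[name] * scores[name] for name in weights)

# Hypothetical grading of maintainability from three lower-level metrics.
weights = {"modularity": 0.4, "self_documentation": 0.3, "simplicity": 0.3}
scores = {"modularity": 0.8, "self_documentation": 0.6, "simplicity": 0.7}
print(f"Maintainability grade: {quality_factor(weights, scores):.2f}")  # 0.71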
14. Establishing a Baseline
By establishing a metrics baseline, benefits can be obtained at the process, project,
and product (technical) levels. Yet the information that is collected need not be
fundamentally different. The same metrics can serve many masters. The metrics
baseline consists of data collected from past software development projects and can
range from a simple table of measures to a comprehensive database containing dozens
of project measures and the metrics derived from them. To be an effective aid in process
improvement and/or cost and effort estimation, baseline data must have the following
attributes:
(1) data must be reasonably accurate; “guesstimates” about past projects are to be
avoided,
(2) data should be collected for as many projects as possible,
(3) measures must be consistent (for example, a line of code must be interpreted
consistently across all projects for which data are collected),
(4) applications should be similar to work that is to be estimated—it makes little sense
to use a baseline for batch information systems work to estimate a real-time,
embedded application.
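A minimal sketch of such a baseline (project names and figures are fabricated for illustration): collect consistent measures from past projects, derive per-project metrics, and use their averages to estimate a similar new project.

# Hypothetical baseline of consistently measured past projects.
past_projects = [
    {"name": "alpha", "loc": 12_000, "errors": 60,  "effort_pm": 24},
    {"name": "beta",  "loc": 27_000, "errors": 121, "effort_pm": 62},
    {"name": "gamma", "loc": 20_000, "errors": 86,  "effort_pm": 43},
]

def per_kloc(value: float, loc: int) -> float:
    return value / (loc / 1000)

errors_per_kloc = [per_kloc(p["errors"], p["loc"]) for p in past_projects]
effort_per_kloc = [per_kloc(p["effort_pm"], p["loc"]) for p in past_projects]

baseline_error_rate = sum(errors_per_kloc) / len(errors_per_kloc)
baseline_effort_rate = sum(effort_per_kloc) / len(effort_per_kloc)

# Apply the baseline to a similar new project of roughly 18 KLOC.
new_kloc = 18
print(f"Expected errors: {baseline_error_rate * new_kloc:.0f}")
print(f"Expected effort: {baseline_effort_rate * new_kloc:.0f} person-months")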
