MODULE 5
13.1 Introduction
Quality Consideration: When defining the objectives of the project, some objectives may
explicitly relate to product quality.
Quality Consideration: Within this step, installation standards are identified (Activity 2.2).
These standards play a direct role in ensuring that the project's quality requirements are met
during the setup and operation phases.
Quality Consideration: This step includes analyzing the specific characteristics of the project
and identifying quality issues.
Quality Consideration: At this stage, the entry, exit, and process requirements for each
activity are identified. These requirements define the quality benchmarks that need to be
met before an activity begins or concludes.
2. Intangibility of Software:
Unlike physical goods, software lacks tangible aspects, making it harder to
verify the satisfactory completion of tasks.
To address this, developers should produce deliverables—documents,
prototypes, or tested modules—that provide evidence of progress and
quality.
Definition of Software Quality: Software quality refers to the degree to which a software product
meets its specified requirements and satisfies the needs and expectations of its users.
In the context of software engineering, quality is often divided into two categories:
External Quality: This is the quality viewed by users, which includes usability, functionality,
and performance.
Internal Quality: This refers to the characteristics that developers care about, such as code
structure, maintainability, and reliability.
1. Reliability:
o Reliability refers to the ability of the software to perform its intended functions
without failure over a specified period of time.
Failure on Demand: The likelihood that the system will fail when required or
during critical operations.
2. Maintainability:
o Maintainability measures how easy it is to modify the software after it has been
deployed, including fixing faults and making enhancements.
Analysability: The ease with which the causes of failures can be diagnosed.
3. Usability:
o Usability measures how easy and user-friendly the software is for end-users.
o This includes the software's ease of learning, its interface design, and how intuitive it
is for users.
4. Efficiency:
o Efficiency refers to the ability of the software to perform its tasks using the least
amount of resources, such as memory, processing time, and bandwidth.
o A software product is efficient if it performs well within the given hardware and
software environment constraints.
5. Portability:
o Portability is the ease with which the software can be transferred from one hardware or software environment to another.
6. Scalability:
o Scalability is the ability of the software to handle growing amounts of work or its
potential to be expanded to accommodate growth. This includes the software's
ability to scale up (handle larger volumes) or scale out (work across multiple
machines).
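As a small illustration of how a reliability attribute such as failure on demand can be quantified, the sketch below computes two common reliability measures, probability of failure on demand and availability. All of the input numbers are illustrative assumptions, not figures from the text:

```python
def pofod(failures: int, demands: int) -> float:
    """Probability of Failure On Demand: the fraction of service
    requests on which the system failed."""
    return failures / demands

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is operational:
    MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Assumed figures for a hypothetical system.
print(pofod(failures=2, demands=1000))                  # 0.002
print(availability(mttf_hours=998.0, mttr_hours=2.0))   # 0.998
```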
Software quality models are frameworks used to characterize and measure the quality of software
through various attributes.
Well-Known Models:
David Garvin proposed eight general attributes to define the quality of any product:
Performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality.
Dromey's Model
High-level properties:
1. Correctness: Attributes that determine whether the software delivers its required functions free of defects.
2. Internal Characteristics: Attributes like structure and design that affect development
and performance.
3. Contextual Characteristics: How well the software fits into its operating
environment.
4. Descriptive Properties: Attributes like clarity and documentation for better
understanding.
Purpose: To show how achieving lower-level attributes can improve the overall quality of the
software.
Boehm's Model
Focus: Defining software quality from the perspective of end-user needs and usability.
High-level characteristics:
Purpose: To focus on practical user concerns while incorporating a broad range of attributes.
13.7 Product and Process Metrics
1. Product Metrics:
o Examples:
2. Process Metrics:
o Examples:
Review effectiveness.
o For example, an error in the specification stage found during testing affects all
intermediate stages.
To manage errors effectively, each step in the development process should include well-defined requirements:
1. Entry Requirements:
o Conditions that must be satisfied before an activity is allowed to start.
o Example: Test data and expected results must be prepared and approved
before testing starts.
2. Implementation Requirements:
o Conditions that must be maintained while the activity is being carried out.
o Example: During testing, when an error is fixed, all prior successful test cases
must be rerun to ensure no new issues are introduced.
3. Exit Requirements:
o Conditions to be met before a stage is considered complete.
o Example: For the testing phase to be deemed complete, all tests must run
successfully with no outstanding errors.
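The three kinds of requirements above can be pictured as gates around each stage. A minimal sketch for a testing stage, in which all function names and checks are hypothetical illustrations rather than a prescribed implementation:

```python
def entry_ok(test_data_prepared: bool, expected_results_approved: bool) -> bool:
    """Entry requirements: conditions that must hold before the stage starts."""
    return test_data_prepared and expected_results_approved

def implementation_ok(prior_tests_rerun_after_fix: bool) -> bool:
    """Implementation requirements: conditions maintained during the stage,
    e.g. rerunning all prior successful test cases after each fix."""
    return prior_tests_rerun_after_fix

def exit_ok(tests_passed: int, tests_total: int, open_errors: int) -> bool:
    """Exit requirements: conditions before the stage counts as complete."""
    return tests_passed == tests_total and open_errors == 0

print(entry_ok(True, True))                                        # True
print(exit_ok(tests_passed=98, tests_total=100, open_errors=1))    # False
print(exit_ok(tests_passed=100, tests_total=100, open_errors=0))   # True
```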
Software estimation is the process of predicting the cost, effort, time, or resources required
to complete a software project.
1. Delay estimation until late in the project: This approach would provide the most
accurate estimates, but it’s impractical since cost estimates must be provided
upfront.
2. Base estimates on similar past projects: This works when the current project is
similar to past ones, but the risk is that past experience doesn't always predict future
outcomes accurately.
3. Use decomposition techniques: Break the project into smaller functions or activities, estimate each piece individually, and combine the individual estimates into an overall estimate.
4. Use empirical models: These models rely on historical data and use parameters like
lines of code (LOC) or function points (FP) to generate estimates. The formula used is
d = f(vi), where d represents the estimated value (cost, effort, duration) and vi are
independent variables (e.g., LOC or FP).
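A typical empirical model of the form d = f(vi) relates effort to size through coefficients calibrated from historical data, often as effort = a x (size)^b. The sketch below uses illustrative coefficient values, not calibrated ones:

```python
def estimated_effort(kloc: float, a: float = 2.5, b: float = 1.05) -> float:
    """Estimated effort in person-months for a project of a given size
    in KLOC. The coefficients a and b are hypothetical placeholders;
    real models derive them from historical project data."""
    return a * (kloc ** b)

print(estimated_effort(1.0))   # 2.5 person-months for a 1-KLOC project
```

Because b is slightly greater than 1, effort grows a little faster than linearly with size, reflecting the coordination overhead of larger projects.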
Problem-based estimation techniques in software project management focus on the use of LOC (lines of code) and FP (function points).
1. Software Sizing:
o LOC and FP serve as baseline metrics derived from past projects to estimate cost and effort.
2. Process:
o LOC Estimation:
o FP Estimation:
4. Range-Based Estimation:
o Use optimistic, most likely, and pessimistic estimates for each function or domain.
o This beta probability distribution gives more weight to the most likely estimate.
5. Validation:
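For range-based estimation, the expected value is commonly taken as the beta-distribution weighted average EV = (optimistic + 4 x most likely + pessimistic) / 6, which gives the most likely estimate four times the weight of each extreme. A minimal sketch with illustrative LOC figures:

```python
def expected_value(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (beta distribution) estimate: the most likely value
    carries four times the weight of each extreme."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example: LOC estimate for one function (illustrative numbers).
print(expected_value(4600, 6900, 8600))   # 6800.0
```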
FP-based estimation can be used to determine the effort and cost of developing the CAD software, focusing on information domain values rather than software functions.
1. Process Decomposition:
o After identifying the functions and process activities, the effort (measured in
person-months or other units) required for each task is estimated. This forms
the central matrix of the estimation.
o Each task's effort estimate is influenced by factors like the complexity of the
task and the skill level required.
o Labor rates (cost per unit effort) are then applied to each task's estimated
effort. These rates can vary depending on the task, as senior staff members
are typically involved in earlier activities and are more expensive compared to
junior staff involved in later stages like construction and release.
o The costs and effort for each function and process activity are then
calculated.
4. Comparing Estimates:
o Tasks like customer communication, planning, and risk analysis are included
in the estimates, as shown in the total row at the bottom of the table.
o The engineering and construction release activities are broken down into
smaller tasks like requirements analysis, design, coding, and testing.
o The table summarizes the total effort for these tasks both horizontally (for
each function) and vertically (for each engineering activity).
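The tabular calculation described above, with effort estimated per function and per activity and multiplied by activity-specific labor rates, can be sketched as follows. The functions, activities, effort figures, and rates here are all hypothetical:

```python
# Hypothetical effort matrix: rows = functions, columns = activities.
# Cell values are person-months; rates are cost per person-month.
# Earlier activities (analysis, design) carry higher rates because
# more senior staff are typically involved.
rates = {"analysis": 8000, "design": 8000, "code": 6000, "test": 6000}

effort = {
    "UI control":  {"analysis": 1.0, "design": 2.0, "code": 0.5, "test": 3.5},
    "2D geometry": {"analysis": 2.5, "design": 4.0, "code": 1.5, "test": 7.0},
}

# Horizontal sum: total effort per function.
for func, row in effort.items():
    print(func, sum(row.values()))

# Total cost: each cell's effort times that activity's labor rate.
total_cost = sum(pm * rates[act]
                 for row in effort.values()
                 for act, pm in row.items())
print(total_cost)   # 151000.0
```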
5. Effort Distribution:
Estimation with use cases is a method used to estimate the size and effort required for a
software development project based on the use cases that describe the system's
functionality.
Level of Detail: The level of detail for each use case needs to be considered.
Historical Data: Use historical data from similar projects to determine the typical
number of lines of code (LOC) or function points (FP) associated with each use case
at different levels of abstraction.
Estimate Adjustments: Once you have the number of use cases, scenarios, and
pages, use adjustments based on these factors.
o Number of Scenarios (Sa): Use cases with more scenarios require proportionally more effort.
o Number of Pages (Pa): Longer use cases (more pages) also require more
effort.
26.6.8 An Example of Use-Case–Based Estimation
Let's say we are estimating a CAD system with three subsystem groups: User Interface,
Engineering, and Infrastructure. Each subsystem has a set of use cases.
For User Interface, there are 6 use cases with 10 scenarios each and an average
length of 6 pages.
For Engineering, there are 10 use cases with 20 scenarios each and an average length
of 8 pages.
For Infrastructure, there are 5 use cases with 6 scenarios each and an average length
of 5 pages.
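Using the counts above, a simple sizing pass multiplies each subsystem's use-case count by a historical average size per use case. The LOC-per-use-case figures below are hypothetical placeholders standing in for the historical data the text mentions:

```python
# Subsystem data from the example: use-case count, scenarios per
# use case, and average pages per use case.
subsystems = {
    "User Interface": {"use_cases": 6,  "scenarios": 10, "pages": 6},
    "Engineering":    {"use_cases": 10, "scenarios": 20, "pages": 8},
    "Infrastructure": {"use_cases": 5,  "scenarios": 6,  "pages": 5},
}

# Hypothetical historical averages: LOC per use case at each
# subsystem's level of abstraction (assumed values).
loc_per_use_case = {"User Interface": 800,
                    "Engineering": 3100,
                    "Infrastructure": 1650}

total_loc = 0
for name, data in subsystems.items():
    size = data["use_cases"] * loc_per_use_case[name]
    total_loc += size
    print(name, size)

print("total LOC estimate:", total_loc)   # 44050
```

In practice the per-use-case averages would also be adjusted for the scenario counts and page lengths recorded above, since longer, scenario-rich use cases imply more code.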
26.7 Empirical Estimation Models
The COCOMO II (COnstructive COst MOdel II) is an evolved and more comprehensive version of the
original COCOMO model introduced by Barry Boehm.
1. Hierarchy of COCOMO II Models:
COCOMO II is a hierarchy of estimation models that are applied at different stages of software
development:
Application Composition Model: Used in the early stages when user interface prototyping,
system interaction, performance assessment, and technology maturity evaluation are critical.
Early Design Stage Model: Applied once the requirements are stabilized, and the basic
software architecture is defined.
Post-Architecture Stage Model: Used during the construction of the software, once the architecture has been fully established.
COCOMO II models require sizing information, and there are three primary options for estimating
size:
Object points, function points, and lines of source code.
The Application Composition Model in COCOMO II uses object points to measure the size of the
software. Object points are an indirect software measure and are calculated based on the number of:
Screens (at the user interface), reports, and components likely to be required to build the application.
Each of these elements is classified into three complexity levels (simple, medium, or difficult).
Complexity is determined by:
The number and source of client and server data tables needed.
After determining complexity, the object points are calculated by multiplying the number of object
instances (screens, reports, components) by predefined weighting factors for each complexity level.
The total object point count is then the sum of these weighted values.
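The weighting step can be sketched directly. The complexity weights below follow the commonly published COCOMO II object-point table (screens 1/2/3, reports 2/5/8, 3GL components 10), but treat them as an assumption to be checked against your COCOMO II reference; the inventory of instances is hypothetical:

```python
# Object-point weights by complexity level (commonly published
# COCOMO II table; verify against your reference).
weights = {
    "screen":    {"simple": 1, "medium": 2, "difficult": 3},
    "report":    {"simple": 2, "medium": 5, "difficult": 8},
    "component": {"3GL": 10},
}

# Hypothetical inventory of object instances for a small application.
counts = {
    ("screen", "simple"):    4,
    ("screen", "difficult"): 2,
    ("report", "medium"):    3,
    ("component", "3GL"):    1,
}

# Multiply each instance count by its complexity weight and sum.
object_points = sum(n * weights[kind][level]
                    for (kind, level), n in counts.items())
print(object_points)   # 4*1 + 2*3 + 3*5 + 1*10 = 35
```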
4. Adjustment for Reuse:
If component-based development or software reuse is applied, the object point count is adjusted for
reuse. The formula for this adjustment is:
NOP = (object points) x [(100 - %reuse) / 100]
Where:
NOP is the number of new object points, and %reuse is the percentage of screens, reports, and components reused from earlier applications.
5. Productivity Rate:
To estimate effort, the model requires a productivity rate, which depends on the experience of the
developers and the maturity of the development environment. This rate is used to estimate the
effort for the project based on the calculated object points.
Estimated effort = NOP / PROD
Where:
NOP is the adjusted object-point count, and PROD is the productivity rate, expressed in object points per person-month.
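Combining the reuse adjustment (NOP = object points x (100 - %reuse) / 100) with the productivity rate (effort = NOP / PROD) gives the application-composition effort estimate. A sketch with illustrative numbers:

```python
def new_object_points(object_points: float, percent_reuse: float) -> float:
    """Adjust the object-point count for reuse, yielding NOP."""
    return object_points * (100 - percent_reuse) / 100

def estimated_effort(nop: float, prod: float) -> float:
    """Effort in person-months: NOP divided by the productivity rate
    PROD (object points per person-month)."""
    return nop / prod

# Illustrative values: 35 object points, 20% reuse, PROD = 7.
nop = new_object_points(object_points=35, percent_reuse=20)   # 28.0
print(estimated_effort(nop, prod=7))                          # 4.0 person-months
```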
The software equation [Put92] is a dynamic multivariable model that assumes a specific