
MODULE 5

13.1 Introduction

Two perspectives on quality:

1. For Selection (Customer Perspective):


A potential customer evaluates whether a system meets specified quality
requirements during the selection process.

2. For Development (Developer Perspective):


Software developers require early assessments of quality during the development
process to ensure that the methods employed will result in a high-quality system.

13.2 The Place of Software Quality in Project Planning

Quality considerations are integrated throughout the project's lifecycle.

Step 1: Identify Project Scope and Objectives

 Quality Consideration: When defining the objectives of the project, some objectives may
explicitly relate to product quality.

Step 2: Identify Project Infrastructure

 Quality Consideration: Within this step, installation standards are identified (Activity 2.2).
These standards play a direct role in ensuring that the project's quality requirements are met
during the setup and operation phases.

Step 3: Analyze Project Characteristics

 Quality Consideration: This step includes analyzing the specific characteristics of the project
and identifying quality issues.

Step 4: Identify Products and Activities of the Project

 Quality Consideration: At this stage, the entry, exit, and process requirements for each
activity are identified. These requirements define the quality benchmarks that need to be
met before an activity begins or concludes.

Step 8: Review and Publicize Plan


 Quality Consideration: Before the project plan is finalized and shared, the overall quality
aspects are reviewed.

13.3 Importance of Software Quality

1. Increasing Criticality of Software:

 As organizations become more dependent on software, especially for safety-critical applications like aircraft control, ensuring reliability becomes paramount.
 This intensifies the need for robust quality assurance to build trust among users and customers.

2. Intangibility of Software:
 Unlike physical goods, software lacks tangible aspects, making it harder to
verify the satisfactory completion of tasks.
 To address this, developers should produce deliverables—documents,
prototypes, or tested modules—that provide evidence of progress and
quality.

3. Accumulating Errors in Development:

 Errors in software can cascade, as outputs from earlier development stages become inputs for later ones.
 This compounding effect increases the complexity of debugging and elevates costs, especially when errors are detected late in the project. Managing errors early is crucial for effective project control and cost management.

13.4 Defining Software Quality

Software Quality and Its Characteristics

Definition of Software Quality: Software quality refers to the degree to which a software product
meets its specified requirements and satisfies the needs and expectations of its users.

In the context of software engineering, quality is often divided into two categories:

 External Quality: This is the quality viewed by users, which includes usability, functionality,
and performance.

 Internal Quality: This refers to the characteristics that developers care about, such as code
structure, maintainability, and reliability.

Characteristics of Software Quality:

1. Reliability:

o Reliability refers to the ability of the software to perform its intended functions
without failure over a specified period of time.

o Key metrics for reliability include (a short calculation sketch follows the characteristics list):

 Availability: The percentage of time the system is available for use.


 Mean Time Between Failures (MTBF): The average time the system operates
before a failure occurs.

 Failure on Demand: The likelihood that the system will fail when required or
during critical operations.

 Support Activity: The number of fault reports generated and processed.

2. Maintainability:

o Maintainability measures how easy it is to modify the software after it has been
deployed, including fixing faults and making enhancements.

o Maintainability is closely related to:

 Changeability: The ease with which modifications can be made to the software.

 Analysability: The ease with which the causes of failures can be diagnosed.

3. Usability:

o Usability measures how easy and user-friendly the software is for end-users.

o This includes the software's ease of learning, its interface design, and how intuitive it
is for users.

4. Efficiency:

o Efficiency refers to the ability of the software to perform its tasks using the least
amount of resources, such as memory, processing time, and bandwidth.

o A software product is efficient if it performs well within the given hardware and
software environment constraints.

5. Portability:

o Portability is the ability of the software to be transferred from one environment to another with minimal effort. It includes the software's ability to run on different platforms or hardware configurations without requiring significant changes.

6. Scalability:

o Scalability is the ability of the software to handle growing amounts of work or its
potential to be expanded to accommodate growth. This includes the software's
ability to scale up (handle larger volumes) or scale out (work across multiple
machines).
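
A small calculation sketch for the reliability metrics listed under characteristic 1 above. It assumes the commonly used definitions (availability as the proportion of time the system is usable, MTBF as operating time divided by the number of failures); the figures are illustrative and not taken from the text.

```python
# Illustrative reliability-metric calculations (assumed standard definitions).

operating_hours = 720.0   # one month of operation (assumed figure)
downtime_hours = 3.6      # total time the system was unavailable (assumed)
failures = 4              # number of observed failures (assumed)

# Availability: percentage of time the system is available for use.
availability = (operating_hours - downtime_hours) / operating_hours * 100

# Mean Time Between Failures: average operating time before a failure occurs.
mtbf = (operating_hours - downtime_hours) / failures

print(f"Availability: {availability:.2f}%")   # -> 99.50%
print(f"MTBF: {mtbf:.1f} hours")              # -> 179.1 hours
```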

13.5 Software Quality Models

Software quality models are frameworks used to characterize and measure the quality of software
through various attributes.

Well-Known Models:

 Garvin’s Model: Focuses on general product attributes.
 McCall’s Model: Defines software quality through operational characteristics, defect-fixing ease, and portability.
 Dromey’s and Boehm’s Models: Other notable approaches.
 ISO 9126 Model: Introduced for standardization and discussed separately.

Garvin’s Quality Dimensions

David Garvin proposed eight general attributes to define the quality of any product:

1. Performance: Effectiveness in performing intended tasks.

2. Features: Availability of required functionalities.

3. Reliability: Probability of satisfactory performance over time.

4. Conformance: Adherence to specifications and requirements.

5. Durability: Longevity and lifecycle of the product.

6. Serviceability: Ease and speed of maintenance.

7. Aesthetics: Subjective appeal based on look and feel.

8. Perceived Quality: Users’ opinions and impressions of quality.

McCall’s Quality Model

McCall’s model breaks software quality into three high-level attributes:


1. Operational Characteristics: Reflecting functionality and reliability during operation.

2. Defect Fixing Ease: How simple it is to maintain the software.

3. Portability: Effort required for transitioning between platforms.

These high-level attributes are further defined by 11 software quality attributes:

 Correctness: Satisfies specifications.

 Reliability: Performs satisfactorily over time.

 Efficiency: Optimal use of computing resources.

 Integrity: Ensures data remains secure and valid.

 Usability: Ease of operation for users.

 Maintainability: Simplifies bug fixing.

 Flexibility: Adapts to changing needs.

 Testability: Ease of validating functionality.

 Portability: Transitioning effort across environments.

 Reusability: Reapplication in different contexts.

 Interoperability: Integration with other systems.

Dromey's Model

 Focus: Establishing a relationship between high-level software properties and low-level attributes.

 High-level properties:

1. Correctness: Ensuring the software meets its specifications.

2. Internal Characteristics: Attributes like structure and design that affect development
and performance.

3. Contextual Characteristics: How well the software fits into its operating
environment.
4. Descriptive Properties: Attributes like clarity and documentation for better
understanding.

 Model Representation: A hierarchical structure where the high-level properties are influenced by measurable, lower-level quality attributes.

 Purpose: To show how achieving lower-level attributes can improve the overall quality of the
software.

Boehm's Model

 Focus: Defining software quality from the perspective of end-user needs and usability.

 High-level characteristics:

1. As-is Utility: Usability in terms of ease, reliability, and efficiency.

2. Maintainability: Simplicity in understanding, modifying, and retesting the software.

3. Portability: Ease of adapting the software to different or changing environments.

 Model Representation: Hierarchical classification of ten measurable quality attributes grouped under the three high-level characteristics.

 Purpose: To focus on practical user concerns while incorporating a broad range of attributes.

13.7 Product and Process Metrics

Product vs. Process Metrics

1. Product Metrics:

o Measure characteristics of the software product being developed.

o Examples:

 Size Metrics: Lines of Code (LOC), Function Points.

 Effort Metrics: Person-Months (PM).

 Time Metrics: Development duration in months.

2. Process Metrics:

o Assess the performance of the software development process.

o Examples:

 Review effectiveness.

 Average defects found per hour of inspection.

 Average defect correction time.


 Productivity metrics.

 Failures and latent defects per LOC.
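
A small sketch of how a few of the product and process metrics above might be computed from project records. All counts are invented for illustration, and the metric definitions (LOC per person-month, defects found per inspection hour, average correction time) follow the usage implied in the lists.

```python
# Illustrative product and process metric calculations (figures are assumed).

loc = 33_000              # product size in lines of code (assumed)
effort_pm = 54.0          # total effort in person-months (assumed)
inspection_hours = 120.0  # total review/inspection time (assumed)
defects_found = 96        # defects found during inspections (assumed)
defect_fix_hours = 310.0  # total time spent correcting those defects (assumed)

productivity = loc / effort_pm                        # product metric: LOC per person-month
defects_per_hour = defects_found / inspection_hours   # process metric: defects found per inspection hour
avg_fix_time = defect_fix_hours / defects_found       # process metric: average defect correction time

print(f"Productivity: {productivity:.0f} LOC/PM")              # ~611 LOC/PM
print(f"Defects per inspection hour: {defects_per_hour:.2f}")  # 0.80
print(f"Average correction time: {avg_fix_time:.1f} hours")    # ~3.2 hours
```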

13.8 Product versus Process Quality Management

System Development Process and Error Management

 The development process involves a sequence of interlinked activities.

 Errors can arise from:

o Defects in the process itself (e.g., logic errors by developers).

o Miscommunication between stages.

 Cascading Effect of Errors:

o Errors not removed early require more extensive rework later.

o For example, an error in the specification stage found during testing affects all
intermediate stages.

Process Requirements for Error Prevention

To manage errors effectively, each step in the development process should include well-
defined requirements:

1. Entry Requirements:

o Conditions that must be fulfilled before a stage begins.

o Example: Test data and expected results must be prepared and approved
before testing starts.

2. Implementation Requirements:

o Guidelines for conducting processes to ensure thoroughness.

o Example: During testing, when an error is fixed, all prior successful test cases
must be rerun to ensure no new issues are introduced.

3. Exit Requirements:
o Conditions to be met before a stage is considered complete.

o Example: For the testing phase to be deemed complete, all tests must run
successfully with no outstanding errors.

26.5 Software Project Estimation

Software estimation is the process of predicting the cost, effort, time, or resources required
to complete a software project.

Several methods to improve estimation accuracy:

1. Delay estimation until late in the project: This approach would provide the most
accurate estimates, but it’s impractical since cost estimates must be provided
upfront.
2. Base estimates on similar past projects: This works when the current project is
similar to past ones, but the risk is that past experience doesn't always predict future
outcomes accurately.

3. Use simple decomposition techniques: By breaking the project into smaller components (functions or activities), cost and effort can be estimated in stages, making the process more manageable.

4. Use empirical models: These models rely on historical data and use parameters like
lines of code (LOC) or function points (FP) to generate estimates. The formula used is
d = f(vi), where d represents the estimated value (cost, effort, duration) and vi are
independent variables (e.g., LOC or FP).

26.6 Decomposition Techniques

26.6.1 Software Sizing

1. Software Sizing:

o Size Estimation is one of the first challenges in project planning. It is a quantifiable outcome that can be measured in terms of:

 Lines of Code (LOC): A direct measure of size.

 Function Points (FP): An indirect measure based on the functional requirements of the software.

2. Approaches to Software Sizing (as suggested by Putnam and Myers):

o Fuzzy Logic Sizing: This uses approximate reasoning techniques. It involves identifying the type of application and refining its magnitude on a qualitative scale.

o Function Point Sizing: Estimating the characteristics of the software’s information domain, which is then translated into function points.

o Standard Component Sizing: Software can be broken into standard components (e.g., modules, screens, reports). By estimating the number of these components and using historical data, the software's size can be estimated.

o Change Sizing: Used when modifying existing software. It estimates the number of modifications required (e.g., adding, changing, or deleting code).

3. Combining Sizing Approaches:

o To improve estimation reliability, Putnam and Myers recommend combining the results from these different sizing approaches statistically. The approach is based on developing three estimates:

1. Optimistic (low) estimate

2. Most Likely estimate

3. Pessimistic (high) estimate

26.6.2 Problem-Based Estimation

Problem-based estimation techniques in software project management focus on the use of LOC
(Lines of Code) and FP (Function Points).

1. Purpose of LOC and FP Data:

o Used to measure productivity and as variables to size software components.

o Serve as baseline metrics derived from past projects to estimate cost and effort.

2. Process:

o Begin with a clear, bounded software scope.

o Decompose the scope into smaller problem functions or components.

o Estimate LOC or FP for each function/component.

o Use baseline productivity metrics (e.g., LOC/person-month or FP/person-month) to calculate effort and cost.

3. LOC vs. FP Estimation:

o LOC Estimation:

 Requires detailed decomposition into finer components.


 More detailed partitioning improves estimation accuracy.

o FP Estimation:

 Decomposes information domain characteristics (inputs, outputs, data files, inquiries, interfaces).

 Estimates are influenced by 14 complexity adjustment factors.

 The final FP value is tied to past data for projections.

4. Range-Based Estimation:

o Use optimistic, most likely, and pessimistic estimates for each function or domain.

o A weighted average (three-point estimate) is calculated:

EV = (opt + 4m + pess) / 6

where opt is the optimistic, m the most likely, and pess the pessimistic estimate.

o This beta probability distribution gives more weight to the most likely estimate.

5. Validation:

o Cross-check estimates with alternative techniques to ensure reliability.

26.6.3 An Example of LOC-Based Estimation

1. Preliminary Statement of Scope

The software scope includes:

 Accepting 2D and 3D geometric data.

 Providing a user interface adhering to good human/machine interaction design principles.

 Maintaining a CAD database for geometric and supporting data.

 Developing design analysis modules for outputs displayed on various devices.

 Interacting with peripheral devices (mouse, digitizer, printer, plotter).

2. Decomposition and LOC Estimation


The software is broken into major functions (e.g., 3D geometric analysis, user interface design). For
each function:

 A range of LOC estimates is provided:

o Optimistic: Minimum likely LOC (e.g., 4600 LOC for 3D analysis).

o Most Likely: Central estimate (e.g., 6900 LOC).

o Pessimistic: Maximum likely LOC (e.g., 8600 LOC).
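
A sketch applying the three-point (beta) estimate from Section 26.6.2 to the 3D geometric analysis range above. The productivity rate and labor cost used to convert the expected LOC into effort and cost are assumed figures, not values given in the text.

```python
def expected_value(opt: float, most_likely: float, pess: float) -> float:
    """Three-point (beta) estimate: weights the most likely value four times."""
    return (opt + 4 * most_likely + pess) / 6

# LOC range for the 3D geometric analysis function (from the text).
loc_est = expected_value(4600, 6900, 8600)        # -> 6800 LOC

# Assumed baseline metrics for illustration only.
productivity_loc_pm = 620.0     # LOC per person-month (assumed)
labor_rate_per_pm = 8000.0      # cost per person-month (assumed)

effort_pm = loc_est / productivity_loc_pm         # ~11.0 person-months
cost = effort_pm * labor_rate_per_pm              # ~$87,700

print(f"Expected size: {loc_est:.0f} LOC")
print(f"Estimated effort: {effort_pm:.1f} person-months")
print(f"Estimated cost: ${cost:,.0f}")
```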


26.6.4 An Example of FP-Based Estimation

FP-based estimation determines the effort and cost of developing the CAD software by focusing on
information domain values rather than software functions.

1. Decomposition Based on Information Domain Values

For FP-based estimation, the focus is on:

 Inputs (e.g., geometric data entry by the engineer).

 Outputs (e.g., graphical results for design analysis).

 Inquiries (e.g., interactive queries).

 Files (e.g., CAD database).

 External Interfaces (e.g., peripherals like a plotter and mouse).
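
A sketch of how information domain counts like those above might be turned into a function-point value. It uses the widely published average complexity weights and the standard adjustment FP = count_total * (0.65 + 0.01 * sum(Fi)); the counts and the complexity-factor total below are assumptions for illustration.

```python
# Illustrative FP calculation (counts and adjustment factors are assumed).

# (count, average-complexity weight) for each information domain value.
domain_values = {
    "inputs":              (24, 4),   # external inputs
    "outputs":             (16, 5),   # external outputs
    "inquiries":           (22, 4),   # external inquiries
    "files":               (4, 10),   # internal logical files
    "external_interfaces": (2, 7),    # external interface files
}

count_total = sum(count * weight for count, weight in domain_values.values())

# Sum of the 14 complexity adjustment factors, each rated 0-5 (assumed total).
sum_fi = 52

fp = count_total * (0.65 + 0.01 * sum_fi)
print(f"Unadjusted count total: {count_total}")   # 318
print(f"Function points: {fp:.0f}")               # ~372
```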


26.6.5 Process-Based Estimation

The process-based estimation approach involves breaking down the software development process into tasks and estimating the effort required for each task.

1. Process Decomposition:

o The estimation process starts by decomposing the software functions from the project scope and breaking them down into process activities.
o The framework activities (e.g., design, coding, testing) associated with each
function are identified and estimated in terms of effort and cost.

2. Effort Estimation for Tasks:

o After identifying the functions and process activities, the effort (measured in
person-months or other units) required for each task is estimated. This forms
the central matrix of the estimation.

o Each task's effort estimate is influenced by factors like the complexity of the
task and the skill level required.

3. Labor Rates and Cost Calculation:

o Labor rates (cost per unit effort) are then applied to each task's estimated
effort. These rates can vary depending on the task, as senior staff members
are typically involved in earlier activities and are more expensive compared to
junior staff involved in later stages like construction and release.

o The costs and effort for each function and process activity are then
calculated.

4. Comparing Estimates:

o Two or more estimates (from process-based estimation and other methods like LOC or FP estimation) are generated and compared.

o If these estimates align well, it increases the reliability of the estimates.

o If the estimates diverge significantly, further analysis is required to identify the cause of the discrepancy and refine the estimates.
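
A small sketch of the arithmetic described above: an effort matrix (function by framework activity, in person-months) is summed and multiplied by per-activity labor rates to obtain a cost. All figures are invented for illustration.

```python
# Illustrative process-based estimation arithmetic (all figures assumed).

activities = ["analysis", "design", "code", "test"]

# Effort in person-months for each function across the framework activities.
effort_matrix = {
    "3D geometric analysis": [0.5, 4.0, 0.75, 2.0],
    "user interface":        [1.0, 3.0, 0.50, 1.5],
    "database management":   [0.5, 2.5, 0.50, 1.0],
}

# Labor rate per person-month for each activity; earlier activities use
# more senior (more expensive) staff, as noted in the text.
rates = {"analysis": 9600.0, "design": 9000.0, "code": 7200.0, "test": 7800.0}

total_effort = sum(sum(row) for row in effort_matrix.values())
total_cost = sum(
    effort * rates[activity]
    for row in effort_matrix.values()
    for activity, effort in zip(activities, row)
)

print(f"Total effort: {total_effort:.2f} person-months")
print(f"Total cost: ${total_cost:,.0f}")
```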

26.6.6 An Example of Process-Based Estimation

Process-based estimation is illustrated with an example of a CAD software project.

1. Project Scope and Functions:

o The CAD software's configuration and software functions remain unchanged as indicated in the project scope.
o The estimation focuses on the engineering activities required to develop the
software, such as planning, risk analysis, design, coding, and testing.

2. Effort Estimates by Activity:

o Effort estimates (in person-months) are provided for each software engineering activity, corresponding to each software function (represented in the table).

o Tasks like customer communication, planning, and risk analysis are included
in the estimates, as shown in the total row at the bottom of the table.

3. Decomposition into Tasks:

o The engineering and construction release activities are broken down into
smaller tasks like requirements analysis, design, coding, and testing.

o The table summarizes the total effort for these tasks both horizontally (for
each function) and vertically (for each engineering activity).

4. Effort Distribution:

o A significant portion (53%) of the total effort is allocated to front-end engineering tasks, particularly requirements analysis and design.

o This indicates the high importance of these early stages of development in ensuring a successful project.

26.6.7 Estimation with Use Cases

Estimation with use cases is a method used to estimate the size and effort required for a
software development project based on the use cases that describe the system's
functionality.

Steps for Estimating with Use Cases

 Hierarchical Structure: Use cases should be organized into a structural hierarchy. Higher-level use cases represent more abstract, complex functionality, while lower-level use cases represent smaller, more detailed tasks.

 Level of Detail: The level of detail for each use case needs to be considered.

 Historical Data: Use historical data from similar projects to determine the typical
number of lines of code (LOC) or function points (FP) associated with each use case
at different levels of abstraction.

 Estimate Adjustments: Once you have the number of use cases, scenarios, and
pages, use adjustments based on these factors.

o Number of Pages (Pa): Longer use cases (more pages) also require more effort.

26.6.8 An Example of Use-Case–Based Estimation

Let's say we are estimating a CAD system with three subsystem groups: User Interface,
Engineering, and Infrastructure. Each subsystem has a set of use cases.

 For User Interface, there are 6 use cases with 10 scenarios each and an average
length of 6 pages.

 For Engineering, there are 10 use cases with 20 scenarios each and an average length
of 8 pages.

 For Infrastructure, there are 5 use cases with 6 scenarios each and an average length
of 5 pages.
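
The adjustment formula itself is not reproduced in the notes above, so the sketch below only illustrates the idea: an assumed historical LOC average per use case is scaled by how the scenario and page counts of each subsystem compare with an assumed "typical" use case. It is not the textbook's exact formula.

```python
# Illustrative use-case-based sizing (the per-use-case LOC average and the
# scaling approach are assumptions, not the textbook's exact formula).

subsystems = {
    # name: (use cases, scenarios per use case, average pages per use case)
    "user interface": (6, 10, 6),
    "engineering":    (10, 20, 8),
    "infrastructure": (5, 6, 5),
}

# Assumed historical averages for a use case of "typical" size.
loc_per_use_case = 560     # average LOC implied by one use case (assumed)
typical_scenarios = 12     # scenario count of a typical use case (assumed)
typical_pages = 6          # page count of a typical use case (assumed)

total_loc = 0.0
for name, (n_uc, scenarios, pages) in subsystems.items():
    # Scale the historical average by how much larger or smaller these use
    # cases are than the "typical" one (assumed linear adjustment).
    adjustment = 0.5 * (scenarios / typical_scenarios + pages / typical_pages)
    subsystem_loc = n_uc * loc_per_use_case * adjustment
    total_loc += subsystem_loc
    print(f"{name}: {subsystem_loc:,.0f} LOC")

print(f"Estimated total size: {total_loc:,.0f} LOC")
```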
26.7 Empirical Estimation Models

26.7.1 The Structure of Estimation Models

Examples of LOC-Based Estimation Models:
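
The specific example models are not reproduced here. LOC-based models of this kind generally take the form E = A + B x (size)^C, with size in KLOC and constants A, B, C derived from historical project data; the constants in the sketch below are illustrative placeholders rather than a particular published model.

```python
# Generic structure of a LOC-based estimation model: E = A + B * (KLOC ** C).
# The constants A, B and C below are illustrative placeholders only.

def estimated_effort(kloc: float, a: float = 0.0, b: float = 3.2, c: float = 1.05) -> float:
    """Effort in person-months as a function of size in KLOC."""
    return a + b * kloc ** c

print(f"{estimated_effort(33.2):.0f} person-months")  # ~127 PM for 33.2 KLOC
```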

26.7.2 The COCOMO II Model

The COCOMO II (COnstructive COst MOdel II) is an evolved and more comprehensive version of the
original COCOMO model introduced by Barry Boehm.
1. Hierarchy of COCOMO II Models:

COCOMO II is a hierarchy of estimation models that are applied at different stages of software
development:

 Application Composition Model: Used in the early stages when user interface prototyping,
system interaction, performance assessment, and technology maturity evaluation are critical.

 Early Design Stage Model: Applied once the requirements are stabilized, and the basic
software architecture is defined.

 Post-Architecture-Stage Model: Used during the construction phase of the software.

2. Sizing Options in COCOMO II:

COCOMO II models require sizing information, and there are three primary options for estimating
size:

 Object Points: Used in the application composition model.

 Function Points (FP): Used in other models.

 Lines of Code (LOC): Commonly used in various models.

3. Application Composition Model:

The Application Composition Model in COCOMO II uses object points to measure the size of the
software. Object points are an indirect software measure and are calculated based on the number of:

 Screens (user interface elements)

 Reports (output generated by the system)

 Components (software components likely required in the application)

Each of these elements is classified into three complexity levels (simple, medium, or difficult).
Complexity is determined by:

 The number and source of client and server data tables needed.

 The number of views or sections presented as part of each screen or report.

After determining complexity, the object points are calculated by multiplying the number of object
instances (screens, reports, components) by predefined weighting factors for each complexity level.
The total object point count is then the sum of these weighted values.
4. Adjustment for Reuse:

If component-based development or software reuse is applied, the object point count is adjusted for
reuse. The formula for this adjustment is:

NOP = (object points x (100 - %reuse)) / 100

Where:

 NOP is the new object points after accounting for reuse.

 %reuse is the percentage of reused components.

5. Productivity Rate:

To estimate effort, the model requires a productivity rate, which depends on the experience of the
developers and the maturity of the development environment. This rate is used to estimate the
effort for the project based on the calculated object points.

The formula for estimating effort is:

Estimated effort = NOP / PROD

Where:

 NOP is the adjusted object points.

 PROD is the productivity rate.
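
A sketch tying the two formulas above together. The object-point count, reuse percentage, and productivity rate are assumed figures chosen only to show the arithmetic.

```python
# Illustrative COCOMO II application composition estimate (figures assumed).

# Weighted object-point count: instances of screens/reports/components
# multiplied by complexity weights and summed (total assumed here).
object_points = 80

reuse_percent = 25.0   # portion of components expected to be reused (assumed)
prod = 13.0            # productivity rate, NOP per person-month (assumed)

# New object points after accounting for reuse.
nop = object_points * (100 - reuse_percent) / 100    # -> 60.0

# Estimated effort.
effort_pm = nop / prod                               # ~4.6 person-months

print(f"NOP: {nop:.1f}")
print(f"Estimated effort: {effort_pm:.1f} person-months")
```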

26.7.3 The Software Equation

The software equation [Put92] is a dynamic multivariable model that assumes a specific distribution of effort over the life of a software development project.
