Chapter 2 Slides

The document discusses different software development lifecycle models including waterfall, V-model, iterative, incremental, agile approaches like Scrum and Kanban, and describes their characteristics and how testing is integrated at each stage. It also covers different test levels from unit to integration to system and acceptance testing and how each level has specific objectives, test basis, test objects, and responsibilities. Characteristics of good testing practices are defined which emphasize early involvement of testers and corresponding test activities for each development activity.

Uploaded by Mukesh Kumar

Software Development Lifecycle Models


A software development lifecycle model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically.
Characteristics of Good Testing

In any software development lifecycle model:

1. For every development activity, there is a corresponding test activity.

2. Each test level has test objectives specific to that level.

3. Test analysis and design for a given test level begin during the corresponding development activity.

4. Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products.
SDLC

SDLC models fall into two broad categories: sequential, and iterative & incremental.
Software Development & Software Testing

• A sequential development model describes the software development process as a linear, sequential flow of activities.

• This means that any phase in the development process should begin when the previous phase is complete.

• In theory, there is no overlap of phases, but in practice, it is beneficial to have early feedback from the following phase.
Waterfall Model

• In the Waterfall model, the development activities are completed one after another.

• In this model, test activities only occur after all other development activities have been completed.
V-Model

Unlike the Waterfall model, the V-model integrates the test process throughout the
development process, implementing the principle of early testing.
Incremental Development

• Incremental development involves establishing requirements, designing, building, and testing a system in pieces, which means that the software’s features grow incrementally.

• The size of these feature increments varies, with some methods having larger pieces and some smaller pieces.

• The feature increments can be as small as a single change to a user interface screen or a new query option.
Iterative Development
• Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles, often of a fixed duration.

• Iterations may involve changes to features developed in earlier iterations, along with changes in project scope.

• Each iteration delivers working software which is a growing subset of the overall set of features, until the final software is delivered or development is stopped.
Rational Unified Process

Each iteration tends to be relatively long (e.g., two to three months), and the feature
increments are correspondingly large, such as two or three groups of related features
Scrum

Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the
feature increments are correspondingly small, such as a few enhancements and/or two
or three new features
Kanban

Implemented with or without fixed-length iterations, which can deliver either a single
enhancement or feature upon completion, or can group features together to release at
once
Spiral (or Prototyping)

Involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work.
Continuous Delivery

• In some cases, teams use continuous delivery or continuous deployment, both of which involve significant automation of multiple test levels as part of their delivery pipelines.

• Many development efforts using these methods include the concept of self-organizing teams.

• Regression testing is increasingly important as the system grows.
Software Development & Software Testing
• In addition, software development lifecycle models themselves may be combined.

• For example, a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality.

• Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.
Internet of Things (IoT)

Internet of Things (IoT) systems, which consist of many different objects, such as devices,
products, and services, typically apply separate software development lifecycle models
for each object
Quiz Time
• Which one of the following is the BEST definition of an incremental
development model?

A. Defining requirements, designing software and testing are done in a series with added
pieces

B. A phase in the development process should begin when the previous phase is complete

C. Testing is viewed as a separate phase which takes place after development has been
completed

D. Testing is added to development as an increment


Quiz Time
• Which one of the following is the BEST definition of an incremental
development model?

A. Defining requirements, designing software and testing are done in a series with added pieces (correct answer)

B. A phase in the development process should begin when the previous phase is complete

C. Testing is viewed as a separate phase which takes place after development has been
completed

D. Testing is added to development as an increment


Quiz Time
• Which of the following is a true statement regarding the V-model lifecycle?

A. Testing involvement starts when the code is complete

B. The test process is integrated with the development process

C. The software is built in increments and each increment has activities for requirements,
design, build and test

D. All activities for development and test are completed sequentially


Quiz Time
• Which of the following is a true statement regarding the V-model lifecycle?

A. Testing involvement starts when the code is complete

B. The test process is integrated with the development process (correct answer)

C. The software is built in increments and each increment has activities for requirements,
design, build and test

D. All activities for development and test are completed sequentially


Quiz Time
• In an iterative lifecycle model, which of the following is an accurate statement
about testing activities?

A. For every development activity, there should be a corresponding testing activity

B. For every testing activity, appropriate documentation should be produced, versioned and
stored

C. For every development activity resulting in code, there should be a testing activity to
document test cases

D. For every testing activity, metrics should be recorded and posted to a metrics dashboard
for all stakeholders
Quiz Time
• In an iterative lifecycle model, which of the following is an accurate statement
about testing activities?

A. For every development activity, there should be a corresponding testing activity (correct answer)

B. For every testing activity, appropriate documentation should be produced, versioned and
stored

C. For every development activity resulting in code, there should be a testing activity to
document test cases

D. For every testing activity, metrics should be recorded and posted to a metrics dashboard
for all stakeholders
Test Levels
• Test levels are groups of test activities that are organized and managed together.

• Each test level is an instance of the test process.

• Test levels are related to other activities within the software development lifecycle.

(Test level pyramid, bottom to top: Unit, Integration, System, Acceptance)
Test Levels
• Test levels are characterized by the following attributes:
  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (i.e., what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities
Test Levels
• For every test level, a suitable test environment is required.

• In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.
Component Testing

Component testing (also known as unit or module testing) focuses on components that are separately testable.
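As a minimal sketch (the `discount` function and its pricing rules are hypothetical), a component test exercises one separately testable unit in isolation, with no other part of the system involved:

```python
# Unit under test: a hypothetical pricing function (illustrative only).
def discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Component tests: verify the unit's behavior in isolation.
def test_discount():
    assert discount(100.0, 25) == 75.0   # normal case
    assert discount(80.0, 0) == 80.0     # boundary: no discount
    try:
        discount(50.0, 150)              # invalid input is rejected
        assert False, "expected ValueError"
    except ValueError:
        pass

test_discount()
```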
Component Testing
• Objectives: reduce risk, find defects, prevent defect escape, verify behavior, build confidence

• Test basis: detailed design, code, data model, component specification

• Test objects: components (units, modules), code, data structures, classes, database modules

• Defects & failures: incorrect functionality, data flow problems, incorrect code and logic

• Approaches & responsibilities: done by the developer, test-driven development (TDD)
Test Driven Development

Developing automated test cases, then building and integrating small pieces of code, then executing the component tests, correcting any issues, and refactoring the code.
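The cycle above can be sketched as follows (the `fizzbuzz` function and its rules are illustrative): the test is written first and fails, just enough code is then added to make it pass, and the tests are re-run before refactoring.

```python
import unittest

# Step 1 (red): write the tests first. At this point fizzbuzz()
# does not exist yet, so this suite fails when first run.
class TestFizzBuzz(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2 (green): write just enough code to make the tests pass.
def fizzbuzz(n: int) -> str:
    return "Fizz" if n % 3 == 0 else str(n)

# Step 3: execute the component tests; once green, refactor safely.
suite = unittest.TestLoader().loadTestsFromTestCase(TestFizzBuzz)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```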
Integration Testing

Integration testing focuses on interactions between components or systems. It comes in two forms: component integration testing and system integration testing.
Integration Testing
• Objectives: reduce risk, find defects, prevent defect escape, verify behavior, build confidence

• Test basis: software and system design, sequence diagrams, use cases, workflows

• Test objects: subsystems, APIs, microservices, interfaces, databases, infrastructure

• Defects & failures: failures in communication, incorrect data, interface mismatch

• Approaches & responsibilities: top-down, bottom-up, simulators, stubs, drivers
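A stub can stand in for a component that is not yet available, so the test can focus on the interaction across the interface. A minimal sketch (all class names are illustrative):

```python
# Component under test: builds a greeting using a user service.
class GreetingService:
    def __init__(self, user_service):
        self.user_service = user_service  # dependency is injected

    def greet(self, user_id: int) -> str:
        name = self.user_service.get_name(user_id)
        return f"Hello, {name}!"

# Stub standing in for the real user service, which may not be built
# or reachable yet. It returns canned answers and records the calls.
class UserServiceStub:
    def __init__(self):
        self.calls = []

    def get_name(self, user_id: int) -> str:
        self.calls.append(user_id)
        return "Alice"

# The integration test checks the communication across the interface,
# not the internals of either component.
stub = UserServiceStub()
service = GreetingService(stub)
assert service.greet(42) == "Hello, Alice!"
assert stub.calls == [42]   # the interface was exercised as expected
```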
Integration Testing
• Component integration tests and system integration tests should concentrate on the integration itself.

• For example, if integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing.

• If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing.
Integration Testing

• Component integration testing is often the responsibility of developers.

• System integration testing is generally the responsibility of testers.
Integration Testing Planning

• In order to simplify defect isolation and detect defects early, integration should normally be incremental.

• The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system.

• This is one reason that continuous integration, where software is integrated on a component-by-component basis, has become common practice.
System Testing

System testing focuses on the behavior and capabilities of a whole system or product,
often considering the end-to-end tasks the system can perform and the non-functional
behaviors it exhibits while performing those tasks
System Testing

• Objectives: reduce risk, find defects, prevent defect escape, verify behavior, build confidence

• Test basis: system requirement specifications (SRS), risk analysis reports, use cases, epics, user stories, state diagrams, manuals

• Test objects: applications, operating systems, system under test, system configuration

• Defects & failures: system failure, incorrect data flow, unexpected behavior
System Testing

• System testing often produces information that is used by stakeholders to make release decisions.

• The test environment should ideally correspond to the final target or production environment.

• System testing should focus on the overall, end-to-end behavior of the system as a whole.

• Independent testers typically carry out system testing.
Acceptance Testing

Acceptance testing, like system testing, typically focuses on the behavior and capabilities
of a whole system or product
Acceptance Testing

• Objectives: establish confidence, validate the system, verify behavior

• Test basis: business processes, requirements, use cases, installation procedures, risk analysis reports, regulations, contracts

• Test objects: system under test, recovery systems, hot sites, forms, reports

• Defects & failures: system workflow defects, business rule defects, contract defects, non-functional failures
Acceptance Testing

• Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end user).

• Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk.
Common forms of acceptance testing:
• User acceptance testing
• Operational acceptance testing
• Contract or regulatory acceptance testing
• Alpha & beta testing
Acceptance Testing

• Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders.

• Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times.
Quiz Time
• Which of the following comparisons of component testing and system testing is true?

A. Component testing verifies the functioning of software modules, program objects, and classes that are
separately testable, whereas system testing verifies interfaces between components and interactions with
different parts of the system

B. Test cases for component testing are usually derived from component specifications, design specifications, or
data models, whereas test cases for system testing are usually derived from requirement specifications,
functional specifications or use cases

C. Component testing focuses on functional characteristics, whereas system testing focuses on functional and
non-functional characteristics

D. Component testing is the responsibility of the technical testers, whereas system testing typically is the
responsibility of the users of the system
Quiz Time
• Which of the following comparisons of component testing and system testing is true?

A. Component testing verifies the functioning of software modules, program objects, and classes that are
separately testable, whereas system testing verifies interfaces between components and interactions with
different parts of the system

B. Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases (correct answer)

C. Component testing focuses on functional characteristics, whereas system testing focuses on functional and
non-functional characteristics

D. Component testing is the responsibility of the technical testers, whereas system testing typically is the
responsibility of the users of the system
Quiz Time
• What type of testing is normally conducted to verify that a product meets a particular
regulatory requirement?

A. Unit Testing

B. Integration Testing

C. System Testing

D. Acceptance Testing
Quiz Time
• What type of testing is normally conducted to verify that a product meets a particular
regulatory requirement?

A. Unit Testing

B. Integration Testing

C. System Testing

D. Acceptance Testing (correct answer)
Quiz Time
• Use cases are a test basis for which level of testing?

A. Unit

B. System

C. Load and performance

D. Usability
Quiz Time
• Use cases are a test basis for which level of testing?

A. Unit

B. System (correct answer)

C. Load and performance

D. Usability
Test Types

• A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives.
Functional Testing
• Functional testing of a system involves tests that evaluate functions that the system should perform.

• Functional requirements may be described in work products such as:
  • Business requirements specifications
  • Epics
  • User stories
  • Use cases
  • Functional specifications

• They may be undocumented.
Functional Testing

• The functions are “what” the system should do.

• Functional tests should be performed at all test levels, though the focus is different at each level.
Functional Testing

(Black box: input → output, internal structure not considered)

Functional testing considers the behavior of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system.
Functional Coverage

• Functional coverage is the extent to which some type of functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered.

• For example, using traceability between tests and functional requirements, the percentage of these requirements which are addressed by testing can be calculated, potentially identifying coverage gaps.
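The calculation behind that example can be sketched as follows (the requirement and test-case IDs are invented for illustration):

```python
# Hypothetical traceability: requirement ID -> test cases covering it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test traces here: a coverage gap
    "REQ-004": ["TC-04"],
}

covered = [req for req, tests in traceability.items() if tests]
coverage = 100 * len(covered) / len(traceability)
gaps = [req for req, tests in traceability.items() if not tests]

print(f"Functional coverage: {coverage:.0f}%")  # 75%
print(f"Coverage gaps: {gaps}")                 # ['REQ-003']
```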
Non-functional Testing
• Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance, efficiency or security.

• Non-functional testing is the testing of “how well” the system behaves.

• Non-functional testing can be done at all test levels.
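As a minimal sketch of the "how well" idea, a non-functional test can check a response-time budget alongside the functional result (the `search` function and the 50 ms budget are illustrative):

```python
import time

def search(items, target):
    """Function under test (illustrative)."""
    return target in items

# Non-functional check: "how well" rather than "what" -- here, a
# simple response-time assertion against an illustrative 50 ms budget.
data = list(range(100_000))
start = time.perf_counter()
found = search(data, 99_999)
elapsed_ms = (time.perf_counter() - start) * 1000

assert found is True                                      # functional
assert elapsed_ms < 50, f"too slow: {elapsed_ms:.1f} ms"  # non-functional
```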
Non-Functional Coverage

• Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, and is expressed as a percentage of the type(s) of element being covered.

• For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps.
White-box Testing

• White-box testing derives tests based on the system’s internal structure or implementation.

(White box: input → output, internal structure visible)

• Internal structure may include code, architecture, work flows, and/or data flows within the system.
Structural Coverage

• Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered.

• At the component testing level, code coverage is based on the percentage of component code that has been tested.

• At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests.
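The arithmetic behind structural coverage can be made visible with a hand-instrumented example (real projects would use a coverage tool; the `classify` function is illustrative):

```python
# A function instrumented by hand to record which branches execute.
branches_hit = set()

def classify(n: int) -> str:
    if n < 0:
        branches_hit.add("negative")
        return "negative"
    if n == 0:
        branches_hit.add("zero")
        return "zero"
    branches_hit.add("positive")
    return "positive"

ALL_BRANCHES = {"negative", "zero", "positive"}

# Two tests exercise two of the three branches...
classify(-5)
classify(7)
coverage = 100 * len(branches_hit) / len(ALL_BRANCHES)
print(f"Branch coverage: {coverage:.0f}%")   # 67% -- 'zero' not exercised

# ...adding a third test closes the gap.
classify(0)
coverage = 100 * len(branches_hit) / len(ALL_BRANCHES)
print(f"Branch coverage: {coverage:.0f}%")   # 100%
```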
Change-related Testing

When changes are made to a system, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.
Confirmation Testing

• After a defect is fixed, the software should be tested.

• At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version.

• The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.
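A confirmation test therefore pins the exact reproduction steps of the original defect. A minimal sketch, assuming a hypothetical defect report DEF-123 against an email validator:

```python
import re

# Hypothetical defect DEF-123: "validation rejects valid addresses
# containing a '+' sign". The fixed unit (illustrative):
def is_valid_email(address: str) -> bool:
    # Fixed pattern: '+' is now allowed in the local part.
    return re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", address) is not None

# Confirmation test: re-execute the exact input that reproduced the
# failure, and confirm the defect is gone on the new version.
def test_def_123_confirmed_fixed():
    assert is_valid_email("jane+test@example.com")  # originally failed

test_def_123_confirmed_fixed()
```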
Regression Testing

• It is possible that a change made in one part of the code may accidentally affect the behavior of other parts of the code.

• Changes may include changes to the environment.

• Regression testing involves running tests to detect such unintended side effects.
Regression Testing

• Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.

• Automation of these tests should start early in the project.

• Change-related testing is performed at all test levels.
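An automated regression suite is essentially a set of pinned checks of existing behavior, re-run on every build. A minimal sketch (the `to_slug` function and its cases are illustrative; real suites typically use pytest in a CI pipeline):

```python
def to_slug(title: str) -> str:
    """Existing, already-tested behavior we want to protect."""
    return "-".join(title.lower().split())

REGRESSION_CASES = [            # pinned expected behavior
    ("Hello World", "hello-world"),
    ("  Spaced   Out  ", "spaced-out"),
    ("already-a-slug", "already-a-slug"),
]

def run_regression_suite():
    # Collect any case where the current build deviates from the
    # pinned expectation -- i.e., an unintended side effect.
    failures = [(inp, expected, to_slug(inp))
                for inp, expected in REGRESSION_CASES
                if to_slug(inp) != expected]
    assert not failures, f"regressions detected: {failures}"

run_regression_suite()   # would run automatically on every build
```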


Test Types & Test Levels

• It is possible to perform any of the test types mentioned above at any test level.
• Functional testing at component level -> testing that the code performs its function
• Non-functional testing at component level -> testing how many CPU cycles the code uses
• Functional testing at acceptance level -> testing that users can log in successfully to the website
• Non-functional testing at acceptance level -> gathering users’ opinions about the usability of the application
Quiz Time
• Which one of the following is TRUE?

A. The purpose of regression testing is to check if the correction has been successfully implemented, while the
purpose of confirmation testing is to confirm that the correction has no side effects

B. The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation
testing is to check if the system is still working in a new environment

C. The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation
testing is to check if the original defect has been fixed

D. The purpose of regression testing is to check if the new functionality is working, while the purpose of confirmation testing is to check if the original defect has been fixed.
Quiz Time
• Which one of the following is TRUE?

A. The purpose of regression testing is to check if the correction has been successfully implemented, while the
purpose of confirmation testing is to confirm that the correction has no side effects

B. The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation
testing is to check if the system is still working in a new environment

C. The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation testing is to check if the original defect has been fixed (correct answer)

D. The purpose of regression testing is to check if the new functionality is working, while the purpose of confirmation testing is to check if the original defect has been fixed.
Quiz Time
• Which of the following is most correct regarding the test level at
which functional tests may be executed?

A. Unit and integration


B. Integration and system
C. System and acceptance
D. All levels
Quiz Time
• Which of the following is most correct regarding the test level at
which functional tests may be executed?

A. Unit and integration


B. Integration and system
C. System and acceptance
D. All levels (correct answer)
Quiz Time
• Usability testing is an example of which type of testing?

A. Functional
B. Non-functional
C. Structural
D. Change-related
Quiz Time
• Usability testing is an example of which type of testing?

A. Functional
B. Non-functional (correct answer)
C. Structural
D. Change-related
Quiz Time
• You have been receiving daily builds from the developers. Even though they are documenting
the fixes they are including in each build, you are finding that the fixes either aren’t in the
build or are not working. What type of testing is best suited for finding these issues?

A. Unit Testing
B. System Testing
C. Confirmation Testing
D. Regression Testing
Quiz Time
• You have been receiving daily builds from the developers. Even though they are documenting
the fixes they are including in each build, you are finding that the fixes either aren’t in the
build or are not working. What type of testing is best suited for finding these issues?

A. Unit Testing
B. System Testing
C. Confirmation Testing (correct answer)
D. Regression Testing
Quiz Time
• During which level of testing should non-functional tests be
executed?

A. Unit and integration only


B. System testing only
C. Integration, system and acceptance only
D. Unit, integration, system and acceptance only
Quiz Time
• During which level of testing should non-functional tests be
executed?

A. Unit and integration only


B. System testing only
C. Integration, system and acceptance only
D. Unit, integration, system and acceptance only (correct answer)
Maintenance Testing
• Once deployed to production environments, software and systems need to be maintained

• Maintenance testing focuses on testing the changes to the system, as well as testing
unchanged parts that might have been affected by the changes.

• Maintenance can involve planned releases and unplanned releases (hot fixes)
Triggers for Maintenance

• Modification
• Migration
• Retirement
Impact Analysis
• Impact analysis evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change.

• Impact analysis can also help to identify the impact of a change on existing tests.

• The side effects and affected areas in the system need to be tested for regressions.

• Impact analysis may be done before a change is made, to help decide if the change should be made.
Quiz Time
• Which of the following should NOT be a trigger for maintenance testing?

A. Decision to test the maintainability of the software

B. Decision to test the system after migration to a new operating platform

C. Decision to test if archived data is possible to be retrieved

D. Decision to test after “hot fixes”


Quiz Time
• Which of the following should NOT be a trigger for maintenance testing?

A. Decision to test the maintainability of the software (correct answer)

B. Decision to test the system after migration to a new operating platform

C. Decision to test if archived data is possible to be retrieved

D. Decision to test after “hot fixes”
Quiz Time
• If impact analysis indicates that the overall system could be significantly
affected by system maintenance activities, why should regression testing be
executed after the changes?

A. To ensure the system still functions as expected with no introduced issues


B. To ensure no unauthorized changes have been applied to the system
C. To assess the scope of maintenance performed on the system
D. To identify any maintainability issues with the code
Quiz Time
• If impact analysis indicates that the overall system could be significantly
affected by system maintenance activities, why should regression testing be
executed after the changes?

A. To ensure the system still functions as expected with no introduced issues (correct answer)


B. To ensure no unauthorized changes have been applied to the system
C. To assess the scope of maintenance performed on the system
D. To identify any maintainability issues with the code
Chapter 2 In the Exam

Remember: 1 question · Understand: 4 questions

Topics covered:
• Software development and software testing
• Software development lifecycle models in context
• Test levels
• Functional testing
• Non-functional testing
• White-box testing
• Triggers for maintenance testing
• Impact analysis for maintenance
