Software Testing Life Cycle
Waterfall Model
The waterfall model is a sequential one, consisting of the following
process areas:
The requirements phase, in which the requirements for the software
are gathered and analyzed, to produce a complete and unambiguous
specification of what the software is required to do.
The architectural design (analysis) phase, where a software
architecture for the implementation of the requirements is designed
and specified, identifying the components within the software and
the relationships between the components.
The design phase, where the detailed implementation of each
component is specified.
The code and unit test phase, in which each component of
the software is coded and tested to verify that it faithfully
implements the detailed design.
The system integration and system test phase, in which the
software is integrated into the overall product and tested.
The acceptance testing phase, where tests are applied and
witnessed to validate that the software faithfully implements
the specified requirements.
Features of a Waterfall Model
A waterfall model is easy to follow.
It can be implemented for any size project.
Every stage has to be done separately at the right
time so you cannot jump stages.
Documentation is produced at every stage of a
waterfall model allowing people to understand what
has been done.
Testing is done at every stage.
Advantages of a Waterfall Model
A waterfall model helps find problems early, when they are
cheaper to fix than if they were found later.
Requirements are fixed up front and are not expected to change.
As everything is documented, a new team member can easily
understand what is to be done.
Implementers have to follow the design accurately.
Disadvantages of a Waterfall Model
If requirements change, the waterfall model may not work.
Many believe it is impossible to make one stage of the
project's life cycle perfect.
It is difficult to estimate time and cost for each stage of
the development process.
Constant testing of the design is needed.
Traditional Waterfall Model
STLC - Definition
The process of testing software in a well-planned, systematic way is
known as the software testing life cycle (STLC).
[Flow: Contract Signing → Requirement Analysis → Test Planning → Test Development]
STLC – Stages Involved
Contract Signing:
Process: A contract for testing the software is signed with the client.
Documents involved:
SRS
Test Deliverables
Test Metrics etc.
STLC – Stages Involved
Requirement Analysis:
Process: Analyzing the software to determine design and implementation methods.
STLC – Stages Involved
Test Planning:
Process: Planning how the testing process should flow:
Test Process Flow
Test Scope, Test Environment
Different Test phase and Test Methodologies
Documents Involved:
Master Test Plan, Test Scenario, SCM
STLC – Stages Involved
Test Development
Process:
Test Traceability Matrix and Test coverage
Test Scenarios Identification & Test Case preparation
Documents Involved:
Test Plan, RTM
Test cases
STLC – Stages Involved
Test Execution:
Process:
Executing Test cases
Testing Test Scripts
Documents Involved:
Test Cases
Bug report
STLC – Stages Involved
Defect Reporting
Process:
Defect logging
Assigning defect and fixing
Retesting
Defect closing
Documents involved:
Test report
Bug Report
STLC – Stages Involved
Product Delivery
Process:
After the product has undergone several rounds of testing, an
acceptance test (UAT) is performed by the user/client, wherein the
use cases are executed and the product is accepted to go live.
Test Metrics and process Improvements made
Build release
Receiving acceptance
Documents involved
Test summary reports
V Model
[V diagram: Requirements Definition ↔ Acceptance Test; Functional
System Design ↔ System Test; Technical System Design ↔ Integration
Test; Component Specification ↔ Unit/Component Test; Programming at
the point of the V.]
V Model
It describes the activities to be performed and the results that have to be produced
during product development.
The left side of the "V" represents the decomposition of requirements, and creation of
system specifications.
The V-Model improves project transparency and project control by specifying standardized
approaches and describing the corresponding results and responsible roles.
It permits an early recognition of planning deviations and risks and improves process
management, thus reducing the project risk.
As a standardized process model, the V-Model ensures that the results to be provided are
complete and have the desired quality.
The effort for the development, production, operation and maintenance of a system can be
calculated, estimated and controlled in a transparent manner by applying a standardized process
model.
The results obtained are uniform and easily retraced. This reduces the
acquirer's dependency on the supplier and the effort for subsequent
activities and projects.
The standardized and uniform description of all relevant elements and terms is the basis for the
mutual understanding between all stakeholders.
Thus, the frictional loss between user, acquirer, supplier and developer is reduced.
V Model topics
Systems Engineering and verification
The Systems Engineering Process (SEP) provides a path for improving the cost effectiveness of
complex systems as experienced by the system owner over the entire life of the system, from
conception to retirement.
It involves early and comprehensive identification of goals, a concept of operations that describes
user needs and the operating environment, thorough and testable system requirements, detailed
design, implementation, rigorous acceptance testing of the implemented system to ensure it meets
the stated requirements (system verification), measuring its effectiveness in addressing goals
(system validation), on-going operation and maintenance, system upgrades over time, and eventual
retirement.
All design elements and acceptance tests must be traceable to one or more system requirements
and every requirement must be addressed by at least one design element and acceptance test.
Such rigor ensures nothing is done unnecessarily and everything that is necessary is
accomplished.
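This traceability rule can be checked mechanically. Below is a minimal sketch of such a check; the requirement, design-element, and test IDs are invented for illustration and are not taken from any standard.

```python
# Minimal bidirectional traceability check: every requirement must be
# covered by at least one design element and one acceptance test, and
# every design element/test must trace back to some requirement.
requirements = {"R1", "R2", "R3"}                       # hypothetical IDs
design_elements = {"D1": {"R1"}, "D2": {"R2", "R3"}}    # element -> requirements
acceptance_tests = {"T1": {"R1", "R2"}, "T2": {"R3"}}   # test -> requirements

def traced(mapping):
    """Union of all requirement IDs referenced by a mapping."""
    return set().union(*mapping.values()) if mapping else set()

untraced_by_design = requirements - traced(design_elements)
untraced_by_test = requirements - traced(acceptance_tests)
orphans = {name for m in (design_elements, acceptance_tests)
           for name, reqs in m.items() if not reqs & requirements}

print("Requirements with no design element:", untraced_by_design or "none")
print("Requirements with no acceptance test:", untraced_by_test or "none")
print("Orphan design elements/tests:", orphans or "none")
```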
V Model topics
The specification stream
The development stream can consist (depending on the system type and the
development scope) of customization, configuration or coding.
Advantages
These are the advantages the V-Model offers over other systems
development models:
The users of the V-Model participate in the development and maintenance of The V-Model. A
change control board publicly maintains the V-Model.
The change control board meets anywhere from every day to weekly and processes all change
requests received during system development and test.
The V-Model provides concrete assistance on how to implement an activity and its work steps,
defining explicitly the events needed to complete a work step: each activity schema contains
instructions, recommendations and detailed explanations of the activity.
Limits
The following aspects are not covered by the V-Model; they must be
regulated in addition, or the V-Model must be adapted accordingly.
[Figure: test process flow. Test Planning (Test Leader: Test Plan;
the Test Manager or Test Leader prepares the initial plan), Test
Analysis & Design (Test Analyst, Test Designer: Test Data and Test
Scripts, manual or automation), Test Execution (Tester: Test Results,
Test Reports), Test Report & Evaluation (Test Leader: Final Test
Reports). (*) See more in Defects Workflow.]
Defects Workflow
[Flowchart: a defect found in the system is reviewed by the Test
Lead, Dev Lead, and PM. An ambiguous defect is assigned back to the
Tester for more information. If it is not really a defect, the
reviewers explain why and ask the Tester to close it. A valid defect
is assigned to a Developer to fix; if it cannot be fixed, approval is
sought from the PM/Leaders and the defect is left pending. A fixed
defect is checked in to the build and assigned to the Tester for
re-test; a passing re-test closes the defect, a failing one goes back
to the Developer.]
Test Report and Evaluation
• Test Manager or Test Leader will analyze defects in the defect
tracking system.
• Generate the Test Evaluation Summary and Defect Reports:
– To evaluate the test results and log change requests
– To calculate and deliver the test metrics
– To generate the test evaluation summary
• Determine if the test success criteria and test completion criteria
have been achieved.
Test Plan – What?
Derived from Test Approach, Requirements, Project
Plan, Functional Spec., and Design Spec
Details the project-specific Test Approach
Lists general (high-level) Test Case areas
Includes a testing Risk Assessment
Includes a preliminary Test Schedule
Lists resource requirements
Test Plan – Why?
Identify Risks and Assumptions up front to reduce
surprises later.
Communicate objectives to all team members.
Foundation for Test Spec, Test Cases, and ultimately
the Bugs we find.
Failing to plan = planning to fail.
Test Plan – Definition
The test strategy identifies multiple test levels, which
are going to be performed for the project. Activities at each
level must be planned well in advance and it has to be
formally documented. Based on the individual plans, the
individual test levels are carried out.
Forming a Test team
The capabilities of the testing team greatly affect the success, or failure, of the
testing effort.
An effective testing team includes a mixture of technical and domain expertise
relevant to the software problem.
Testing team should not only be technically proficient with the testing
techniques and tools necessary to perform the actual tests, but depending on
the complexity of the domain, a team should also include members who have
a detailed understanding of the problem domain.
The testing team must be properly structured, with defined roles and
responsibilities that allow testers to perform their functions with
minimal overlap and without uncertainty regarding which team member
should perform which duties.
Forming a Test team
Testing team includes –
Test manager
Test lead
Usability Test Engineer
Manual Test Engineer
Automated Test Engineer
Network Test Engineer
Test Environment Specialist
Security Test Engineer
TEST LEAD
Defining and implementing the role testing plays within the organizational structure.
Defining the scope of testing within the context of each release / delivery.
Deploying and managing the appropriate testing framework to meet the testing
mandate.
Implementing and evolving appropriate measurements and metrics.
To be applied against the Product under test.
To be applied against the Testing Team.
Planning, deploying, and managing the testing effort for any given engagement /
release.
Managing and growing Testing assets required for meeting the testing mandate:
Team Members
Testing Tools
Testing Process
Retaining skilled testing personnel.
TEST LEAD
The Test Lead must understand how testing fits into the organizational
structure; in other words, clearly define its role within the
organization. This is often accomplished by crafting a Mission
Statement or a defined Testing Mandate. For example:
"To prevent, detect, record, and manage defects within the context of a defined release."
It now becomes the task of the Test Lead to communicate and implement
effective managerial and testing techniques to support this "simple"
mandate. Expectations of your team, your peers (Development Lead,
Deployment Lead, and other leads) and your superior need to be set
appropriately given the timeframe of the release and the maturity of
the development team and testing team. These expectations are usually
defined in terms of functional areas deemed to be in Scope or out of
Scope. For example:
...
TEST LEAD
In Scope:
Security
Backup and Recovery
…
Unit Test Planning
Steps for Unit Testing are
Creation of Test Plan
Creation of Test Cases and the Test data
Creation of scripts to run the test cases wherever applicable
Execution of the test cases, once the code is ready
Fixing of the bugs, if present, and retesting of the code
Repetition of the test cycle until the unit is free from all types
of bugs (a minimal sketch of this cycle follows)
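To make the test-case and execution steps concrete, here is a minimal sketch using Python's built-in unittest module; the discount function is a hypothetical unit under test, invented for illustration.

```python
# Minimal unit test sketch using Python's built-in unittest module.
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # run the cases; rerun after each bug fix
```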
Unit Test Planning
It is a document that describes the various steps of how the tests
will be carried out, including:
List of things to be tested
Test environment
Test Strategy and approach
Test scope and assumptions
Language standards
Development documentation standards
Input test data
Initial conditions
Expected results and test log status
What to do after a test is successfully carried out, what to do if test fails and so on
Unit Test Planning
The Unit Test Plan activity can be initiated when:
The requirements for the module have been completed, if Black Box testing is used.
The Detailed Design for the module has been completed, if White Box testing is used.
The module has been developed and implemented, if Code Coverage testing is used.
The unit test plan's objective is to ensure that the particular
module under test works properly and performs all the desired
functions.
Interaction between modules and overall system performance is not
tested during this phase.
Test Cases
A Test Case is a commonly used term for a specific test.
This is usually the smallest unit of testing.
A Test Case consists of information such as test steps,
Verification steps, prerequisites, outputs, test environment,
etc.
It is a set of
- Inputs
- Execution preconditions, and
- Expected outcomes
Test Cases
Inputs may be described as the specific values, or as file names, tables,
database, and so on.
Execution preconditions are the conditions required for running the test,
for example, a certain state of a database, or a configuration of a hardware
device.
Expected outputs are the specified results to be produced by the code under
test.
Test cases are executed against the product software, and the
expected outputs/results are compared with the actual results to
determine whether the test objective for the given input(s) has
passed or failed.
Negative test cases consider the handling of invalid data, as well as scenarios
in which the precondition has not been satisfied.
Test Cases and Test Data
Test data are inputs that have been devised to test the system.
Test data is generated using various Black box and White box approaches
such as boundary value analysis, equivalence partitioning, etc.
A test case is a specification of inputs and expected outputs, plus a
statement of the function under test.
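As a sketch of how test data can be derived mechanically by one of the approaches named above, the helper below generates boundary-value inputs for a numeric range; the 1 to 100 range is illustrative only.

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary value analysis picks: just outside, on, and
    just inside each boundary of a valid numeric range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```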
Template for Test Case
A test case template should appear in the test documentation for the organization.
It is very difficult to create a template that fits all organizations.
A test case template usually includes the following fields:
Project name
Version date, version number, version author
Approval and distribution date
Test data
Types of Testing
Test case Number – Unique Identifier
Test case name: The name or title should contain the essence of the
test case, including the functional area and purpose of the test.
Test case steps: Each step should clearly state the navigation, data
and events required to accomplish the step.
Expected results: The expected behavior of the system after any test
case step that requires verification/validation. This could include
screen pop-ups, data updates, display changes, or any other
transaction on the system that is expected to occur when the test
case step is executed.
Actual Results, Comments, etc.
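One lightweight way to hold these fields is a structured record. The dataclass below is an illustrative sketch of the template above, not a standard format; the field names and sample values are invented.

```python
# Illustrative sketch: the test case template captured as a dataclass.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                      # unique identifier
    name: str                         # functional area and purpose
    steps: list[str]                  # navigation, data, events per step
    expected_results: list[str]       # behavior to verify after each step
    test_data: dict = field(default_factory=dict)
    actual_results: list[str] = field(default_factory=list)
    comments: str = ""

tc = TestCase(
    case_id="TC-042",
    name="Login - valid credentials",
    steps=["Open login page", "Enter valid user/password", "Click Login"],
    expected_results=["Page loads", "Fields accept input", "Dashboard shown"],
)
print(tc.case_id, "-", tc.name)
```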
Sample Test Case
Test Case for College website
Test case design strategies black box approach
The ability to develop effective test cases is important to an
organization evolving toward a higher-quality testing process. If
test cases are effective, there is a greater probability of detecting
defects, a more efficient use of organizational resources, a higher
probability for test reuse, and closer adherence to testing and
project schedules and budgets.
The Two basic Testing Strategies
Test case design strategies black box approach
In order to design effective test cases, two basic strategies are
used. These are called the black box and white box strategies. The
approaches are summarized in the figure.
Black box testing focuses on the functional requirements of the
software. It enables the software engineer to derive sets of input
conditions that will fully exercise all the functional requirements
of a program.
Using the black box approach, a tester considers the software under
test to be an opaque box. There is no knowledge of its inner
structure; the tester only has knowledge of what it does.
The size of the software under test using this approach can vary from
a simple module, member function, or subsystem to a complete system.
Test case design strategies black box approach
Random testing
Equivalence class partitioning and
Boundary value analysis
Positive and negative testing
User documentation testing
Domain testing
Requirement based testing
Random Testing
Each software module or system has an input domain from which test
input data is selected.
If a tester randomly selects inputs from the domain, this is called
random testing.
For example, if the valid input domain for a module is all positive
integers between 1 and 100, a test using this approach would randomly
select some values from that domain; for example, the values 55, 66,
and 14 might be chosen.
Given this approach, some of the issues that remain open are:
Random Testing
Are the three values sufficient to show that the module meets its
specification? Should additional or fewer values be used to make the
most effective use of resources?
Are there any input values, other than those selected, more likely to
reveal defects?
Should any values outside the valid domain be used as test inputs?
For example, should test data include floating point values, negative
values, or integer values greater than 100?
Random Testing
Use of random test inputs saves some of the time and effort required
for test input selection, but it has very little chance of producing
an effective set of test data.
There are tools that generate random test data for stress tests. This
type of testing can be very useful, especially at the system level.
In error guessing, the tester has to guess what faults might occur
and design tests to represent them.
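A minimal sketch of random test selection over the 1 to 100 domain discussed above; the integer square-root module is a hypothetical unit under test, invented for illustration.

```python
# Random testing sketch: draw inputs uniformly from the valid domain.
import random

def unit_under_test(n: int) -> int:
    """Hypothetical module: integer square root for 1 <= n <= 100."""
    return int(n ** 0.5)

random.seed(7)                                  # reproducible run
domain = range(1, 101)                          # valid input domain
for value in random.sample(list(domain), k=3):  # e.g. three random picks
    result = unit_under_test(value)
    # Oracle: the result must satisfy the square-root property.
    assert result * result <= value < (result + 1) ** 2, value
    print(f"input={value} -> output={result}")
```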
Equivalence Class Partitioning
Equivalence partitioning is a black-box testing method that divides
the input domain of a program into classes of data from which test
cases can be derived.
It defines test cases that uncover classes of errors, thereby
reducing the total number of test cases that must be developed.
An equivalence class represents a set of valid or invalid states for
input conditions. Typically, an input condition is either a specific
numeric value, a set of related values, or a Boolean condition.
Equivalence Class Partitioning
Equivalence classes may be defined according to the following
guidelines:
1. If an input condition specifies a range, then select one valid
equivalence class that covers the range and two invalid equivalence
classes, one on each side of the range.
For example, suppose the specification for an insurance module says
that an input, age, lies in the range 25 to 45; then select one valid
equivalence class that includes all values from 25 to 45, a second
equivalence class that consists of all values less than 25, and a
third equivalence class that consists of all values greater than 45.
Equivalence Class Partitioning
2. If an input condition requires a specific value, then select one
valid equivalence class that includes the value and two invalid
equivalence classes on either side of it.
Equivalence Class Partitioning
3. If an input condition specifies a set of input values, then select
one valid equivalence class that contains all the members of the set
and one invalid equivalence class for any value outside the set.
For example, if the specification for a paint module states that the
colors red, green and blue are allowed as inputs, then select one
valid equivalence class that includes the set red, green and blue,
and one invalid equivalence class for all other inputs.
Equivalence Class Partitioning
4. If an input condition specifies a "must be" condition, then select
one valid equivalence class to represent the "must be" condition and
one invalid equivalence class that does not include the "must be"
condition.
5. If an input condition is Boolean, then select one valid and one
invalid equivalence class.
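Applying guideline 1 to the insurance-age example above, here is a minimal sketch with one representative test input per equivalence class; the age_is_valid check is a hypothetical stand-in for the insurance module.

```python
# Equivalence class partitioning sketch for the insurance-age example:
# valid class 25..45, invalid classes below 25 and above 45.
def age_is_valid(age: int) -> bool:
    """Hypothetical insurance module input check."""
    return 25 <= age <= 45

# One representative value per equivalence class.
cases = [
    (35, True),    # valid class: 25..45
    (10, False),   # invalid class: age < 25
    (60, False),   # invalid class: age > 45
]
for age, expected in cases:
    assert age_is_valid(age) == expected, f"failed for age={age}"
print("all equivalence-class representatives passed")
```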
User Documentation Testing
User documentation consists of: user manuals, user guides,
installation and setup instructions, online help, readme file(s),
software release notes, and many more documents that are provided to
the end user along with the software to help them understand the
software system.
Defective documentation can cause systems to be improperly changed or
system output to be improperly used.
Both of these errors can lead to incorrect system results.
The proposed test method for documentation is "documentation testing".
User Documentation Testing
When a product is upgraded, the corresponding product documentation
should also get updated simultaneously to avoid defective documentation.
User documentation testing focuses on ensuring that what is in the
document exactly matches the product behavior, by verifying it screen
by screen, transaction by transaction, and report by report.
Documentation testing also checks for the language aspects of the document
such as spell check and grammar.
Defects found in user documentation need to be tracked properly along
with the other defect information, such as: defect/comment
description, paragraph/page number, document version number
reference, name of the reviewer, name of the author, and priority and
severity of the defect.
This defect information is passed to the corresponding author for defect fixing
and closure of the defect.
Benefits Of UDT
User Documentation testing helps in highlighting problems unnoticed during
reviews.
Good documentation ensures reliability and consistency of documentation
and product, thus minimizing the possible defects reported by customers.
It lowers support costs, as it reduces the time taken for each
support call by directing customers to the relevant section of the
manual.
New programmers and testers can use the documentation to learn the
external functionality of the product.
Customers need less training and can proceed more quickly to product
usage. Thus, high-quality documentation can reduce overall training
costs for organizations.
Domain Testing
White box testing is performed looking at the program code.
Black box testing is performed without looking at the program code but
looking at the specifications.
Domain testing is the next level of testing in which we do not look at the
specification of a software product but are testing the product, purely based
on the domain knowledge.
Domain testing requires business domain knowledge rather than the
knowledge of what the software specification contains or how the software is
written.
As a tester moves from white box testing through black box testing to
domain testing, as shown in the figure, the focus shifts more to the
product's external behavior and away from the details of the software
product.
Domain Testing
Domain Testing
For domain testing, organizations prefer to hire testers from the
domain area (such as banking, insurance, and so on) and train them in
software, rather than take software professionals and train them in
the business domain.
This reduces the effort and time required for training the testers in
domain testing and also increases the effectiveness of domain testing.
Domain testing is characterized by how well an individual test
engineer understands the operation of the system.
If a tester does not understand the system process, it would be very
difficult for the tester to test the application.
Domain testing involves testing the product without going through the
logic built into the product.
Test cases are written based on the domain knowledge.
Domain Testing
Let us consider the example of ATM cash withdrawal functionality.
The user performs the following actions :-
Go to the ATM
Put ATM card inside
Enter the correct PIN
Choose cash withdrawal
Enter amount
Take the cash
Exit and Retrieve the card
Domain testing
In the example, a domain tester is not concerned with testing
everything in the design; rather, he or she is interested in testing
everything in the business flow.
The tester is concerned about whether the user got the right amount
or not.
Typical black box approaches are expected to be working before the
start of domain testing.
Domain testing is done after all the components are integrated and
after the product has been tested using other black box approaches.
Hence the focus of domain testing is "domain intelligence". A sketch
of such a business-flow check follows.
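A domain-level check asserts only the business outcome, not the internal logic. The Account class below is a hypothetical stand-in for the system under test, invented for illustration.

```python
# Domain testing sketch: verify the business outcome of a withdrawal,
# not the internal design. Account is a hypothetical system stand-in.
class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> float:
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal amount")
        self.balance -= amount
        return amount            # cash handed to the user

account = Account(balance=500.0)
cash = account.withdraw(200.0)

# The domain tester's question: did the user get the right amount,
# and is the remaining balance consistent with the business rule?
assert cash == 200.0
assert account.balance == 300.0
print("withdrawal flow behaves as the business expects")
```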
Mutation Testing
Many test data generation approaches are based on code behavior and
code structure.
Mutation testing is another approach to test data generation that
requires knowledge of the code structure, but it is classified as a
fault-based testing approach.
Mutation testing is used to evaluate the effectiveness of the testing
applied to a system.
It is also called defect seeding.
Mutation testing starts with a code component, its associated test
cases, and the test results.
The original code component is modified in a simple way to provide a
set of similar components that are called mutants.
Each mutant contains a fault as a result of the modification.
Mutation Testing
The original test data is then run against the mutants.
If the test data reveals the fault in a mutant by producing a
different output as a result of execution, then the mutant is said to
be killed.
If the mutants produce the same output as the original, then the test
data is not adequate, i.e. it is not capable of revealing the defects.
The tester then must develop additional test data to reveal the fault
and kill the mutants.
Mutations are simple changes to the original code component, for
example: constant replacement, arithmetic operator replacement, data
statement alteration, statement deletion, and logical operator
replacement.
There are existing tools that will easily generate mutants. A
hand-made sketch of the idea follows.
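The toy functions below are invented for illustration: the mutant replaces <= with < (a relational-operator mutation), and the test data kills it only if some input produces a differing output.

```python
# Mutation testing sketch: the mutant swaps <= for < in the original.
def original(x: int) -> str:
    return "low" if x <= 10 else "high"

def mutant(x: int) -> str:        # relational operator replacement
    return "low" if x < 10 else "high"

test_data = [5, 20]               # initial suite: does NOT kill the mutant
test_data_improved = [5, 10, 20]  # adds the boundary input 10

def killed(inputs) -> bool:
    """A mutant is killed if any input yields a differing output."""
    return any(original(x) != mutant(x) for x in inputs)

print("initial suite kills mutant:", killed(test_data))            # False
print("improved suite kills mutant:", killed(test_data_improved))  # True
```

Because the initial suite misses the boundary input 10, the tester must add it to reveal the seeded fault, which is exactly the feedback loop the slides describe.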
Defect Life Cycle
Defect Life Cycle (Bug Life cycle) is the journey of a defect
from its identification to its closure.
The Life Cycle varies from organization to organization
and is governed by the software testing process the
organization or project follows and/or the Defect tracking
tool being used.
Nevertheless, the life cycle in general resembles the
following:
Bug Life Cycle
Status       Alternative Status
NEW          ----------
ASSIGNED     OPEN
DEFERRED     ----------
DROPPED      REJECTED
COMPLETED    FIXED, RESOLVED, TEST
REASSIGNED   REOPENED
CLOSED       VERIFIED
Defect Status Explanation
NEW: Tester finds a defect and posts it with the status NEW.
This defect is yet to be studied/approved. The fate of a NEW defect is one of
ASSIGNED, DROPPED and DEFERRED.
ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect
and if it is found to be valid it is assigned to a member of the Development
Team.
The assigned Developer’s responsibility is now to fix the defect and have it
COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In
that case, a defect can be open yet unassigned.
DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in
upcoming releases instead of the current release it is DEFERRED.
This defect is ASSIGNED when the time comes.
Defect Status Explanation
DROPPED / REJECTED: Test / Development/ Project lead studies the
NEW defect and if it is found to be invalid, it is DROPPED /
REJECTED. Note that the specific reason for this action needs to be
given.
COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the defect
that is ASSIGNED to him or her. Now, the ‘fixed’ defect needs to be
verified by the Test Team and the Development Team ‘assigns’ the
defect back to the Test Team. A COMPLETED defect is either CLOSED,
if fine, or REASSIGNED, if still not fine.
If a Developer cannot fix a defect, some organizations may offer the
following statuses:
Won’t Fix / Can’t Fix: The Developer will not or cannot fix the defect
due to some reason.
Defect Status Explanation
Can’t Reproduce: The Developer is unable to reproduce the defect.
Need More Information: The Developer needs more information on the
defect from the Tester.
REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ defect is
in fact not fixed or only partially fixed, it is reassigned to the Developer
who ‘fixed’ it.
A REASSIGNED defect needs to be COMPLETED again.
CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is
indeed fixed and is no longer of any concern, it is CLOSED /
VERIFIED. This is the happy ending. The sketch below models these
transitions.
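The life cycle above can be sketched as a small state machine. The transition table is an illustrative reading of these slides, not any defect tracking tool's actual schema.

```python
# Defect life cycle sketch: allowed status transitions, as read from
# the slides above (illustrative, not a specific tool's schema).
TRANSITIONS = {
    "NEW":        {"ASSIGNED", "DROPPED", "DEFERRED"},
    "ASSIGNED":   {"COMPLETED", "DEFERRED"},
    "DEFERRED":   {"ASSIGNED"},
    "COMPLETED":  {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},
    "DROPPED":    set(),          # terminal
    "CLOSED":     set(),          # terminal: the happy ending
}

def move(status: str, new_status: str) -> str:
    """Apply one status change, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

status = "NEW"
for step in ("ASSIGNED", "COMPLETED", "REASSIGNED", "COMPLETED", "CLOSED"):
    status = move(status, step)
    print("defect is now", status)
```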
Defect Life Cycle Implementation Guidelines
Make sure the entire team understands what each defect status exactly
means. Also, make sure the defect life cycle is documented.
Ensure that each individual clearly understands his/her responsibility as
regards each defect.
Ensure that enough detail is entered in each status change. For example,
do not simply DROP a defect but provide a reason for doing so.
If a defect tracking tool is being used, avoid entertaining any ‘defect
related requests’ without an appropriate change in the status of the
defect in the tool.
Do not let anybody take shortcuts. Or else, you will never be able to get
up-to-date defect metrics for analysis.
Defect Life Cycle
Bug can be defined as the abnormal behavior of the software. No
software exists without a bug.
The elimination of bugs from the software depends upon the efficiency
of testing done on the software.
A bug is a specific concern about the quality of the Application under
Test (AUT).
In software development process, the bug has a life cycle.
The bug should go through the life cycle to be closed. A specific life
cycle ensures that the process is standardized.
The bug attains different states in the life cycle.
Defect Life Cycle
The life cycle of the bug can be shown diagrammatically as follows:
The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be
“NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester
approves that the bug is genuine and changes the state to "OPEN".
3. Assign: Once the lead changes the state to "OPEN", he assigns the
bug to the corresponding developer or developer team. The state of
the bug is now changed to "ASSIGN".
Description of Various Stages:
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing
team for next round of testing. Before he releases the software with bug fixed,
he changes the state of bug to “TEST”. It specifies that the bug has been fixed
and is released to testing team.
5. Deferred: A bug changed to the deferred state is expected to be
fixed in a future release. A bug may be deferred for many reasons:
the priority of the bug may be low, there may be a lack of time for
the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug.
Then the state of the bug is changed to “REJECTED”.
Description of Various Stages:
7. Duplicate: If the bug is repeated twice or the two bugs mention the same
concept of the bug, then one bug status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester
tests the bug. If the bug is not present in the software, he approves that the bug
is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the
tester changes the status to “REOPENED”. The bug traverses the life cycle once
again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“CLOSED”. This state means that the bug is fixed, tested and approved.
Defect Life Cycle
While defect prevention is much more effective and efficient in
reducing the number of defects, most organizations conduct defect
discovery and removal instead.
Discovering and removing defects is an expensive and inefficient
process. It is much more efficient for an organization to conduct
activities that prevent defects.
Defect Life Cycle
A sample guideline for assignment of Priority Levels during the product test phase includes:
1. Critical / Show Stopper - An item that prevents further testing of
the product or function under test can be classified as a Critical
bug. No workaround is possible for such bugs. Examples include a
missing menu option or a security permission required to access a
function under test.
2. Major / High - A defect that does not function as
expected/designed, or causes other functionality to fail to meet
requirements, can be classified as a Major bug. A workaround can be
provided for such bugs. Examples include inaccurate calculations or
the wrong field being updated.
3. Average / Medium - Defects which do not conform to standards and
conventions can be classified as Medium bugs. Easy workarounds exist
to achieve functionality objectives. Examples include matching visual
and text links which lead to different end points.
4. Minor / Low - Cosmetic defects which do not affect the
functionality of the system can be classified as Minor bugs.
Defect Severity
"A classification of a software error or fault based on an evaluation of the degree of
impact that error or fault on the development or operation of a system (often used to
determine whether or when a fault will be corrected)."
The severity framework for assigning defect criticality that has proven most useful in
actual testing practice is a five level scale. The criticality associated with each level is
based on the answers to several questions.
First, it must be determined if the defect resulted in a system failure. ANSI/IEEE Std
729-1983 defines a failure as,
"The termination of the ability of a functional unit to perform its required function."
Fourth, it must be determined if the system can operate reliably with the
defect present if it is not manifested as a failure.
The following five level scale of defect criticality addresses the these questions
90
Defect Severity
The five Levels are:
1. Critical
2. Major
3. Average
4. Minor
5. Exception
1. Critical - The defect results in the failure of the complete software system, of a
subsystem, or of a software unit (program or module) within the system.
2. Major - The defect results in the failure of the complete software
system, of a subsystem, or of a software unit (program or module)
within the system. There is no way to make the failed component(s)
work; however, there are acceptable processing alternatives which
will yield the desired result.
Defect Severity
3. Average - The defect does not result in a failure, but causes the
system to produce incorrect, incomplete, or inconsistent results, or
the defect impairs the system's usability.
4. Minor - The defect does not cause a failure, does not impair
usability, and the desired processing results are easily obtained by
working around the defect.
Defect Severity
In addition to the defect severity levels defined above, a defect
priority level can be used with the severity categories to determine
the immediacy of repair. A five-level repair priority scale has also
been used in common testing practice. The levels are:
1. Resolve Immediately
2. Give High Attention
3. Normal Queue
4. Low Priority
5. Defer
Defect Severity
1. Resolve Immediately - Further development and/or testing cannot
occur until the defect has been repaired. The system cannot be used
until the repair has been effected.
2. Give High Attention - The defect must be resolved as soon as
possible because it is impairing development and/or testing
activities. System use will be severely affected until the defect is
fixed.
3. Normal Queue - The defect should be resolved in the normal course
of development activities. It can wait until a new build or version
is created.
4. Low Priority - The defect is an irritant which should be repaired,
but which can be repaired after more serious defects have been fixed.
5. Defer - The defect repair can be put off indefinitely. It can be
resolved in a future major system revision or not resolved at all.
Test Execution
Software Testing Fundamentals
Testing objectives include
Testing is a process of executing a program with the intent of finding an
error
A good test case is one that has a high probability of finding an as yet
undiscovered error
A successful test is one that uncovers an as yet undiscovered error
When Testing should start?
Testing early in the life cycle reduces errors. Test deliverables are
associated with every phase of development. The goal of the software
tester is to find bugs, find them as early as possible, and make sure
they are fixed.
The number one cause of software bugs is the specification.
The next largest source of bugs is the design.
When to Stop Testing?
This can be difficult to determine.
Many modern software applications are so complex, and run in such an
interdependent environment, that complete testing can never be done.
Test Execution
Testing of an application includes:
Unit Testing
Integration testing
System Testing
Acceptance testing
These are the functional testing levels; other functional,
non-functional, performance and other testing methods can also be
applied to the software.
Test Execution – Unit testing
The unit test plan is the overall plan to carry out the unit test activities. The
lead tester prepares it and it will be distributed to the individual testers
Basic input/output of the units along with their basic functionality will be
tested
Input units will be tested for format, alignment, accuracy and totals.
The UTP will clearly give the rules of what data types are present in
the system, their format and their boundary conditions.
Testing of the screens, files, database etc. is to be given in proper
sequence.
Test Execution – Integration testing
The integration test plan is the overall plan for carrying out the
activities in the integration test level.
This section clearly specifies the kinds of interfaces that fall
under the scope of testing: internal and external interfaces, with
request and response, are to be explained.
Two approaches practiced are Top-Down and Bottom-Up integration.
Done correctly, the testing activities will lead to the product,
slowly building the product unit by unit and then integrating the
units.
Test Execution – System testing
The system test plan is the overall plan for carrying out the system
test level activities.
System testing is based on the requirements.
All requirements are to be verified in the scope of system testing.
The requirements can be grouped in terms of functionality.
Based on this, there may also be priorities among the functional
groups.
Apart from this, any special testing to be performed is also stated
here.
Test Execution – Non-functional testing
Non-functional testing includes:
Installation testing – Installation environment, practical obstacles, etc.
Configuration testing – How well the product works with a broad range
of hardware and software configurations.
Test Execution – Performance testing
Performance testing includes:
Load testing – Testing with the intent of determining how well the
product handles competition for system resources, which may come in
the form of network traffic, CPU utilization or memory allocation.
Stress testing – Testing with the intent of determining how well a
product performs when a load is placed on the system resources that
nears and then exceeds capacity.
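As a toy sketch of load generation (not a real load-testing tool; the service function is a hypothetical stand-in for the system under test), concurrent workers can drive the system while response times are recorded:

```python
# Toy load-testing sketch: concurrent workers call a hypothetical
# service and response times are collected for analysis.
import time
from concurrent.futures import ThreadPoolExecutor

def service(n: int) -> int:
    """Hypothetical system under test."""
    time.sleep(0.01)        # simulate work
    return n * 2

def timed_call(n: int) -> float:
    """Invoke the service once and return the elapsed time."""
    start = time.perf_counter()
    service(n)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:   # simulated users
    latencies = list(pool.map(timed_call, range(100)))

print(f"max latency: {max(latencies):.4f}s, "
      f"avg latency: {sum(latencies)/len(latencies):.4f}s")
```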
Bug/Defect Management
BUG LIFE CYCLE - What is a bug?
A software bug is an error, flaw, mistake, failure, or fault in a
computer program that prevents it from behaving as intended.
BUG LIFE CYCLE
What is a Bug Life Cycle?
In software testing, the term life cycle refers to the various stages
that a defect/bug assumes over its life.
BUG LIFE CYCLE
Stages involved in Bug Life Cycle
The different stages involved in a bug life cycle are as follows:
Finding Bugs
Reporting/ Documentation
Fixing
Retesting
Closing
BUG LIFE CYCLE
Stages explained
1. Finding Bugs:
Software Tester finds bug while testing.
2. Reporting/ Documentation:
BUG LIFE CYCLE
Stages explained Continued…
3. Fixing:
Once the bug is assigned to the developer, he fixes the bug.
4. Retesting:
The tester then performs a regression test to confirm that the bug
is indeed fixed.
5. Closing:
If the bug is fixed, then the tester closes the bug.
Here the bug then enters its final state, the closed state.
Different status of a Bug
Description of Various Status of a bug
New: When the bug is posted for the first time, its state will be “NEW”. This
means that the bug is not yet approved.
Open: After a tester has posted a bug, the lead of the tester
approves that the bug is genuine and changes its state to "OPEN".
Assign: Once the lead changes the state as “OPEN”, he assigns the bug to
corresponding developer or developer team. The state of the bug now is
changed to “ASSIGN”.
Test: Once the developer fixes the bug, he assigns the bug to the testing
team for retesting. Before he releases the software with bug fixed, he changes
the state of bug to “TEST”. It specifies that the bug has been fixed and is
released to testing team.
Description of Various Status of a bug
Deferred: A bug changed to the deferred state is expected to be fixed
in a future release.
Rejected: If the developer feels that the bug is not genuine, he
rejects the bug and its state is changed to "REJECTED".
Duplicate: If the bug is repeated twice, or two bugs mention the same
concept, then one bug's status is changed to "DUPLICATE".
Description of Various Status of a bug
Verified: Once the bug is fixed and the status is changed to “TEST”, the
tester tests the bug. If the bug is not present in the software, he approves that
the bug is fixed and changes the status to “VERIFIED”.
Reopened: If the bug still exists even after the bug is fixed by the
developer, the tester changes the status to “REOPENED”. The bug traverses
the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels
that the bug no longer exists in the software, he changes the status of the bug
to “CLOSED”. This state means that the bug is fixed, tested and approved.
Severity of a Bug
It indicates the impact each defect has on the software and on the
testing effort.
Priority Levels of a Bug
Critical:
An item that prevents further testing of the product or function
under test.
Major / High:
A defect that does not function as expected/designed, or causes other
functionality to fail to meet requirements.
Priority Levels of a Bug
Average / Medium:
Defects which do not conform to standards and conventions can be
classified as Medium bugs.
Minor / Low:
Cosmetic defects which do not affect the functionality of the system
can be classified as Minor bugs.
Various Bug tracking tools
The various bug tracking tools available are:
Quality Center® – from HP
Product Delivery
Product Delivery -Test Deliverables
Test Traceability Matrix
Test Plan
Testing Strategy
Test Cases (for functional testing)
Test Scenarios (for non-functional testing)
Test Scripts
Test Data
Test Results
Test Summary Report
Release Notes
Tested Build
Product Delivery - Test Metrics
Measuring the correctness of the testing process with measurable
parameters is known as test metrics.
Product Delivery - Test Metrics
There are several test metrics identified as part of the
overall testing activity in order to track and measure the
entire testing process.
These test metrics are collected at each phase of the testing life
cycle/SDLC and analyzed, and appropriate process improvements are
determined and implemented. The metrics should be constantly
collected and evaluated as a parallel activity together with testing,
for both manual and automated testing, irrespective of the type of
application.
Product Delivery - Test Metrics - Classification
1. Project Related Metrics – such as
Test Size,
# of Test Cases tested per day –Automated (NTTA)
# of Test Cases tested per day –Manual (NTTM)
# of Test Cases created per day – Manual (TCED)
Total number of review defects (RD)
Total number of testing defects (TD) etc.
Product Delivery – Test Metrics – Classification
2. Process Related Metrics – such as
Schedule Adherence (SA)
Effort Variance (EV)
Schedule Slippage (SS)
Test Cases and Scripts Rework Effort, etc.
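A sketch of how two of these process metrics are commonly computed; the formulas are conventional definitions rather than ones given on these slides, and the input figures are invented for illustration.

```python
# Sketch: common formulas for two process metrics (conventional
# definitions; the figures below are invented for illustration).
planned_effort, actual_effort = 120.0, 138.0        # person-hours
planned_days, actual_days = 10, 12

effort_variance = (actual_effort - planned_effort) / planned_effort * 100
schedule_slippage = (actual_days - planned_days) / planned_days * 100

print(f"Effort Variance (EV): {effort_variance:.1f}%")      # 15.0%
print(f"Schedule Slippage (SS): {schedule_slippage:.1f}%")  # 20.0%
```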
Test Execution – Acceptance testing
There is no specific clue on the way the client will carry out the
testing, since the client performs this test.
It will not differ much from the system testing.
This is just one level of testing done by the client for the overall
product, and it includes test cases covering the unit and integration
test level details.