5 Testing
Background
Main objectives of a project: High Quality & High Productivity (Q&P)
Quality has many dimensions: reliability, maintainability, interoperability, etc.
Reliability is perhaps the most important
Reliability: the probability that the software does not fail
More defects => more chances of failure => lower reliability
Hence the quality goal: have as few defects as possible in the delivered software
Faults & Failure
Failure: a software failure occurs if the behavior of the software is different from the expected/specified behavior
Fault: the cause of a software failure; fault = bug = defect
A failure implies the presence of defects
A defect has the potential to cause failure
The definition of a defect is environment- and project-specific
What is this?
A failure? An error? A fault?
We need to specify the desired behavior first!
[Figures: an erroneous state ("error") and an algorithmic fault]
How do we deal with errors and faults?
Verification? Modular redundancy? Declaring the bug as a feature? Patching?
Who does testing?
• Software Tester
• Software Developer
• Project Lead/Manager
• End User
What is testing? According to the ANSI/IEEE 1059 standard, testing can be defined as "a process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item".
When to Start Testing
An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can start from the Requirements Gathering phase and continue until the deployment of the software. The exact span also depends on the development model being used.
When to Stop Testing
Testing is a never-ending process, and no one can say that any software is 100% tested.
The following aspects should be considered when deciding to stop testing:
Testing deadlines
Completion of test case execution
Completion of functional and code coverage to a certain point
Bug rate falls below a certain level and no high-priority bugs are identified
Management decision
Verification Vs Validation
Verification: are we building the product right? It checks that the software conforms to its specification, typically through static activities such as reviews and inspections.
Validation: are we building the right product? It checks that the software meets the user's actual needs, typically by executing the software, i.e., testing.
Difference between Audit and Inspection
Audit: as per IEEE, a review of documented processes to check whether the organization implements and follows those processes. Types of audit include the legal compliance audit, internal audit, and system audit.
Inspection: a formal evaluation technique in which work products (such as requirements, designs, or code) are examined in detail by a person or group other than the author, to detect faults and violations of development standards.
Difference between Testing and Debugging
Testing: the identification of bugs/errors/defects in the software without correcting them. Professionals with a quality assurance background are normally involved in identifying bugs. Testing is performed in the testing phase.
Debugging: the process of identifying, isolating, and fixing the defects that testing reveals. It is normally done by the developers who wrote the code, and can take place in any phase of development.
Testing takes creativity
To develop an effective test, one must have:
Detailed understanding of the system
Knowledge of the testing techniques
Skill to apply these techniques in an effective and efficient manner
Testing is done best by independent testers
We often develop a certain mental attitude that the program should work in a certain way, when in fact it does not
Programmers often stick to the data set that makes the program work: "Don't mess up my code!"
A program often does not work when tried by somebody else; don't let this be the end user
Role of Testing
Reviews are human processes and cannot catch all defects
Hence there will be requirement defects, design defects, and coding defects in the code
These defects have to be identified by testing
Therefore testing plays a critical role in ensuring quality
All defects remaining from before, as well as new ones introduced, have to be identified by testing
Detecting defects in Testing
During testing, a program is executed with a set of test cases
Failure during testing => defects are present
No failure => confidence grows, but we cannot say "defects are absent"
Defects are detected through failures
To detect defects, we must cause failures during testing
Testing Types
Manual Testing: testing the software manually, i.e., without using any automated tool or script.
The tester takes on the role of an end user and tests the software to identify any unexpected behavior or bugs.
Testers use test plans, test cases, or test scenarios to test the software and to ensure the completeness of testing.
Manual testing also includes exploratory testing, where testers explore the software to identify errors in it.
Testing Types…
Automation Testing: also known as "test automation"; the tester writes scripts and uses other software to test the software under test. This automates a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were performed manually.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage and improves accuracy.
Automation Testing…
What to automate: it is not possible to automate everything in the software. However, areas where users make transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.
Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.
Automation Testing…
When to automate: test automation should be used after considering the following aspects of the software:
Large and critical projects
Projects that require testing the same areas frequently
Requirements that do not change frequently
Accessing the application for load and performance with many virtual users
Software that is stable with respect to manual testing
Availability of time
Automation Testing…
How to automate:
Identify areas within the software for automation
Select the appropriate tool for test automation
Write test scripts
Develop test suites
Execute the scripts
Create result reports
Identify any potential bugs or performance issues
Automation tools:
HP QuickTest Professional
Selenium
IBM Rational Functional Tester
Testing Anywhere
WinRunner
LoadRunner
Visual Studio Test Professional
Test Oracle
To check whether a failure has occurred when executing a test case, we need to know the correct behavior
I.e., we need a test oracle, which is often a human
A human oracle makes each test case expensive, as someone has to check the correctness of its output
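As a minimal illustration of an automated oracle, the sketch below checks a hypothetical square-root routine against a trusted reference implementation; the function names are assumptions for illustration, and in practice the oracle is often a human reading the spec.

```python
import math

# Unit under test (a hypothetical routine).
def my_sqrt(x: float) -> float:
    return x ** 0.5

# The oracle here is a trusted reference implementation: for each
# test case it supplies the expected output to compare against.
def test_my_sqrt():
    for x in [0.0, 1.0, 2.0, 9.0, 1e6]:
        expected = math.sqrt(x)                  # the oracle's verdict
        assert abs(my_sqrt(x) - expected) < 1e-9, f"failed for x={x}"

test_my_sqrt()
```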
Role of Test cases
Ideally, we would like the following from test cases:
No failure implies "no defects" or "high quality"
If defects are present, then some test case causes a failure
The psychology of testing is important
The aim should be to 'reveal' defects (not to show that the program works!)
Test cases must be "destructive"
The role of test cases is clearly very critical
Only if the test cases are "good" can these properties hold
Test case design
During test planning, we have to design a set of test cases that will detect the defects present
Some criteria are needed to guide test case selection
Two approaches to design test cases:
functional or black box
structural or white box
Both are complementary; we discuss a few approaches/criteria for both
Black Box testing
The software is tested against its specifications, without using any knowledge of its internal structure. One common technique is equivalence class partitioning: the input domain is divided into classes such that the software is expected to behave the same for every value within a class.
Equivalence class…
Once equivalence classes are selected for each of the inputs, test cases have to be selected
Select each test case to cover as many valid equivalence classes as possible
Or, have a test case that covers at most one valid class for each input
Plus a separate test case for each invalid class
Example
A function of the software application accepts a 10-digit mobile number.
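A sketch of the equivalence classes and the resulting test cases for this example; the validator is a hypothetical stand-in for the function under test.

```python
# Hypothetical validator for the 10-digit mobile number input.
def is_valid_mobile(s: str) -> bool:
    return len(s) == 10 and s.isdigit()

# One test case per class: one valid class (exactly 10 digits)
# and one for each invalid class.
test_cases = [
    ("9876543210", True),     # valid: exactly 10 digits
    ("987654321", False),     # invalid: fewer than 10 digits
    ("98765432100", False),   # invalid: more than 10 digits
    ("98765abcde", False),    # invalid: contains non-digit characters
]
for number, expected in test_cases:
    assert is_valid_mobile(number) == expected
```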
Boundary value analysis
Test cases are chosen at and around the boundaries of the equivalence classes (at the boundary, just below it, and just above it), since defects tend to cluster at boundaries.
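Continuing the mobile-number example, a boundary value sketch tests just below, at, and just above the length boundary (again with an illustrative validator):

```python
def is_valid_mobile(s: str) -> bool:        # same illustrative validator
    return len(s) == 10 and s.isdigit()

# Lengths 9, 10, 11: just below, at, and just above the boundary.
for length, expected in [(9, False), (10, True), (11, False)]:
    assert is_valid_mobile("9" * length) == expected
```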
Cause Effect graphing
Equivalence classes and boundary value analysis consider each input separately
To handle multiple inputs, different combinations of equivalence classes of inputs can be tried
The number of combinations can be large: with n different input conditions, each of which can be valid or invalid, the total is 2^n
Cause-effect graphing helps in selecting combinations of input conditions
CE-graphing
Identify causes and effects in the system
Cause: a distinct input condition which can be true or false
Effect: a distinct output condition (T/F)
Identify which causes can produce which effects; causes can be combined
Causes/effects are nodes in the graph, and arcs are drawn to capture dependency; AND/OR combinations are allowed
CE-graphing
From the CE graph, we can make a decision table
It lists the combinations of conditions that set different effects
Together they check for the various effects
The decision table can be used for forming the test cases
Notations used in the Cause-Effect Graph
AND: for effect E1 to be true, both causes C1 AND C2 should be true
Mutually Exclusive: only one of the causes can hold true at a time
CE graphing: Example
Draw a cause-effect graph for the following situation.
Situation: "Print message" is software that reads two characters and, depending on their values, prints messages.
The first character must be an "A" or a "B".
The second character must be a digit.
If the first character is an "A" or "B" and the second character is a digit, then the file must be updated.
If the first character is incorrect (not an "A" or "B"), message X must be printed.
If the second character is incorrect (not a digit), message Y must be printed.
CE graphing: Example
Solution: the causes in this situation are:
C1 – the first character is "A"
C2 – the first character is "B"
C3 – the second character is a digit
The effects are:
E1 – update the file
E2 – print message X
E3 – print message Y
CE graphing: Example
Key: always go from effect to cause. That is, to get effect "E", determine which causes should be true.
In this example, let's start with effect E1.
Effect E1 is updating the file. The file is updated when:
– the first character is "A" and the second character is a digit
– the first character is "B" and the second character is a digit
– the first character can be either "A" or "B", but not both
Now let's put these three points in symbolic form. For E1 to be true, the causes are:
– C1 and C3 should be true, or
– C2 and C3 should be true
– C1 and C2 cannot be true together; that is, C1 and C2 are mutually exclusive
CE graphing: Example
Now let's draw this:
[Figure: CE graph for effect E1]
There is a third condition, where C1 and C2 are mutually exclusive. The final graph for effect E1 is shown below:
[Figure: final CE graph for effect E1]
CE graphing: Example
Let's move to effect E2. E2 states: print message "X". Message X will be printed when the first character is neither A nor B.
This means effect E2 will hold true when both C1 and C2 are false. The graph for effect E2 is shown in the figure (blue line).
CE graphing: Example
For effect E3: E3 states print message "Y". Message Y will be printed when the second character is incorrect.
This means effect E3 will hold true when C3 is false. The graph for effect E3 is shown in the figure (green line).
Writing a Decision Table Based on the Cause-Effect Graph
First, write down the causes and effects in a single column.
The key is the same: go from bottom to top, i.e., traverse from effect to cause.
Start with effect E1, representing true as 1 and false as 0: put effect E1 as true (1) in the next column.
For E1 to be "1" (true), there are two conditions:
– C1 AND C3 are true
– C2 AND C3 are true
For E2 to be true, both C1 and C2 should be false.
For E3 to be true, C3 should be false.
[Figure: the resulting decision table]
Writing Test Cases from the Decision Table
Each column of the decision table becomes a test case. Below is a sample for Test Case 1 (TC1) and Test Case 2 (TC2); a sketch of the full set follows.
[Figure: sample test cases TC1 and TC2]
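To see the example end-to-end, the sketch below implements the "Print message" spec (the function name and message strings are assumptions for illustration) and runs the test cases read off the decision table columns.

```python
# Illustrative implementation of the "Print message" spec.
def print_message(first: str, second: str) -> list[str]:
    effects = []
    if first in ("A", "B") and second.isdigit():
        effects.append("update file")        # E1: (C1 or C2) and C3
    if first not in ("A", "B"):
        effects.append("message X")          # E2: not C1 and not C2
    if not second.isdigit():
        effects.append("message Y")          # E3: not C3
    return effects

# Test cases derived from the decision table columns.
assert print_message("A", "5") == ["update file"]        # TC1: C1, C3 true
assert print_message("B", "0") == ["update file"]        # TC2: C2, C3 true
assert print_message("C", "7") == ["message X"]          # C1, C2 false
assert print_message("A", "x") == ["message Y"]          # C3 false
assert print_message("#", "?") == ["message X", "message Y"]
```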
Pair-wise testing
Often many parameters determine the behavior of a software system
The parameters may be inputs or settings, and take different values (or value ranges)
Many defects involve one condition (single-mode faults), e.g., software not being able to print on some type of printer
Single-mode faults can be detected by testing different values of the different parameters
If there are n parameters and each can take m values, we can test a different value of each parameter in each test case; m test cases then cover all single values
Pair-wise testing…
Not all faults are single-mode; software may fail only at some combinations
E.g., telephone billing software does not compute the correct bill for night-time calls (one parameter) to a particular country (another parameter)
E.g., a ticketing system fails to book a business class ticket (one parameter) for a child (another parameter)
Multi-mode faults can be revealed by testing different combinations of parameter values
This is called combinatorial testing
Pair-wise testing…
Full combinatorial testing is not feasible
For n parameters each with m values, the total number of combinations is m^n
For 5 parameters with 5 values each (total: 3125 combinations), if one test takes 5 minutes, the total time is 15625 minutes, i.e., about 260 hours; at 12 hours of testing per day, about 22 days!
Research suggests that most such faults are revealed by the interaction of a pair of values
I.e., most faults tend to be double-mode
For double-mode faults we need to exercise each pair of values: this is called pair-wise testing
Pair-wise testing…
In pair-wise testing, all pairs of values have to be exercised during testing
If there are n parameters with m values each, between any 2 parameters we have m*m pairs
The 1st parameter has m*m pairs with each of the n-1 others
The 2nd parameter has m*m pairs with each of the n-2 remaining
The 3rd parameter has m*m pairs with each of the n-3 remaining, etc.
Total number of pairs: m*m*n*(n-1)/2
Pair-wise testing…
A test case consists of some setting of the n parameters
The smallest set of test cases results when each pair is covered only once
A test case can cover a maximum of (n-1)+(n-2)+…+1 = n(n-1)/2 pairs
In the best case, when each pair is covered exactly once, m^2 different test cases provide full pair-wise coverage
Pair-wise testing…
Generating the smallest set of test cases that provides pair-wise coverage is non-trivial
Efficient algorithms exist; efficiently generating these test cases can reduce the testing effort considerably
In an example with 13 parameters, each with 3 values, pair-wise coverage can be achieved with 15 test cases
Pair-wise testing is a practical approach that is widely used in industry
Pair-wise testing, Example
A software product runs on multiple platforms, uses a browser as its interface, and is to work with different OSs
We have these parameters and values:
OS (parameter A): Windows, Solaris, Linux
Memory size (B): 128M, 256M, 512M
Browser (C): IE, Netscape, Mozilla
Total number of pair-wise combinations: 27
The number of test cases can be fewer
Pair-wise testing…
[Table: test cases and the pairs covered by each]
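A sketch of how one might check pair-wise coverage for this example. The nine-test suite below is arranged Latin-square style so that every value pair across every pair of parameters appears at least once; the parameter dictionary and the suite itself are illustrative, and the code verifies the coverage claim.

```python
from itertools import combinations, product

# Parameters and values from the example.
params = {
    "OS":      ["Windows", "Solaris", "Linux"],
    "Memory":  ["128M", "256M", "512M"],
    "Browser": ["IE", "Netscape", "Mozilla"],
}

# Candidate pair-wise suite: 9 cases instead of 3^3 = 27 combinations.
suite = [
    ("Windows", "128M", "IE"),      ("Windows", "256M", "Netscape"),
    ("Windows", "512M", "Mozilla"), ("Solaris", "128M", "Netscape"),
    ("Solaris", "256M", "Mozilla"), ("Solaris", "512M", "IE"),
    ("Linux",   "128M", "Mozilla"), ("Linux",   "256M", "IE"),
    ("Linux",   "512M", "Netscape"),
]

# All value pairs that must be covered: 3 parameter pairs x 9 = 27.
values = list(params.values())
needed = {(i, a, j, b)
          for i, j in combinations(range(len(values)), 2)
          for a, b in product(values[i], values[j])}

covered = {(i, case[i], j, case[j])
           for case in suite
           for i, j in combinations(range(len(case)), 2)}

assert needed <= covered    # all 27 pairs covered by only 9 tests
print(f"{len(needed)} pairs covered by {len(suite)} test cases")
```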
Special cases
Programs often fail on special or invalid inputs; test cases for such situations are usually derived from experience (this is also called error guessing).
State-based Testing…
A system can be modeled as a state machine
The full state space may be too large (it is the cross product of the domains of all variables)
The state space can be partitioned into a few states, each representing a logical state of interest of the system
The state model is generally built from such states
State-based Testing…
A state model has four components:
States: logical states representing the cumulative impact of past inputs to the system
Transitions: how the state changes in response to events
Events: inputs to the system
Actions: the outputs for the events
State-based Testing…
The state model shows what transitions occur and what actions are performed
Often the state model is built from the specifications or requirements
The key challenge is to identify states from the specs/requirements that capture the key properties, yet keep the model small enough
State-based Testing, example…
Consider an ATM system function where, if the user enters an invalid password three times, the account is locked.
If the user enters a valid password in any of the first three attempts, the user is logged in successfully. If the user enters an invalid password on the first or second try, the user is asked to re-enter the password. Finally, if the user enters an incorrect password the 3rd time, the account is blocked.
State-based Testing, example…
[Figure: state diagram for the ATM login]
State Transition Table
State | Correct password | Incorrect password
S1 (Start, 1st attempt) | Logged in | S2
S2 (2nd attempt) | Logged in | S3
S3 (3rd attempt) | Logged in | S4 (Account blocked)
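A minimal sketch of the ATM login state machine, plus a state-based test that drives it through a transition sequence; the class and state names are illustrative assumptions.

```python
class AtmLogin:
    """States: TRY_1, TRY_2, TRY_3 (awaiting the nth attempt),
    plus the final states LOGGED_IN and BLOCKED."""

    def __init__(self, correct_password: str):
        self.correct_password = correct_password
        self.state = "TRY_1"

    def enter_password(self, password: str) -> str:
        if self.state in ("LOGGED_IN", "BLOCKED"):
            return self.state                    # final states: no way out
        if password == self.correct_password:
            self.state = "LOGGED_IN"             # valid password on any try
        elif self.state == "TRY_3":
            self.state = "BLOCKED"               # 3rd wrong attempt: block
        else:
            self.state = "TRY_2" if self.state == "TRY_1" else "TRY_3"
        return self.state

# State-based test: exercise the transitions of the model.
atm = AtmLogin("secret")
assert atm.enter_password("bad") == "TRY_2"       # 1st failure: re-enter
assert atm.enter_password("bad") == "TRY_3"       # 2nd failure: re-enter
assert atm.enter_password("bad") == "BLOCKED"     # 3rd failure: blocked
assert atm.enter_password("secret") == "BLOCKED"  # blocked stays blocked
```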
White box testing
Test cases are selected based on the internal structure of the program; control-flow based coverage criteria, discussed next, are commonly used.
Statement Coverage Criterion
Criterion: each statement is executed at least once during testing
I.e., the set of paths executed during testing should include all nodes
Limitation: does not require a decision to evaluate to false if there is no else clause
E.g., abs(x): if (x >= 0) x = -x; return (x)
The set of test cases {x = 0} achieves 100% statement coverage, but the error is not detected
Guaranteeing 100% coverage is not always possible, due to the possibility of unreachable nodes
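The limitation is easy to reproduce. Below, the slide's buggy abs is written out in Python: the single test x = 0 executes every statement yet hides the defect, because -0 == 0.

```python
# The buggy abs() from the slide: the condition should be x < 0.
def buggy_abs(x):
    if x >= 0:
        x = -x
    return x

# {x = 0} gives 100% statement coverage (all three statements run)
# and the check passes, so the defect goes unnoticed.
assert buggy_abs(0) == 0

# A test on the other side of the decision exposes the bug:
assert buggy_abs(2) == -2   # actual (wrong) behavior; expected value is 2
```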
Branch coverage
Criterion: each edge should be traversed at least once during testing
I.e., each decision must evaluate to both true and false during testing
Branch coverage implies statement coverage
If there are multiple conditions in a decision, not all conditions need be evaluated to both T and F
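A small sketch of the last point: two tests achieve full branch coverage of a compound decision, yet one of its atomic conditions is never evaluated to false (the function is illustrative).

```python
def both_positive(a: int, b: int) -> bool:
    # One decision with two conditions; short-circuit evaluation applies.
    if a > 0 and b > 0:
        return True
    return False

# Full branch coverage: the decision is True once and False once.
assert both_positive(1, 1) is True     # decision True
assert both_positive(-1, 1) is False   # decision False (a > 0 fails first)
# Yet b > 0 was never evaluated to False: the case (1, -1) is untested.
```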
Control flow based…
There are other criteria too: path coverage, predicate coverage, cyclomatic-complexity based, ...
None is sufficient to detect all types of defects (e.g., a program missing some paths cannot be detected)
They provide some quantitative handle on the breadth of testing
They are used more to evaluate the level of testing than to select test cases
Data flow-based testing
A def-use graph is constructed from the control flow graph
A statement in the control flow graph (in which each statement is a node) can be of these types:
Def: represents the definition of a variable (i.e., the variable is on the lhs)
C-use: a computational use of a variable
P-use: a variable used in a predicate for control transfer
Example:
1. read x, y;
2. if (x > y)
3.    a = x + 1
   else
4.    a = y - 1
5. print a;
[Figure: def-use graph for the example]
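The same example as runnable code, with the def, c-use, and p-use sets written as comments on each node (a sketch; the node numbers follow the statements above):

```python
x, y = map(int, input().split())  # node 1: def(1) = {x, y}
if x > y:                         # node 2: p-use(2,3) = p-use(2,4) = {x, y}
    a = x + 1                     # node 3: def(3) = {a}, c-use(3) = {x}
else:
    a = y - 1                     # node 4: def(4) = {a}, c-use(4) = {y}
print(a)                          # node 5: c-use(5) = {a}
```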
Data flow based…
The def-use graph is constructed by associating variables with the nodes and edges of the control flow graph
For a node i, def(i) is the set of variables for which there is a global def in i
For a node i, c-use(i) is the set of variables for which there is a global c-use in i
For an edge (i, j), p-use(i, j) is the set of variables for which there is a p-use on the edge (i, j)
A path from i to j is def-clear with respect to x if there is no def of x in the nodes along the path
Data flow based criteria
all-defs: for every node I, and every x in
def(i) there is a def-clear path
For def of every var, one of its uses (p-use
or c-use) must be tested
all-p-uses: all p-uses of all the
definitions should be tested
All p-uses of all the defs must be tested
Some-c-uses, all-c-uses, some-p-uses
are some other criteria
Testing 86
Relationship between different criteria
[Figure: relationship between the different data-flow criteria]
Tool support and test case selection
Testing Process
Testing
Testing only reveals the presence of defects
It does not identify the nature and location of the defects
Identifying and removing the defect is the role of debugging and rework
Preparing test cases, performing testing, and defect identification and removal all consume effort
Overall, testing becomes very expensive
Incremental Testing
Goals of testing: detect as many defects as possible, and keep the cost low
The two frequently conflict: increased testing can catch more defects, but the cost also goes up
Incremental testing: add untested parts incrementally to the tested portion
Incremental testing is essential for achieving both goals
It helps catch more defects
It helps in identification and removal
Testing of large systems is always incremental
Integration and Testing
Incremental testing requires incremental 'building', i.e., incrementally integrating parts to form the system
Integration and testing are related
During coding, different modules are coded separately
Integration determines the order in which they should be tested and combined
Integration is driven mostly by testing needs
Top-down and Bottom-up
System: a hierarchy of modules
Modules are coded separately
Integration can start from the bottom or the top
Bottom-up requires test drivers
Top-down requires stubs
Both may be used, e.g., top-down for user interfaces, bottom-up for services
Drivers and stubs are code pieces written only for testing (see the sketch below)
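A minimal sketch of a test driver and a stub (the module and function names are illustrative): bottom-up integration uses a driver to call a lower-level module before its callers exist, and top-down integration uses a stub to stand in for a lower-level module not yet written.

```python
# Low-level module, integrated bottom-up.
def tax_for(amount: float) -> float:
    return round(amount * 0.18, 2)

# DRIVER: throwaway code that exercises tax_for before its callers exist.
def driver_for_tax():
    assert tax_for(100.0) == 18.0

# STUB: canned stand-in for a price-lookup module not yet written.
def fetch_price_stub(item_id: str) -> float:
    return 42.0

# High-level module, tested top-down against the stub.
def total_price(item_id: str, fetch=fetch_price_stub) -> float:
    price = fetch(item_id)
    return price + tax_for(price)

driver_for_tax()
assert total_price("sku-1") == 42.0 + 7.56
```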
Levels of Testing
The code contains requirement defects, design defects, and coding defects
The nature of the defects differs by injection stage
One type of testing is unable to detect all the different types of defects
Hence different levels of testing are used
[Figure: levels of testing; user needs map to acceptance testing, requirements to system testing, design to integration testing, and code to unit testing]
Integration Testing
Focuses on the interaction of modules in a subsystem
Unit-tested modules are combined to form subsystems
Test cases "exercise" the interaction of modules in different ways
May be skipped if the system is not too large
System Testing
The entire software system is tested
Focus: does the software implement the requirements?
A validation exercise for the system with respect to the requirements
Generally the final testing stage before the software is delivered
May be done by independent people
Defects are removed by the developers
The most time-consuming test phase
Acceptance Testing
Focus: does the software satisfy user needs?
Generally done by end users/customer in the customer environment, with real data
Only after successful acceptance testing is the software deployed
Any defects found are removed by the developers
The acceptance test plan is based on the acceptance test criteria in the SRS
Other forms of testing
Performance testing
Performance testing gauges how well a program operates under typical operating circumstances. Its objectives are to find any performance-related problems and to confirm that the application can withstand the anticipated usage levels.
Performance parameters like response time, throughput, and resource utilisation can all be measured using performance testing.
Performance testing aims to assess how well a web application works under various loads and how quickly it responds to user requests.
Other forms of testing
Methods of Performance testing:
Load testing: the method of simulating actual user load on an application or website. It examines the behaviour of the application under both light and heavy loads.
Endurance testing: a kind of non-functional testing carried out to see whether the software system can withstand a heavy load expected to last for a long duration.
Volume testing: a type of software testing where the software is subjected to a huge volume of data.
Other forms of testing
Methods of Performance testing (continued):
Scalability testing: a non-functional testing technique that assesses how well a system or network performs when the volume of user queries is scaled up or down.
Spike testing: a type of performance testing in which an application receives a sudden and extreme increase or decrease in load.
Stress testing: a type of intentionally rigorous or intense testing. It entails testing past the point at which a system would normally break, in order to observe the outcome.
Other forms of testing
Popular tools for performance testing: Apache JMeter, LoadRunner, Gatling, BlazeMeter
Regression testing
Tests that previously working functionality still works correctly
Important when changes are made
Previous test records are needed for comparison
Prioritization of test cases is needed when the complete test suite cannot be executed for a change
Test Plan
Testing usually starts with a test plan and ends with acceptance testing
The test plan is a general document that defines the scope and approach for testing for the whole project
Inputs are the SRS, the project plan, and the design
The test plan identifies what levels of testing will be done, what units will be tested, etc. in the project
Test Plan…
The test plan usually contains:
Test unit specs: what units need to be tested separately
Features to be tested: these may include functionality, performance, usability, …
Approach: criteria to be used, when to stop, how to evaluate, etc.
Test deliverables
Schedule and task allocation
Test case specifications
The test plan focuses on approach; it does not deal with the details of testing a unit
Test case specification has to be done separately for each unit
Based on the plan (approach, features, …), test cases are determined for a unit
The expected outcome also needs to be specified for each test case
Test case specifications…
Together, the set of test cases should detect most of the defects
We would like the set of test cases to detect any defect, if it exists
We would also like the set of test cases to be small, as each test case consumes effort
Determining a reasonable set of test cases is the most challenging task of testing
Test case specifications…
The effectiveness and cost of testing depend on the set of test cases
Q: how to determine whether a set of test cases is good? I.e., the set will detect most of the defects, and a smaller set could not catch them
There is no easy way to determine goodness; usually the set of test cases is reviewed by experts
This requires that test cases be specified before testing: a key reason for having test case specs
Test case specs are essentially a table
Test case specifications…
[Table: test case specification, with columns such as seq. no., condition to be tested, test data, and expected result]
Test case specifications…
So for each testing, test case specs are developed, reviewed, and executed
Preparing test case specifications is challenging and time consuming
Test case criteria can be used
Special cases and scenarios may be used
Once specified, the execution and checking of outputs may be automated through scripts
This is desirable if repeated testing is needed, and it is regularly done in large projects
Test case execution and analysis
Executing test cases may require drivers or stubs to be written; some tests can be automated, others are manual
A separate test procedure document may be prepared
A test summary report is often an output; it gives a summary of the test cases executed, effort, defects found, etc.
Monitoring of the testing effort is important, to ensure that sufficient time is spent
Computer time is also an indicator of how testing is proceeding
Defect logging and tracking
A large software system may have thousands of defects, found by many different people
Often the person who fixes a defect (usually the coder) is different from the person who finds it
Due to the large scope, reporting and fixing of defects cannot be done informally
Defects found are usually logged in a defect tracking system and then tracked to closure
Defect logging and tracking is one of the best practices in industry
Defect logging…
A defect in a software project has a life cycle of its own, for example:
Found by someone, sometime, and logged along with information about it (submitted)
The job of fixing is assigned; a person debugs and then fixes it (fixed)
The manager or the submitter verifies that the defect is indeed fixed (closed)
More elaborate life cycles are possible
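A minimal sketch of such a life cycle as code; the states come from the slide, while the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Legal transitions of the simple life cycle described above.
LIFECYCLE = {"SUBMITTED": {"FIXED"}, "FIXED": {"CLOSED"}, "CLOSED": set()}

@dataclass
class Defect:
    summary: str
    severity: str                   # e.g. Critical / Major / Minor / Cosmetic
    state: str = "SUBMITTED"
    history: list = field(default_factory=list)

    def move_to(self, new_state: str) -> None:
        if new_state not in LIFECYCLE[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state

d = Defect("Crash on empty input", severity="Major")
d.move_to("FIXED")      # developer debugs and fixes
d.move_to("CLOSED")     # manager or submitter verifies the fix
```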
Defect logging…
[Figure: defect life cycle]
Defect logging…
During the life cycle, information about the defect is logged at different stages, to help debugging as well as analysis
Defects are generally categorized into a few types, and the type of each defect is recorded
ODC (Orthogonal Defect Classification) is one such classification
Some standard categories: logic, standards, UI, interface, performance, …
ODC – Orthogonal Defect Classification
[Figure: ODC defect types]
Defect logging…
The severity of a defect, in terms of its impact on the software, is also recorded
Severity is useful for prioritizing fixes
One categorization:
Critical: show stopper
Major: has a large impact
Minor: an isolated defect
Cosmetic: no impact on functionality
Defect logging and tracking…
Ideally, all defects should be closed
Sometimes organizations release software with known defects (hopefully of lower severity only)
Organizations have standards for when a product may be released
The defect log may be used to track the trend of defect arrival and fixing
Defect arrival and closure trend
[Figure: defect arrival and closure trend over time]
Defect analysis for prevention
Quality control focuses on removing defects
The goal of defect prevention is to reduce the defect injection rate in the future
Defect prevention is done by analyzing the defect log, identifying causes, and then removing them
It is an advanced practice, done only in mature organizations
It finally results in actions to be undertaken by individuals to reduce defects in the future
Metrics - Defect removal efficiency
Definition: the defect removal efficiency (DRE) gives a measure of the development team's ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found, and it is typically measured prior to and at the moment of release.
The basic objective of testing is to identify the defects present in the programs
Testing is good only if it succeeds in this goal
DRE = number of defects resolved by the development team / total number of defects at the moment of measurement
A high DRE for a quality control activity means most of the defects present at that time will be removed
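A quick worked example with illustrative numbers: suppose the team resolves 90 defects before release, and 10 more defects are found in the post-release measurement window. Then
DRE = 90 / (90 + 10) = 0.9
i.e., the pre-release quality control activities removed 90% of the defects present at measurement time.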
Defect removal efficiency…
DRE for a project can be evaluated only when all defects are known, including delivered defects
Delivered defects are approximated as the number of defects found during some duration after delivery
The injection stage of a defect is the stage in which it was introduced into the software; the detection stage is when it was detected
These stages are typically logged for defects
With the injection and detection stages of all defects known, the DRE for a QC activity can be computed
Defect Removal Efficiency…
The DREs of different QC activities are a process property, determined from past data
Past DRE can be used as the expected value for the current project
The process followed by the project must be improved for a better DRE
Metrics – Reliability Estimation
High reliability is an important goal to be achieved by testing
Reliability is usually quantified as a probability or a failure rate
For a system, it can be measured by counting failures over a period of time
Direct measurement is often not possible for software: reliability changes as defects are fixed, and it is hard to measure by observation alone
Reliability Estimation…
Software reliability estimation models are used to model the failure-followed-by-fix behavior of software
Data about failures and their times during the last stages of testing is used by these models
The models then apply statistical techniques to this data to predict the reliability of the software
Probability of failure = number of failing cases / total number of cases under consideration
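A worked example with illustrative numbers: if 4 of the 1000 test cases executed in the final test runs fail, the estimated probability of failure is
4 / 1000 = 0.004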
Summary
Testing plays a critical role in removing defects and in generating confidence
Testing should be such that it catches most of the defects present, i.e., achieves a high DRE
Multiple levels of testing are needed for this
Incremental testing also helps
Summary…
Deciding test cases during planning is the most important aspect of testing
Two approaches: black box and white box
Black box testing: test cases derived from the specifications
Equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing
White box: the aim is to cover code structures
Statement coverage, branch coverage
Summary…
In a project, both are used at lower levels
Test cases are initially driven by functional criteria
Coverage is measured, and test cases are enhanced using the coverage data
At higher levels, mostly functional testing is done; coverage is monitored to evaluate the quality of testing
Defect data is logged, and defects are tracked to closure
The defect data can be used to estimate reliability and DRE