
Software Testing

Background
• Main objectives of a project: high quality & high productivity (Q&P)
• Quality has many dimensions: reliability, maintainability, interoperability, etc.
• Reliability is perhaps the most important
• Reliability: the probability that the software does not fail
• More defects => more chances of failure => lower reliability
• Hence the quality goal: have as few defects as possible in the delivered software
Faults & Failure
• Failure: a software failure occurs when the behavior of the software differs from the expected/specified behavior
• Fault: the cause of a software failure
• Fault = bug = defect
• Failure implies the presence of defects
• A defect has the potential to cause failure
• The definition of a defect is environment- and project-specific
What is this?
A failure? An error? A fault?
We need to specify the desired behavior first!
Erroneous state ("error") vs. algorithmic fault

How do we deal with errors and faults?
Verification? Modular redundancy? Patching? Declaring the bug as a feature?
Who does testing?
• Software Tester
• Software Developer
• Project Lead/Manager
• End User

According to the ANSI/IEEE 1059 standard, testing can be defined as "A process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item."
When to Start Testing
• An early start to testing reduces cost and rework time, and helps deliver error-free software to the client.
• In the Software Development Life Cycle (SDLC), testing can start as early as the Requirements Gathering phase and continue until the deployment of the software.
• The exact starting point also depends on the development model being used.
When to Stop Testing
• Testing is a never-ending process, and no one can say that any software is 100% tested.
• The following aspects should be considered when deciding to stop testing:
– Testing deadlines
– Completion of test case execution
– Completion of functional and code coverage to a certain point
– The bug rate falls below a certain level and no high-priority bugs are identified
– Management decision
Verification Vs Validation

Difference between Audit and Inspection
Audit: As per IEEE, an audit is a review of documented processes to check whether the organization implements and follows those processes. Types of audit include the legal compliance audit, the internal audit, and the system audit.

Inspection: A formal technique that involves formal or informal technical reviews of any artifact to identify errors or gaps. Inspection includes both formal and informal technical reviews.

As per IEEE94, inspection is a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.

A formal inspection meeting typically follows this process: Planning, Overview, Preparation, Inspection Meeting, Rework, and Follow-up.
Difference between Testing and Debugging
• Testing: Involves identifying bugs/errors/defects in the software without correcting them. Professionals with a quality assurance background are normally involved in the identification of bugs. Testing is performed in the testing phase.

• Debugging: Involves identifying, isolating, and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is part of white box or unit testing. Debugging can be performed in the development phase while conducting unit testing, or in later phases while fixing reported bugs.
Testing takes creativity
• To develop an effective test, one must have:
– a detailed understanding of the system
– knowledge of the testing techniques
– skill to apply these techniques in an effective and efficient manner
• Testing is done best by independent testers
– We often develop a certain mental attitude that the program should work in a certain way, when in fact it does not.
• Programmers often stick to the data set that makes the program work
– "Don't mess up my code!"
• A program often does not work when tried by somebody else.
– Don't let this be the end-user.
Role of Testing
• Reviews are human processes and cannot catch all defects
• Hence there will be requirement defects, design defects, and coding defects in the code
• These defects have to be identified by testing
• Therefore testing plays a critical role in ensuring quality
• All defects remaining from before, as well as new ones introduced, have to be detected by testing
Detecting defects in Testing
• During testing, a program is executed with a set of test cases
• Failure during testing => defects are present
• No failure => confidence grows, but we cannot say "defects are absent"
• Defects are detected through failures
• To detect defects, we must cause failures during testing
Testing Types
• Manual Testing: This type includes testing the software manually, i.e. without using any automated tool or script.
• The tester takes on the role of an end user and tests the software to identify any unexpected behavior or bug.
• Testers use test plans, test cases, or test scenarios to test the software and to ensure the completeness of testing.
• Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
Testing Types…
• Automation Testing: Automation testing, also known as "test automation", is when the tester writes scripts and uses other software to test the software, automating a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were performed manually.
• Apart from regression testing, automation testing is also used to test the application from the load, performance, and stress points of view. It increases test coverage and improves the efficiency of testing.
Automation Testing…
• What to automate: It is not possible to automate everything in the software. Areas where users make transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.
• Furthermore, GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.
Automation Testing…
• When to Automate: Test automation should be used after considering the following about the software:
– Large and critical projects
– Projects that require testing the same areas frequently
– Requirements that do not change frequently
– Accessing the application for load and performance with many virtual users
– Software that is stable with respect to manual testing
– Availability of time
Automation Testing…
How to Automate:
• Identify areas within the software for automation
• Select an appropriate tool for test automation
• Write test scripts
• Develop test suites
• Execute the scripts
• Create result reports
• Identify any potential bugs or performance issues
• Automation Tools:
– HP Quick Test Professional
– Selenium
– IBM Rational Functional Tester
– Testing Anywhere
– WinRunner
– LoadRunner
– Visual Studio Test Professional
Test Oracle
• To check whether a failure has occurred when the software is executed with a test case, we need to know the correct behavior
• I.e. we need a test oracle, which is often a human
• A human oracle makes each test case expensive, as someone has to check the correctness of its output
Role of Test cases
• Ideally we would like the following for test cases:
– no failure implies "no defects" or "high quality"
– if defects are present, then some test case causes a failure
• The psychology of testing is important
– the aim should be to 'reveal' defects (not to show that the software works!)
– test cases must be "destructive"
• The role of test cases is clearly very critical
• Only if the test cases are "good" does successful testing give confidence
Test case design
• During test planning, we have to design a set of test cases that will detect the defects present
• Some criteria are needed to guide test case selection
• Two approaches to designing test cases:
– functional or black box
– structural or white box
• Both are complementary; we discuss a few approaches/criteria for both
Black Box testing
• The software to be tested is treated as a black box
• The specification for the black box is given
• The expected behavior of the system is used to design test cases
• I.e. test cases are determined solely from the specification
• The internal structure of the code is not used for test case design
Black box Testing…
• Premise: expected behavior is specified
• Hence just test for the specified expected behavior
• How it is implemented is not an issue
• For modules, the specification produced in design specifies the expected behavior
• For system testing, the SRS specifies the expected behavior
Black Box Testing…
• The most thorough functional testing: exhaustive testing
• Software is designed to work for an input space
• Test the software with all elements in the input space
• Infeasible: too high a cost
• Need better methods for selecting test cases
• Different approaches have been proposed
Equivalence Class partitioning
• Divide the input space into equivalence classes
• If the software works for a test case from a class, then it is likely to work for all elements of that class
• Can reduce the set of test cases if such equivalence classes can be identified
• Getting ideal equivalence classes is impossible
• Approximate them by identifying classes for which different behavior is specified
Equivalence class partitioning…
• Rationale: the specification requires the same behavior for elements in a class
• Software is likely to be constructed such that it either fails for all elements of a class or for none
• E.g. if a function was not designed for negative numbers, then it will fail for all negative numbers
• For robustness, equivalence classes should also be formed for invalid inputs
Equivalent class partitioning..
• Every condition specified as an input is an equivalence class
• Define invalid equivalence classes also
• E.g. if the range 0 < value < Max is specified:
– the range is the valid class
– input <= 0 is an invalid class
– input >= Max is an invalid class
• Whenever the entire range may not be treated uniformly, split it into multiple equivalence classes
Equivalent class partitioning..
• Consider equivalence classes in the outputs also, and then give test cases for the different classes
• E.g.: compute the rate of interest given the loan amount, monthly installment, and number of months
• Equivalence classes in the output: positive rate, rate = 0, negative rate
• Have test cases that produce each of these outputs
Equivalence class…
• Once equivalence classes are selected for each of the inputs, test cases have to be selected
• Select each test case to cover as many valid equivalence classes as possible
• Or, have each test case cover at most one valid class for each input
• Plus a separate test case for each invalid class
Example
• A function of the software application accepts a 10-digit mobile number.
• Valid and invalid partitions: exactly 10 digits is the valid class; fewer than 10 digits, more than 10 digits, and inputs containing non-digit characters are invalid classes.
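A hedged sketch of how these partitions might be exercised, one representative value per class (the function name validate_mobile_number and the exact test data are illustrative assumptions, not from the slides):

    # Equivalence class partitioning for the 10-digit mobile number.
    # One representative test case per class suffices under the
    # equivalence-class assumption.
    def validate_mobile_number(number: str) -> bool:
        # Illustrative implementation: valid iff exactly 10 digits.
        return len(number) == 10 and number.isdigit()

    test_cases = [
        ("9876543210",  True),   # valid class: exactly 10 digits
        ("98765",       False),  # invalid class: fewer than 10 digits
        ("98765432109", False),  # invalid class: more than 10 digits
        ("98765abcde",  False),  # invalid class: non-digit characters
    ]
    for data, expected in test_cases:
        assert validate_mobile_number(data) == expected, data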
Boundary value analysis
• Programs often fail on special values
• These values often lie on the boundaries of equivalence classes
• Test cases that use boundary values have a high yield
• These are also called extreme cases
• A BV test case is a set of input data that lies on the edge of an equivalence class of inputs/outputs
BVA...
• For each equivalence class:
– choose values on the edges of the class
– choose values just outside the edges
• E.g. if 0 <= x <= 1.0:
– 0.0 and 1.0 are edges inside
– -0.1 and 1.1 are just outside
• E.g. for a bounded list: have a null list and a maximum-length list
• Consider outputs also, and have test cases that generate outputs on the boundary
BVA…
• Extreme ends like start-end, lower-upper, maximum-minimum, and just inside-just outside values are called boundary values, and the testing is called "boundary testing".
• If an input is a defined range, then there are 6 boundary values plus 1 normal value (total: 7)
• If there are multiple inputs, how do we combine them into test cases? Two strategies are possible:
– try all possible combinations of boundary values of the different variables; with n variables this gives 7^n test cases!
– select boundary values for one variable, keep the other variables at normal values, and add one test case of all normal values (see the sketch after the next slide)
BVA.. (test cases for two vars – x and y)
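A minimal sketch of the second strategy (one variable at a boundary, the rest at normal values, plus one all-normal case). The ranges 0 <= x <= 10 and 0 <= y <= 5 and the normal values are illustrative assumptions:

    # Boundary value test cases for two variables.
    def boundary_values(lo, hi):
        # just outside, on, and just inside each edge: 6 values
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    ranges = {"x": (0, 10), "y": (0, 5)}
    normal = {"x": 5, "y": 2}            # one typical in-range value each

    test_cases = [dict(normal)]          # the all-normal test case
    for var, (lo, hi) in ranges.items():
        for bv in boundary_values(lo, hi):
            tc = dict(normal)
            tc[var] = bv
            test_cases.append(tc)

    print(len(test_cases), "test cases")  # 1 + 6 + 6 = 13
    for tc in test_cases:
        print(tc)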
Cause Effect graphing
• Equivalence classes and boundary value analysis consider each input separately
• To handle multiple inputs, different combinations of equivalence classes of inputs can be tried
• The number of combinations can be large: with n different input conditions, each of which can be valid or invalid, the total is 2^n
• Cause-effect graphing helps in selecting combinations of input conditions
CE-graphing
• Identify causes and effects in the system
– Cause: a distinct input condition, which can be true or false
– Effect: a distinct output condition (T/F)
• Identify which causes can produce which effects; causes can be combined
• Causes/effects are nodes in the graph, and arcs are drawn to capture dependency; AND/OR are allowed
CE-graphing
• From the CE graph, we can make a decision table
• It lists the combinations of conditions that set different effects
• Together they check for various effects
• The decision table can be used for forming the test cases
Notations used in the Cause-Effect Graph
• AND – for effect E1 to be true, both causes C1 AND C2 should be true (E1 = C1 ∧ C2).
• OR – for effect E1 to be true, either of causes C1 OR C2 should be true (E1 = C1 ∨ C2).
• NOT – for effect E1 to be true, cause C1 should be false (E1 = ¬C1).
• Mutually exclusive – only one of the causes can hold true at a time.
CE graphing: Example
• Draw a cause and effect graph for the following situation.
• Situation: "Print message" is software that reads two characters and, depending on their values, prints messages.
– The first character must be an "A" or a "B".
– The second character must be a digit.
– If the first character is an "A" or "B" and the second character is a digit, the file must be updated.
– If the first character is incorrect (not an "A" or "B"), message X must be printed.
– If the second character is incorrect (not a digit), message Y must be printed.
CE graphing: Example
Solution:
The causes of this situation are:
C1 – First character is A
C2 – First character is B
C3 – Second character is a digit

The effects (results) for this situation are:
E1 – Update the file
E2 – Print message "X"
E3 – Print message "Y"
CE graphing: Example
First, draw the Causes and Effects as shown
below:

CE graphing: Example
Key – always go from effect to cause. That means: to get effect "E", determine which causes should be true.
In this example, let's start with effect E1.
Effect E1 is for updating the file. The file is updated when
– the first character is "A" and the second character is a digit
– the first character is "B" and the second character is a digit
– the first character can be either "A" or "B", but not both.
Now let's put these three points in symbolic form.
For E1 to be true, the following are the causes:
– C1 and C3 should be true
– C2 and C3 should be true
– C1 and C2 cannot be true together; this means C1 and C2 are mutually exclusive.
CE graphing: Example
Now let’s draw this:

CE graphing: Example
There is a third condition: C1 and C2 are mutually exclusive. So the final graph for effect E1 to be true is shown below:
CE graphing: Example
Let's move to effect E2. E2 states: print message "X". Message X will be printed when the first character is neither A nor B.
This means effect E2 will hold true when both C1 and C2 are false. The graph for effect E2 is shown below (blue line).
CE graphing: Example
For effect E3: E3 states print message "Y". Message Y will be printed when the second character is incorrect.
This means effect E3 will hold true when C3 is false. The graph for effect E3 is shown below (green line).
Writing Decision Table Based On Cause And Effect graph
• First, write down the causes and effects in a single column, as shown below
• The key is the same: go from bottom to top, i.e. traverse from effect to cause
• Start with effect E1; determine the conditions for E1 to be true
• Here we represent true as 1 and false as 0
• First, put effect E1 as true in the next column
Writing Decision Table Based On Cause And Effect graph
• Now for E1 to be "1" (true), we have the two conditions below:
– C1 AND C3 will be true
– C2 AND C3 will be true
• For E2 to be true, either C1 or C2 has to be false, shown as:
Writing Decision Table Based On Cause And Effect graph
• For E3 to be true, C3 should be false.
• Let's complete the table by adding 0 in the blank columns and including the test case identifiers.
Writing Test Cases From The Decision Table
• Below is a sample test case for Test Case 1 (TC1) and Test Case 2 (TC2).
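The decision table itself was a figure in the original deck. Below is a hedged sketch of how its columns can be executed as tests; print_message is an illustrative implementation of the rules stated earlier, not code from the slides:

    # "Print message" rules: first char must be A or B, second a digit.
    def print_message(first, second):
        outputs = []
        first_ok = first in ("A", "B")      # C1 or C2
        second_ok = second.isdigit()        # C3
        if first_ok and second_ok:
            outputs.append("file updated")  # E1
        if not first_ok:
            outputs.append("X")             # E2
        if not second_ok:
            outputs.append("Y")             # E3
        return outputs

    # TC1: C1=1, C3=1 -> E1        TC2: C2=1, C3=1 -> E1
    assert print_message("A", "5") == ["file updated"]
    assert print_message("B", "0") == ["file updated"]
    # First character invalid -> E2; second character invalid -> E3
    assert print_message("Z", "7") == ["X"]
    assert print_message("A", "q") == ["Y"]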
Pair-wise testing
• Often many parameters determine the behavior of a software system
• The parameters may be inputs or settings, and take different values (or different value ranges)
• Many defects involve one condition (single-mode faults), e.g. the software not being able to print on some type of printer
• Single-mode faults can be detected by testing different values of the different parameters
• If there are n parameters and each can take m values, we can test one different value for each parameter in each test case
• Total test cases needed for single-mode faults: m
Pair-wise testing…
• Not all faults are single-mode; software may fail only for some combinations
– E.g. telephone billing software that does not compute the correct bill for night-time calls (one parameter) to a particular country (another parameter)
– E.g. a ticketing system that fails to book a business class ticket (one parameter) for a child (another parameter)
• Multi-mode faults can be revealed by testing different combinations of parameter values
• This is called combinatorial testing
Pair-wise testing…
• Full combinatorial testing is not feasible
– For n parameters each with m values, the total number of combinations is m^n
– For 5 parameters with 5 values each (total: 3125 combinations), if one test takes 5 minutes, the total time is 15625 minutes (about 260 hours, i.e. over 10 days of round-the-clock testing)!
• Research suggests that most such faults are revealed by the interaction of a pair of values
– I.e. most faults tend to be double-mode
• For double-mode faults, we need to exercise each pair of values: this is called pair-wise testing
Pair-wise testing…
• In pair-wise testing, all pairs of values have to be exercised during testing
• If there are n parameters with m values each, then between any 2 parameters we have m*m pairs
– The 1st parameter forms m*m pairs with each of the n-1 others
– The 2nd parameter forms m*m pairs with each of the remaining n-2
– The 3rd parameter forms m*m pairs with each of the remaining n-3, etc.
• Total number of pairs: m*m*n*(n-1)/2
Pair-wise testing…
• A test case consists of some setting of the n parameters
• The smallest set of test cases is obtained when each pair is covered only once
• A test case can cover a maximum of (n-1)+(n-2)+…+1 = n(n-1)/2 pairs
• In the best case, when each pair is covered exactly once, m^2 different test cases provide full pair-wise coverage
Pair-wise testing…
• Generating the smallest set of test cases that provides pair-wise coverage is non-trivial
• Efficient algorithms exist; efficiently generating these test cases can reduce the testing effort considerably
– In an example with 13 parameters, each with 3 values, pair-wise coverage can be achieved with just 15 test cases
• Pair-wise testing is a practical approach that is widely used in industry
Pair-wise testing, Example
• A software product runs on multiple platforms, uses a browser as its interface, and must work with different OSs
• We have these parameters and values:
– OS (parameter A): Windows, Solaris, Linux
– Memory size (B): 128M, 256M, 512M
– Browser (C): IE, Netscape, Mozilla
• Total number of pair-wise combinations: 27
• The number of test cases can be smaller (see the table and the verification sketch below)
Pair-wise testing…

Test case     Pairs covered
a1, b1, c1    (a1,b1) (a1,c1) (b1,c1)
a1, b2, c2    (a1,b2) (a1,c2) (b2,c2)
a1, b3, c3    (a1,b3) (a1,c3) (b3,c3)
a2, b1, c2    (a2,b1) (a2,c2) (b1,c2)
a2, b2, c3    (a2,b2) (a2,c3) (b2,c3)
a2, b3, c1    (a2,b3) (a2,c1) (b3,c1)
a3, b1, c3    (a3,b1) (a3,c3) (b1,c3)
a3, b2, c1    (a3,b2) (a3,c1) (b2,c1)
a3, b3, c2    (a3,b3) (a3,c2) (b3,c2)
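The table's claim can be machine-checked: the 9 test cases cover all 27 pairs, each exactly once. A small verification sketch (not part of the original deck):

    from itertools import combinations, product

    test_cases = [
        ("a1", "b1", "c1"), ("a1", "b2", "c2"), ("a1", "b3", "c3"),
        ("a2", "b1", "c2"), ("a2", "b2", "c3"), ("a2", "b3", "c1"),
        ("a3", "b1", "c3"), ("a3", "b2", "c1"), ("a3", "b3", "c2"),
    ]

    covered = set()
    for tc in test_cases:
        covered.update(combinations(tc, 2))      # 3 pairs per test case

    required = set()
    values = [("a1", "a2", "a3"), ("b1", "b2", "b3"), ("c1", "c2", "c3")]
    for xs, ys in combinations(values, 2):
        required.update(product(xs, ys))         # 9 pairs per parameter pair

    assert covered == required                   # all 27 pairs covered
    print(len(covered), "pairs covered by", len(test_cases), "test cases")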
Special cases
• Programs often fail on special cases
• These depend on the nature of the inputs, the types of data structures, etc.
• There are no good rules to identify them
• One way is to guess where the software might fail and create those test cases
• This is also called error guessing
• Play the sadist & hit where it might hurt
Error Guessing
• Use experience and judgement to guess situations where a programmer might make mistakes
• Special cases can arise due to assumptions about inputs, the user, the operating environment, the business, etc.
• E.g. for a program that counts the frequency of words: empty file, non-existent file, file containing only blanks, file containing only one word, all words the same, multiple consecutive blank lines, multiple blanks between words, blanks at the start, words in sorted order, blanks at the end of the file, etc. (a sketch of these as tests follows)
• Perhaps the most widely used approach in practice
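These guesses translate directly into concrete tests. The sketch below assumes a hypothetical count_words(text) function returning a word-frequency dictionary; the function and its behavior on edge cases are illustrative assumptions:

    from collections import Counter

    def count_words(text):
        # Hypothetical word-frequency counter used as the test subject.
        return dict(Counter(text.split()))

    guessed_cases = [
        ("",         {}),                  # empty input
        ("   ",      {}),                  # only blanks
        ("word",     {"word": 1}),         # exactly one word
        ("a a a",    {"a": 3}),            # all words the same
        ("x\n\n\ny", {"x": 1, "y": 1}),    # consecutive blank lines
        ("x    y",   {"x": 1, "y": 1}),    # multiple blanks between words
        ("  lead",   {"lead": 1}),         # blanks at the start
        ("trail  ",  {"trail": 1}),        # blanks at the end
    ]
    for text, expected in guessed_cases:
        assert count_words(text) == expected, repr(text)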
State-based Testing
• Some systems are state-less: for the same inputs, the same behavior is exhibited
• Many systems' behavior depends on the state of the system, i.e. for the same input the behavior can differ
• I.e. behavior and output depend on the input as well as the system state
• The system state represents the cumulative impact of all past inputs
• State-based testing is for such systems
State-based Testing…
• A system can be modeled as a state machine
• The full state space may be too large (it is a cross product of the domains of all variables)
• The state space can be partitioned into a few states, each representing a logical state of interest of the system
• The state model is generally built from such states
State-based Testing…
• A state model has four components:
– States: logical states representing the cumulative impact of past inputs to the system
– Transitions: how the state changes in response to events
– Events: inputs to the system
– Actions: the outputs for the events
State-based Testing…
• The state model shows what transitions occur and what actions are performed
• Often the state model is built from the specifications or requirements
• The key challenge is to identify states from the specs/requirements that capture the key properties, while keeping the model small enough
State-based Testing, example…
• Let's consider an ATM system function: if the user enters an invalid password three times, the account is locked.
• In this system, if the user enters a valid password in any of the first three attempts, the user is logged in successfully. If the user enters an invalid password on the first or second try, the user is asked to re-enter the password. Finally, if the user enters an incorrect password the third time, the account is blocked.
State-based Testing, example…

State-based Testing, example…
• State Transition Table

State                 Correct PIN   Incorrect PIN
S1) Start             S2            S2
S2) 1st attempt       S5            S3
S3) 2nd attempt       S5            S4
S4) 3rd attempt       S5            S6
S5) Access Granted    –             –
S6) Account blocked   –             –
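A minimal executable model of this state machine, with transitions taken from the table above (the class and method names are illustrative, not from the slides):

    class AtmLogin:
        """Sketch of the ATM login states: retry on a wrong PIN,
        access granted on a correct one, blocked after 3 failures."""
        def __init__(self):
            self.attempts = 0
            self.state = "start"

        def enter_pin(self, pin_is_valid):
            if self.state in ("granted", "blocked"):
                raise RuntimeError("no transitions out of terminal states")
            if pin_is_valid:
                self.state = "granted"      # S5: Access Granted
            else:
                self.attempts += 1
                self.state = "blocked" if self.attempts == 3 else "retry"
            return self.state

    # State-based test cases: exercise each transition in the table.
    atm = AtmLogin()
    assert atm.enter_pin(False) == "retry"   # 1st invalid attempt
    assert atm.enter_pin(False) == "retry"   # 2nd invalid attempt
    assert atm.enter_pin(True) == "granted"  # valid PIN on the 3rd try

    atm = AtmLogin()
    for _ in range(3):
        atm.enter_pin(False)
    assert atm.state == "blocked"            # three invalid attempts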
White box testing
• Black box testing focuses only on functionality
– What the program does, not how it is implemented
• White box testing focuses on the implementation
– The aim is to exercise different program structures with the intent of uncovering errors
• It is also called structural testing
• Various criteria exist for test case design
• Test cases have to be selected to satisfy the chosen coverage criteria
Types of structural testing
• Control flow based criteria
– looks at the coverage of the control flow graph
• Data flow based testing
– looks at the coverage in the definition-use graph
• Mutation testing
– looks at various mutants of the program
• We will discuss control flow based and data flow based criteria
Control flow based criteria
• Considers the program as a control flow graph
• Nodes represent code blocks, i.e. sets of statements always executed together
• An edge (i, j) represents a possible transfer of control from i to j
• Assume a start node and an end node
• A path is a sequence of nodes from start to end
Statement Coverage Criterion
• Criterion: each statement is executed at least once during testing
• I.e. the set of paths executed during testing should include all nodes
• Limitation: does not require a decision to evaluate to false if there is no else clause
• E.g. abs(x): if (x >= 0) x = -x; return (x)
• The test set {x = 0} achieves 100% statement coverage, but the error is not detected
• Guaranteeing 100% coverage is not always possible, due to the possibility of unreachable nodes
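The abs example is worth running. Translated to Python for illustration, the test set {x = 0} executes every statement (the decision is true, so the assignment runs) yet never exposes the defect; any non-zero input does:

    def buggy_abs(x):
        # Intended: absolute value. The bug: the condition is inverted;
        # it should negate x only when x < 0.
        if x >= 0:
            x = -x
        return x

    # {x = 0}: 100% statement coverage, defect not detected.
    # x = 5 or x = -3: the failure is revealed.
    for x, expected in [(0, 0), (5, 5), (-3, 3)]:
        actual = buggy_abs(x)
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"abs({x}) = {actual}, expected {expected}: {verdict}")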
Branch coverage
• Criterion: each edge should be traversed at least once during testing
• I.e. each decision must evaluate to both true and false during testing
• Branch coverage implies statement coverage
• If there are multiple conditions in a decision, all the individual conditions need not be evaluated to true and false
Control flow based…
• There are other criteria too: path coverage, predicate coverage, cyclomatic complexity based, ...
• None is sufficient to detect all types of defects (e.g. a program missing some paths cannot be detected)
• They provide some quantitative handle on the breadth of testing
• They are used more to evaluate the level of testing than to select test cases
Data flow-based testing
• A def-use graph is constructed from the control flow graph
• A statement in the control flow graph (in which each statement is a node) can use or define a variable in these ways:
– Def: definition of a variable (i.e. the variable is on the lhs)
– C-use: computational use of a variable
– P-use: use of a variable in a predicate for control transfer
• Example:
  1. read x, y;
  2. if (x > y)
  3.     a = x + 1;
     else
  4.     a = y - 1;
  5. print a;
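Reading the def/use sets off this example, directly from the definitions above:
• Node 1: def = {x, y}
• Edges (2,3) and (2,4): p-use = {x, y}
• Node 3: def = {a}, c-use = {x}; node 4: def = {a}, c-use = {y}
• Node 5: c-use = {a}
• The path 3 → 5 is def-clear with respect to a, since no node on it redefines a.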
Data flow based…
• The def-use graph is constructed by associating variables with the nodes and edges of the control flow graph:
– For a node i, def(i) is the set of variables for which there is a global definition in i
– For a node i, c-use(i) is the set of variables for which there is a global c-use in i
– For an edge (i, j), p-use(i, j) is the set of variables for which there is a p-use on the edge (i, j)
• A path from i to j is def-clear with respect to x if there is no definition of x in the nodes along the path
Data flow based criteria
• all-defs: for every node i and every x in def(i), a def-clear path from i to some use of x must be tested
– i.e. for the definition of every variable, one of its uses (p-use or c-use) must be tested
• all-p-uses: all p-uses of all the definitions must be tested
• some-c-uses, all-c-uses, and some-p-uses are some other criteria
Relationship between different criteria
Tool support and test case selection
• Two major issues arise in using these criteria:
– how to determine the coverage
– how to select test cases to ensure coverage
• For determining coverage, tools are essential
• Tools also report which branches and statements were not executed
• Test case selection is mostly manual: the test plan is augmented based on coverage data
In a Project
• Both functional and structural testing should be used
• Test plans are usually determined using functional methods; during testing, for further rounds, more test cases can be added based on coverage
• Structural testing is useful at lower levels only; at higher levels, ensuring coverage is difficult
• Hence, use a combination of functional and structural testing at the unit level
• Use functional testing (while monitoring coverage) at higher levels
Comparison

Defect type     Code Review   Structural Testing   Functional Testing
Computational   M             H                    M
Logic           M             H                    M
I/O             H             M                    H
Data handling   H             L                    H
Interface       H             H                    M
Data defn.      M             L                    M
Database        H             M                    M

(H = high, M = medium, L = low effectiveness)
Testing Process

Testing
• Testing only reveals the presence of defects
• It does not identify the nature and location of the defects
• Identifying & removing the defect is the role of debugging and rework
• Preparing test cases, performing testing, and identifying & removing defects all consume effort
• Overall, testing becomes very expensive
Incremental Testing
• Goals of testing: detect as many defects as possible, and keep the cost low
• The two frequently conflict: more testing can catch more defects, but the cost also goes up
• Incremental testing: add untested parts incrementally to the tested portion
• Incremental testing is essential for achieving both goals:
– it helps catch more defects
– it helps in identification and removal
• Testing of large systems is always incremental
Integration and Testing
• Incremental testing requires incremental 'building', i.e. incrementally integrating parts to form the system
• Integration & testing are related
• During coding, different modules are coded separately
• Integration determines the order in which they should be tested and combined
• Integration is driven mostly by testing needs
Top-down and Bottom-up
• System: a hierarchy of modules
• Modules are coded separately
• Integration can start from the bottom or the top
• Bottom-up integration requires test drivers
• Top-down integration requires stubs
• Both may be used, e.g. top-down for user interfaces, bottom-up for services
• Drivers and stubs are code pieces written only for testing
Levels of Testing
• The code contains requirement defects, design defects, and coding defects
• The nature of the defects differs by injection stage
• One type of testing will be unable to detect all the different types of defects
• Hence different levels of testing are used
User needs                → Acceptance testing
Requirement specification → System testing
Design                    → Integration testing
Code                      → Unit testing
Unit Testing
• Different modules are tested separately
• Focus: defects injected during coding
• Essentially a code verification technique, covered in the previous chapter
• UT is closely associated with coding
• Frequently the programmer does UT; the coding phase is sometimes called "coding and unit testing"
Integration Testing
• Focuses on the interaction of modules in a subsystem
• Unit-tested modules are combined to form subsystems
• Test cases "exercise" the interaction of modules in different ways
• May be skipped if the system is not too large
System Testing
• The entire software system is tested
• Focus: does the software implement the requirements?
• A validation exercise for the system with respect to the requirements
• Generally the final testing stage before the software is delivered
• May be done by independent people
• Defects are removed by the developers
• The most time-consuming test phase
Acceptance Testing
• Focus: does the software satisfy user needs?
• Generally done by end users/customer in the customer environment, with real data
• The software is deployed only after successful AT
• Any defects found are removed by the developers
• The acceptance test plan is based on the acceptance test criteria in the SRS
Other forms of testing
• Performance testing
– Performance testing gauges how well a program operates under typical operating circumstances. Its objectives are to find any performance-related problems and to confirm that the application can withstand the anticipated usage levels.
– Performance parameters like response time, throughput, and resource utilisation can all be measured using performance testing. It aims to assess how well a web application works under various loads and how quickly it responds to user requests.
Other forms of testing
• Methods of Performance testing
– Load testing: simulating the actual user load on an application or website. It examines the behaviour of the program under both light and heavy loads.
– Endurance testing: a kind of non-functional testing carried out to see whether the software system can withstand a heavy load sustained over a long duration.
– Volume testing: a type of software testing in which the software is subjected to a huge volume of data.
Other forms of testing
• Methods of Performance testing
– Scalability testing: a non-functional testing technique that assesses how well a system or network performs when the volume of user queries is scaled up or down.
– Spike testing: a type of performance testing in which an application receives a sudden and extreme increase or decrease in load.
– Stress testing: intentionally rigorous or intense testing. It entails testing past the point at which a system would normally break, in order to observe the outcome.
Other forms of testing
• Popular tools for performance testing: Apache JMeter, LoadRunner, Gatling, BlazeMeter
• Regression testing
– tests that previous functionality still works correctly
– important when changes are made
– previous test records are needed for comparison
– prioritization of test cases is needed when the complete test suite cannot be executed for a change
Test Plan
• Testing usually starts with a test plan and ends with acceptance testing
• The test plan is a general document that defines the scope of and approach to testing for the whole project
• Inputs are the SRS, the project plan, and the design
• The test plan identifies what levels of testing will be done, what units will be tested, etc. in the project
Test Plan…
• A test plan usually contains:
– Test unit specs: which units need to be tested separately
– Features to be tested: these may include functionality, performance, usability, …
– Approach: criteria to be used, when to stop, how to evaluate, etc.
– Test deliverables
– Schedule and task allocation
Test case specifications
• The test plan focuses on approach; it does not deal with the details of testing a unit
• Test case specification has to be done separately for each unit
• Based on the plan (approach, features, ...), test cases are determined for a unit
• The expected outcome also needs to be specified for each test case
Test case specifications…
• Together, the set of test cases should detect most of the defects
• We would like the set of test cases to detect any defect, if it exists
• We would also like the set of test cases to be small, since each test case consumes effort
• Determining a reasonable set of test cases is the most challenging task of testing
Test case specifications…
• The effectiveness and cost of testing depend on the set of test cases
• Q: How do we determine whether a set of test cases is good? I.e. that the set will detect most of the defects, and that a smaller set could not catch the same defects
• There is no easy way to determine goodness; usually the set of test cases is reviewed by experts
• This requires test cases to be specified before testing: a key reason for having test case specifications
• Test case specs are essentially a table
Test case specifications…

Seq. No | Condition to be tested | Test data | Expected result | Successful?
Test case specifications…
• So for each testing, test case specs are developed, reviewed, and executed
• Preparing test case specifications is challenging and time consuming
– Test case criteria can be used
– Special cases and scenarios may be used
• Once specified, the execution and checking of outputs may be automated through scripts
– desirable if repeated testing is needed
– regularly done in large projects
Test case execution and analysis
• Executing test cases may require drivers or stubs to be written; some tests can be automated, others are manual
• A separate test procedure document may be prepared
• A test summary report is often an output: it gives a summary of the test cases executed, effort, defects found, etc.
• Monitoring of the testing effort is important, to ensure that sufficient time is spent
• Computer time is also an indicator of how testing is proceeding
Defect logging and tracking
• A large software product may have thousands of defects, found by many different people
• Often the person who fixes a defect (usually the coder) is different from the person who finds it
• Due to the large scope, reporting and fixing of defects cannot be done informally
• Defects found are usually logged in a defect tracking system and then tracked to closure
• Defect logging and tracking is one of the best practices in industry
Defect logging…
• A defect in a software project has a life cycle of its own, for example:
– found by someone at some time, and logged along with information about it (submitted)
– the job of fixing it is assigned; a person debugs and then fixes it (fixed)
– the manager or the submitter verifies that the defect is indeed fixed (closed)
• More elaborate life cycles are possible
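A minimal sketch of such a defect record and its submitted → fixed → closed life cycle (the field names are illustrative, not from any particular tracking system):

    from dataclasses import dataclass, field

    TRANSITIONS = {"submitted": "fixed", "fixed": "closed"}

    @dataclass
    class Defect:
        defect_id: int
        summary: str
        severity: str                  # e.g. critical/major/minor/cosmetic
        status: str = "submitted"
        history: list = field(default_factory=list)

        def advance(self, by):
            # Move to the next life-cycle state, recording who moved it.
            self.history.append((self.status, by))
            self.status = TRANSITIONS[self.status]

    d = Defect(101, "crash on empty input file", "major")
    d.advance(by="developer")    # submitted -> fixed
    d.advance(by="submitter")    # fixed -> closed
    assert d.status == "closed"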
Defect logging…

Defect logging…
• During the life cycle, information about the defect is logged at different stages, to help debugging as well as analysis
• Defects are generally categorized into a few types, and the type of each defect is recorded
– ODC (Orthogonal Defect Classification) is one such classification
– Some standard categories: logic, standards, UI, interface, performance, etc.
ODC- Orthogonal Defect
Classification

Defect logging…
• The severity of a defect, in terms of its impact on the software, is also recorded
• Severity is useful for prioritizing fixes
• One categorization:
– Critical: show stopper
– Major: has a large impact
– Minor: an isolated defect
– Cosmetic: no impact on functionality
Defect logging and tracking…
• Ideally, all defects should be closed
• Sometimes organizations release software with known defects (hopefully of lower severity only)
• Organizations have standards for when a product may be released
• The defect log may be used to track the trend of defect arrival and fixing
Defect arrival and closure
trend

Defect analysis for prevention
• Quality control focuses on removing defects
• The goal of defect prevention is to reduce the defect injection rate in the future
• Defect prevention is done by analyzing the defect log, identifying causes, and then removing them
• It is an advanced practice, done only in mature organizations
• It finally results in actions to be undertaken by individuals to reduce defects in the future
Metrics - Defect removal efficiency
• Definition: The defect removal efficiency (DRE) gives a measure of the development team's ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found. It is typically measured prior to and at the moment of release.
• The basic objective of testing is to identify the defects present in the programs
• Testing is good only if it succeeds in this goal
• DRE = number of defects resolved by the development team / total number of defects known at the moment of measurement
• A high DRE for a quality control activity means most defects present at the time will be removed
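A small worked example with hypothetical numbers: if the team finds and resolves 180 defects before release, and 20 more defects surface in the field afterwards, then DRE = 180 / (180 + 20) = 0.90, i.e. pre-release quality control removed 90% of the defects present.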
Defect removal efficiency
• DRE for a project can be evaluated only when all defects are known, including delivered defects
• Delivered defects are approximated as the number of defects found during some duration after delivery
• The injection stage of a defect is the stage in which it was introduced into the software; the detection stage is when it was detected
• These stages are typically logged for defects
• With the injection and detection stages of all defects known, the DRE for each QC activity can be computed
Defect Removal Efficiency
• The DREs of different QC activities are a process property, determined from past data
• Past DRE can be used as the expected value for the current project
• The process followed by the project must be improved for better DRE
Metrics – Reliability Estimation
• High reliability is an important goal to be achieved by testing
• Reliability is usually quantified as a probability or a failure rate
• For a system in operation, it can be measured by counting failures over a period of time
• Such measurement is often not possible for software: reliability changes as fixes are made, and for one-off software it cannot be measured this way
Reliability Estimation…
• Software reliability estimation models are used to model the failure-followed-by-fix behavior of software
• Data about failures and their times during the last stages of testing is used by these models
• The models then use this data and statistical techniques to predict the reliability of the software
• Probability of failure = number of failing cases / total number of cases under consideration
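For instance, with hypothetical numbers: if 4 out of 800 cases under consideration fail, the estimated probability of failure is 4 / 800 = 0.005.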
Summary
• Testing plays a critical role in removing defects and in generating confidence
• Testing should be such that it catches most of the defects present, i.e. it has a high DRE
• Multiple levels of testing are needed for this
• Incremental testing also helps
Summary …
• Deciding test cases during planning is the most important aspect of testing
• Two approaches: black box and white box
• Black box testing: test cases derived from the specifications
– equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing
• White box: the aim is to cover code structures
– statement coverage, branch coverage
Summary…
• In a project, both are used at lower levels
– test cases are initially driven by functional methods
– coverage is measured, and test cases are enhanced using coverage data
• At higher levels, mostly functional testing is done; coverage is monitored to evaluate the quality of testing
• Defect data is logged, and defects are tracked to closure
• The defect data can be used to estimate reliability and DRE
