Software - Testing
“Program testing can be used to show the presence of bugs, but never to show their absence!” — E. Dijkstra
BUGS
Errors of all kinds are known as “bugs”.
Offline strategies:
1. Syntax checking
2. Inspections
Online strategies:
Software Failure
Failures can be discovered both before and after system delivery, since they
can occur during testing as well as during system operation.
Software Faults and Failures
Faults provide the inside (developer's) view of the system; failures provide the outside (user's) view
- Check that all independent paths within a module have been exercised at
least once.
- Exercise all logical decisions on their true and false sides.
- Execute all loops at their boundaries and within their operational bounds.
Cyclomatic complexity: V(G) = E - N + 2,
where E is the number of flow graph edges and N is the number of flow graph
nodes.
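The formula can be sketched on a small, hypothetical flow graph (the module and node names below are illustrative, not from the slides):

```python
# Sketch: computing cyclomatic complexity V(G) = E - N + 2
# for the flow graph of a hypothetical module containing
# one if/else decision and one while loop.

def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2, where E = #edges and N = #nodes."""
    return len(edges) - len(nodes) + 2

# Flow graph of: if (a) {...} else {...}; while (b) {...}
nodes = ["entry", "if", "then", "else", "join", "while", "body", "exit"]
edges = [
    ("entry", "if"),
    ("if", "then"), ("if", "else"),        # decision: two outgoing edges
    ("then", "join"), ("else", "join"),
    ("join", "while"),
    ("while", "body"), ("body", "while"),  # loop back edge
    ("while", "exit"),
]

print(cyclomatic_complexity(edges, nodes))  # 3 independent paths
```

V(G) here equals 3, which matches the intuition of one decision plus one loop plus one: three independent paths must be exercised at least once.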
- Testing should begin “in the small” and progress toward testing “in the large”.
Testing
• Testing only reveals the presence of defects
• Identifying & removing the defect => role of debugging and rework
Detecting defects in Testing
• During testing, software under test (SUT) executed with set of test cases
• No failure => confidence grows, but we cannot say “defects are absent”
Test Oracle
• To check if a failure has occurred when executed with a test case, we need
to know the correct behavior
• I.e., we need a test oracle, which is often a human
• Human oracle makes each test case expensive as someone has to check the
correctness of its output
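One common way to avoid the cost of a human oracle is to automate the check, for example by comparing the software under test against a trusted reference implementation. A minimal sketch, with hypothetical function names:

```python
# Sketch: an automated test oracle. A slow-but-trusted reference
# implementation acts as the oracle for the software under test (SUT).

def sut_sort(xs):
    """Software under test (hypothetical): the fast implementation."""
    return sorted(xs)

def oracle_sort(xs):
    """Oracle: a naive selection sort we trust to be correct."""
    result = list(xs)
    for i in range(len(result)):
        m = min(range(i, len(result)), key=result.__getitem__)
        result[i], result[m] = result[m], result[i]
    return result

def run_test_case(test_input):
    """A failure has occurred iff SUT and oracle disagree."""
    return sut_sort(test_input) == oracle_sort(test_input)

print(run_test_case([3, 1, 2]))  # True => no failure observed
```

The oracle need not be efficient, only correct; it exists solely to decide pass/fail for each test case.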
Testing…
• At each level, for each SUT, test cases have to be designed and then
executed
• Overall, testing is very complex in a project and has to be done well
• Testing process at a high level has: test planning, test case design, and
test execution
Test Plan
• Testing usually starts with test plan and ends with acceptance testing
• Test plan is a general document that defines the scope and approach
for testing for the whole project
• Inputs are SRS, project plan, design
• Test plan identifies what levels of testing will be done, what units will
be tested, etc., in the project
Test Plan…
– Test deliverables
Test case Design
• Based on the plan (approach, features, etc.), test cases are determined for
a unit
• Expected outcome also needs to be specified for each test case
Test case design…
• Together the set of test cases should detect most of the defects
• Would like the set of test cases to detect any defect, if one exists
• Would also like set of test cases to be small - each test case consumes
effort
• Determining a reasonable set of test cases is the most challenging task
of testing
Test case design
• The effectiveness and cost of testing depends on the set of test cases
• Q: How to determine if a set of test cases is good? I.e., will the set
detect most of the defects, and is there no smaller set that catches the same defects?
• No easy way to determine goodness; usually the set of test cases is
reviewed by experts
• This requires test cases be specified before testing – a key reason for
having test case specs
• Test case specs are essentially a table
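Such a table can be kept as simple structured records. The columns below (id, condition being tested, input, expected output) are typical but illustrative; the exact fields are a project choice:

```python
# Sketch: test case specifications as a table of records.
# Column names and the age-validation scenario are illustrative.
test_case_specs = [
    {"id": "TC-1", "condition": "valid age in range",
     "input": 25,  "expected": "accepted"},
    {"id": "TC-2", "condition": "age below minimum",
     "input": -1,  "expected": "rejected"},
    {"id": "TC-3", "condition": "age above maximum",
     "input": 200, "expected": "rejected"},
]

# Printing the table gives reviewers a compact view of planned tests.
for tc in test_case_specs:
    print(f'{tc["id"]}: {tc["condition"]} -> expect {tc["expected"]}')
```

Because each row names its condition and expected outcome before execution, experts can review the set for completeness, which is the key reason for writing specs first.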
Test case specifications
Test case execution
• Executing test cases may require drivers or stubs to be written; some tests
can be automated, others are manual
• Test summary report is often an output – gives a summary of test cases
executed, effort, defects found, etc
• Monitoring of testing effort is important to ensure that sufficient time is
spent
• Computer time also is an indicator of how testing is proceeding
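A driver and a stub can be sketched with Python's `unittest`; the invoice/tax-rate scenario and all names below are hypothetical:

```python
# Sketch: a test driver exercising a unit, with a stub standing in
# for a dependency that is not yet available.
import unittest

def compute_invoice(order_total, get_tax_rate):
    """Unit under test: depends on an external tax-rate service."""
    return round(order_total * (1 + get_tax_rate()), 2)

def stub_tax_rate():
    """Stub: returns a canned value instead of calling the real service."""
    return 0.10

class InvoiceDriver(unittest.TestCase):
    """Driver: supplies test cases to the unit and checks results."""
    def test_tax_applied(self):
        self.assertEqual(compute_invoice(100.0, stub_tax_rate), 110.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(InvoiceDriver)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

The driver runs the tests and reports results; the stub keeps the unit testable before its collaborators exist.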
Defect logging and tracking
Defect logging…
• Severity of defects in terms of their impact on the software is also recorded
• One categorization:
◦ Critical: show stopper
• Defect log may be used to track the trend of how defect arrival and
fixing is happening
BLACK BOX TESTING
Equivalence Class Partitioning
A black-box testing method:
- Divide the input domain of a program into classes of data.
- Derive test cases based on these partitions.
- If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are
defined.
Boundary Value Analysis
A test case design technique that complements equivalence partitioning.
Objective:
Boundary value analysis leads to a selection of test cases that exercise
bounding values.
Guidelines:
- If an input condition specifies a range bounded by values a and b,
test cases should be designed with values a and b, and with values just above and below a and b.
- Values just above and below the minimum and maximum are also tested.
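For a range bounded by a and b, the boundary values named in the guideline can be generated mechanically; the range 18..65 below is again an illustrative choice:

```python
# Sketch: boundary value analysis for an integer input range [a, b].

def boundary_values(a, b):
    """Test values at, just below, and just above each boundary of [a, b]."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```

These six values target the off-by-one errors that cluster at range boundaries, which equivalence partitioning alone may miss.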
[Figure: levels of testing: user needs are checked by acceptance testing, the system by system (validation) testing, and the code by unit testing]
Unit Testing
[Figure: the software engineer applies test cases to the module to be tested and examines the results]
Unit test cases exercise the module's:
- interface
- local data structures
- boundary conditions
- independent paths
- error handling paths
Unit Test Environment
[Figure: a driver applies test cases to the module through its interface; stubs replace the modules it calls; results are collected for checking]
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy
Top Down Integration
[Figure: the top module A is tested with stubs for its subordinate modules B, F, G; stubs are then replaced by the real modules, which are grouped into clusters as integration proceeds]
Sandwich Testing
[Figure: the top modules are tested with stubs, while lower-level modules B, F, G are combined into clusters and tested bottom-up]
High Order Testing
• Validation testing
• System testing
• Stress testing
– load the system to peak, load generation tools needed
• Regression testing
– test that previous functionality works alright
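Regression testing is essentially re-running an accumulated suite of previously passing test cases after each change; a minimal sketch with a hypothetical unit and cases:

```python
# Sketch: a tiny regression suite. Test cases from earlier releases
# are kept and re-run after every change to the unit.

def discount(price, percent):
    """Hypothetical unit that was recently modified."""
    return round(price * (1 - percent / 100), 2)

# Accumulated test cases: (arguments -> expected output).
regression_suite = [
    ((100.0, 10), 90.0),
    ((50.0, 0), 50.0),
    ((80.0, 25), 60.0),
]

failures = [(args, want) for args, want in regression_suite
            if discount(*args) != want]
print(f"{len(regression_suite) - len(failures)} passed, {len(failures)} failed")
```

Any non-empty `failures` list signals that the change broke previously working functionality.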
In a Project
• Both functional (black-box) and structural (white-box) testing should be used
Debugging: A Diagnostic Process
[Figure: the debugging process: executing test cases reveals a symptom; debugging traces the symptom back to its cause]
Consequences of Bugs
[Figure: bug damage ranges from mild, annoying, disturbing, serious, extreme, and catastrophic to infectious]
Bug Categories:
function-related bugs, system-related bugs, data bugs, coding bugs,
design bugs, documentation bugs, standards violations, etc.