Software - Testing

The document discusses software testing strategies, including black-box and white-box testing, different types of software faults and failures, and reasons software failures may occur.

Testing can only show the presence of bugs, not their absence.

E. Dijkstra
BUGS
Errors of all kinds are known as “bugs”.

Bugs come in two main types:

– compile-time (e.g., syntax errors)


which are cheap to fix

– run-time (usually logical errors)


which are expensive to fix.
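As an illustration (not from the original slides), a small Python sketch of the second, expensive kind: a run-time logic error that the compiler/parser cannot catch. A compile-time error, by contrast, would be rejected before the program ever runs (e.g. a missing parenthesis in the `def` line).

```python
def average_buggy(values):
    # Run-time (logic) bug: divides by one too many, so the program runs
    # without complaint but silently produces the wrong answer.
    return sum(values) / (len(values) + 1)

def average_fixed(values):
    # Corrected logic: divide by the actual number of values.
    return sum(values) / len(values)
```

The buggy version is only discovered by testing with known expected results, which is exactly why run-time errors are costlier to find and fix.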
Testing Strategies
It is never possible for the designer to anticipate every possible use of a system.

Systematic testing is therefore essential.

Offline strategies:

1. syntax checking;

2. walkthroughs (“dry runs”);

3. inspections.

Online strategies:

1. black box testing;

2. white box testing.


Software Faults and Failures
Software Fault

A human error that results in a fault in some software product.

Software Failure

A departure from the system’s required behavior.

Failures can be discovered both before and after system delivery, because they
can occur in testing as well as during the system operation.
Software Faults and Failures
Faults and failures provide inside and outside views of the system

faults represent problems that developers see

failures are problems that users see


Reasons for having software failures

The specification may be wrong or have missing requirements.

The specification may contain a requirement that is impossible to implement.

The system design may contain a fault.

The program implementation may be wrong.


White-box testing (also called glass-box testing)
Test case design method that uses the control structure of the procedural design to
derive test cases.

Using white-box testing methods, we derive test cases that

- Check that all independent paths within a module have been exercised at
least once.

- Exercise all logical decisions on their true and false sides.

- Execute all loops at their boundaries and within their operational bounds.

- Exercise internal data structures to assure their validity.
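A minimal sketch of these goals (the functions and values are hypothetical, chosen only for illustration): test cases derived from the control structure exercise both sides of each decision and run the loop at its boundaries.

```python
def classify(n):
    """Two decisions, three independent paths."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

def sum_first(values, k):
    """Sum the first k values; the loop must be tested at its bounds."""
    total = 0
    for i in range(k):
        total += values[i]
    return total

# Decisions exercised on both true and false sides:
for value, expected in [(-1, "negative"), (0, "zero"), (1, "positive")]:
    assert classify(value) == expected

# Loop boundaries: zero iterations, one iteration, maximum iterations:
assert sum_first([1, 2, 3], 0) == 0
assert sum_first([1, 2, 3], 1) == 1
assert sum_first([1, 2, 3], 3) == 6
```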


Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.

Defines the number of independent paths in the basis set of a program.


Cyclomatic Complexity
Three ways to compute cyclomatic complexity:

- The number of regions of the flow graph corresponds to the cyclomatic complexity.

- Cyclomatic complexity, V(G), for a flow graph G is defined as


V(G) = E - N +2

where E is the number of flow graph edges and N is the number of flow graph
nodes.

- Cyclomatic complexity, V(G) = P + 1


where P is the number of predicate nodes contained in the flow graph G.
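The formulas can be checked against a small example. Below is a hypothetical flow graph for a single if/else, encoded as an edge list; the two computed formulas agree, and the region count of the planar graph is the same value (one bounded region plus the outer region, i.e. 2).

```python
# Hypothetical flow graph for one if/else.
# Nodes: 1 = decision, 2 = then-branch, 3 = else-branch, 4 = join.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {n for edge in edges for n in edge}
predicate_nodes = {1}  # decision nodes (more than one outgoing edge)

E, N, P = len(edges), len(nodes), len(predicate_nodes)

v_by_edges = E - N + 2   # V(G) = E - N + 2  ->  4 - 4 + 2 = 2
v_by_preds = P + 1       # V(G) = P + 1      ->  1 + 1 = 2

assert v_by_edges == v_by_preds == 2  # two independent paths in the basis set
```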
What is black box testing?
Also known as specification-based testing.

The major testing focuses:


- specification-based function errors

- specification-based component/system behavior errors

- specification-based performance errors

- user-oriented usage errors

- black box interface errors


What do you need?
Software components, subsystems, or systems

For software components: component specification, user interface document

For a software subsystem: requirements specification and product specification document.

You also need:


- Specification-based software testing methods
- Specification-based software testing criteria

- a good understanding of the software components (or system)


All tests should be traceable to customer requirements.
Software Testing Principles
- Tests should be planned long before testing begins.

- The Pareto principle applies to software testing.


- 80% of all errors uncovered during testing will likely be
traceable to 20% of all program modules.

- Testing should begin “in the small” and progress toward testing “in the large”.

- Exhaustive testing is not possible.

- To be most effective, testing should be conducted by an independent third party.


Testing Process

Testing
• Testing only reveals the presence of defects

• Does not identify nature and location of defects

• Identifying & removing the defect => role of debugging and rework

• Preparing test cases, performing testing, and defect identification & removal all consume effort

• Overall, testing becomes very expensive: 30-50% of development cost

Detecting defects in Testing
• During testing, the software under test (SUT) is executed with a set of test cases
• Failure during testing => defects are present

• No failure => confidence grows, but we cannot say “defects are absent”

• To detect defects, must cause failures during testing

Test Oracle
• To check if a failure has occurred when executed with a test case, we need
to know the correct behavior
• I.e. need a test oracle, which is often a human

• Human oracle makes each test case expensive as someone has to check the
correctness of its output
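A sketch of cutting that cost by recording the oracle once: expected outputs are captured in a table and compared mechanically instead of by a person each run. The function and values below are illustrative, not from the slides.

```python
def sqrt_int(n):
    """Integer square root by simple upward search (module under test)."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Recorded oracle: test input -> expected output (checked once by a human).
oracle = {0: 0, 1: 1, 8: 2, 9: 3, 10: 3}

failures = [n for n, expected in oracle.items() if sqrt_int(n) != expected]
assert failures == []  # no failure observed; defects may still be present
```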

Testing…

• Multiple levels of testing are done in a project

• At each level, for each SUT, test cases have to be designed and then
executed
• Overall, testing is very complex in a project and has to be done well

• Testing process at a high level has: test planning, test case design, and
test execution

Test Plan

• Testing usually starts with test plan and ends with acceptance testing

• Test plan is a general document that defines the scope and approach
for testing for the whole project
• Inputs are SRS, project plan, design

• Test plan identifies what levels of testing will be done, what units will
be tested, etc in the project

Test Plan…

• Test plan usually contains


– Test unit specs: what units need to be tested separately

– Features to be tested: these may include functionality, performance, usability, …
– Approach: criteria to be used, when to stop, how to evaluate, etc

– Test deliverables

– Schedule and task allocation

Test case Design

• Test plan focuses on testing a project; does not focus on details of testing a SUT
• Test case design has to be done separately for each SUT

• Based on the plan (approach, features,..) test cases are determined for
a unit
• Expected outcome also needs to be specified for each test case

Test case design…

• Together the set of test cases should detect most of the defects

• Would like the set of test cases to detect any defect, if one exists

• Would also like set of test cases to be small - each test case consumes
effort
• Determining a reasonable set of test cases is the most challenging task of testing

Test case design

• The effectiveness and cost of testing depends on the set of test cases

• Q: How to determine if a set of test cases is good? I.e. the set will
detect most of the defects, and a smaller set cannot catch these defects
• No easy way to determine goodness; usually the set of test cases is
reviewed by experts
• This requires test cases be specified before testing – a key reason for
having test case specs
• Test case specs are essentially a table

Test case specifications

Seq. No | Condition to be tested | Test data | Expected result | Successful
Test case execution
• Executing test cases may require drivers or stubs to be written; some tests can be automated, others are manual
• Test summary report is often an output – gives a summary of test cases
executed, effort, defects found, etc
• Monitoring of testing effort is important to ensure that sufficient time is
spent
• Computer time also is an indicator of how testing is proceeding

Defect logging and tracking

• A large software may have thousands of defects, found by many different people
• Often the person who fixes a defect (usually the coder) is different from the one who finds it
• Due to the large scope, reporting and fixing of defects cannot be done informally
• Defects found are usually logged in a defect tracking system and then tracked to closure
• Defect logging and tracking is one of the best practices in industry

Defect logging…

Severity of defects, in terms of their impact on the software, is also recorded

Severity useful for prioritization of fixing

One categorization
◦ Critical: Show stopper

◦ Major: Has a large impact

◦ Minor: An isolated defect

◦ Cosmetic: No impact on functionality


Defect logging…

• Ideally, all defects should be closed

• Sometimes, organizations release software with known defects (hopefully of lower severity only)
• Organizations have standards for when a product may be released

• Defect log may be used to track the trend of how defect arrival and
fixing is happening

BLACK BOX TESTING
Equivalence Class Partitioning
A black-box testing method:
- divide the input domain of a program into classes of data
- derive test cases based on these partitions.

An equivalence class represents a set of valid or invalid states for an input condition.

An input condition is:


- a specific numeric value, a range of values
- a set of related values, or a Boolean condition
Equivalence Classes
Equivalence classes can be defined using the following guidelines:
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

- If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.

- If an input condition specifies a member of a set, one valid and one invalid
equivalence classes are defined.

- If an input condition is Boolean, one valid and one invalid classes are
defined.
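A minimal sketch of the first guideline (the age range 0-120 and the function name are hypothetical): a range input condition yields one valid and two invalid equivalence classes, and one representative test value is drawn from each.

```python
def is_valid_age(age):
    """Input condition: age must lie in the range 0..120 (illustrative)."""
    return 0 <= age <= 120

# One representative test case per equivalence class:
cases = {
    "valid (inside range)":  (35, True),
    "invalid (below range)": (-5, False),
    "invalid (above range)": (200, False),
}

for name, (value, expected) in cases.items():
    assert is_valid_age(value) == expected, name
```

Any value in a class is assumed to behave like any other value in that class, which is what lets three test cases stand in for the whole input domain.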
Boundary Value Analysis
A test case design technique that complements equivalence partitioning.

Objective:
Boundary value analysis leads to a selection of test cases that exercise
bounding values.

Guidelines:
- If an input condition specifies a range bounded by values a and b,
test cases should be designed with values a and b, and just above and below a and b.

Example: Integer D with input condition [-3, 10],


test values: -3, 10, 11, -2, 0
Boundary Value Analysis
- If an input condition specifies a number of values,
test cases should be developed to exercise the minimum and maximum numbers.

Values just above and below minimum and maximum are also tested.

Example: Enumerated data E with input condition: {3, 5, 100, 102}


test values: 3, 102, -1, 200, 5
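A sketch of the range guideline for a hypothetical range [1, 100] (the bounds and function are illustrative): boundary value analysis selects the bounds themselves plus the values just below and just above each bound.

```python
A, B = 1, 100  # hypothetical range bounds

def in_range(x):
    """Input condition under test: x must lie in [A, B]."""
    return A <= x <= B

# Boundary values: a, b, and the neighbours just inside/outside each bound.
bva_values = [A - 1, A, A + 1, B - 1, B, B + 1]

results = {x: in_range(x) for x in bva_values}
assert results[A] and results[B]                   # bounds accepted
assert not results[A - 1] and not results[B + 1]   # just outside rejected
```

Off-by-one errors (e.g. writing `<` instead of `<=`) are exactly the defects these six values are designed to expose.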
Testing Strategy
(figure: levels of testing, from unit test through integration test and system test to validation test, with each testing level verifying an earlier development artifact)

User needs: acceptance testing
Requirement specification: system testing
Design: integration testing
Code: unit testing
Unit Testing
(figure: the software engineer applies test cases to the module to be tested and examines the results)

Aspects of the module exercised by unit-test cases:
- interface
- local data structures
- boundary conditions
- independent paths
- error handling paths
Unit Test Environment
(figure: a driver applies the test cases to the module; stubs stand in for the modules it calls; results are collected from the driver)
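A minimal sketch of such an environment (all names and values hypothetical): a driver feeds test cases to the module under test, while a stub replaces a lower-level module that is not yet available.

```python
def lookup_rate_stub(region):
    """Stub: returns a canned answer instead of calling the real rate module."""
    return 0.10  # fixed 10% rate for every region

def compute_price(amount, region, lookup_rate):
    """Module under test: price = amount plus a region-dependent tax."""
    return amount * (1 + lookup_rate(region))

def driver():
    """Driver: applies the test cases and checks results against expectations."""
    cases = [(100.0, "north", 110.0), (0.0, "south", 0.0)]
    return all(
        abs(compute_price(amount, region, lookup_rate_stub) - expected) < 1e-9
        for amount, region, expected in cases
    )

assert driver()
```

When the real rate module becomes available, the stub is swapped out and the same driver re-runs the cases.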
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy
Top Down Integration
(figure: module hierarchy with A on top; B, F, G below it; then C; then D, E)

- the top module is tested with stubs
- stubs are replaced one at a time, "depth first"
- as new modules are integrated, some subset of tests is re-run
Bottom-Up Integration
(figure: the same module hierarchy; D and E form a cluster)

- worker modules are grouped into builds and integrated
- drivers are replaced one at a time, "depth first"
Sandwich Testing
(figure: the same module hierarchy; D and E form a cluster)

- top modules are tested with stubs
- worker modules are grouped into builds and integrated
High Order Testing

validation test

system test

alpha and beta test

other specialized testing


Other forms of testing
• Performance testing
– tools needed to “measure” performance

• Stress testing
– load the system to peak, load generation tools needed

• Regression testing
– test that previously working functionality still works correctly

– important when changes are made

– Previous test records are needed for comparisons
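A minimal sketch of regression testing (the function and recorded cases are illustrative): test records kept from the previous release are replayed after a change, and any mismatch against the recorded expected output is flagged.

```python
def discount(total):
    """Current version: a flat discount of 5 on orders of 100 or more."""
    return total - 5 if total >= 100 else total

# Test records kept from the previous release: (input, expected output).
regression_records = [(50, 50), (100, 95), (200, 195)]

# After a change, re-run every recorded case and compare:
regressions = [
    (inp, expected, discount(inp))
    for inp, expected in regression_records
    if discount(inp) != expected
]
assert regressions == []  # previous functionality still works
```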

In a Project
• Both functional and structural testing should be used

• Test plans are usually determined using functional methods; during testing, for further rounds, based on the coverage, more test cases can be added
• Structural testing is useful at lower levels only; at higher levels ensuring
coverage is difficult
• Hence, a combination of functional and structural at unit testing

• Functional testing (with monitoring of coverage) at higher levels

Debugging: A Diagnostic Process

The Debugging Process
(figure: test cases produce results; debugging relates the results to suspected causes, which are narrowed down to identified causes; corrections are applied, regression tests are run, and new test cases are added)
Debugging Effort
(figure: debugging effort splits into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests)
Symptoms & Causes
- symptom and cause may be geographically separated
- symptom may disappear when another problem is fixed
- cause may be due to a combination of non-errors
- cause may be due to a system or compiler error
Consequences of Bugs
(figure: damage scale, from least to most severe: mild, annoying, disturbing, serious, extreme, catastrophic, infectious)
Bug Type
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
