Chap-16 Testing Types

Software Testing Techniques
Introduction
• Many aspects to achieving software quality
  – Formal reviews (of both the software process and the various stages of development), audits, documentation, etc.
  – Unit testing
  – Integration testing
  – Verification
    • Does the module meet the specifications?
  – Validation
    • Does the product meet the requirements?
Introduction
• A critical element of software quality assurance
• Represents a critical review of specifications, design, and coding
• Destructive rather than constructive (try to break the system)
• The major objective is to find errors, not to show the absence of errors (as distinct from verification and validation)
Objectives
• A good test case is one that has a high probability of finding an as-yet undiscovered error
• A successful test is one that uncovers an as-yet undiscovered error
Principles
• All tests should be traceable to customer requirements
• Tests should be planned long before testing begins
• The Pareto principle applies to testing
  – Typically, 80% of the errors come from 20% of the modules
• Testing should begin “in the small” and progress towards “in the large”
• Exhaustive testing is not possible, but
  – if time permits, conduct multiple failure mode testing
• Test plans must receive independent review
Testability
• The ease with which a computer program can be tested
Characteristics for Testability
• Operability
  – the better it works, the more efficiently it can be tested
    • The system has few bugs
    • No bugs block the execution of tests
    • The product evolves in functional stages
Characteristics for Testability
• Observability
  – what you see is what you test
    • Distinct output for each input
    • System states and variables are visible during execution
    • Past system states and variables are visible
    • All factors affecting the output are visible
    • Incorrect output is easily identified
    • Internal errors are automatically detected and reported
Characteristics for Testability
• Controllability
  – the better we can control the software, the more testing can be automated
    • All possible outputs can be generated through some combination of input
    • All code is executable through some combination of input
    • Input and output formats are consistent and structured
    • All sequences of task interaction can be generated
    • Tests can be conveniently specified and reproduced
Characteristics for Testability
• Decomposability
  – by controlling the scope of testing, we can isolate problems and perform smarter retesting
    • The software system is built from independent modules
    • Software modules can be tested independently
  – While this is very important, it does not obviate the need for integration testing
Characteristics for Testability
• Simplicity
  – the less there is to test, the more quickly we can test it
    • Functional simplicity
    • Structural simplicity
    • Code simplicity
Characteristics for Testability
• Stability
  – the fewer the changes, the fewer disruptions to testing
    • Changes are infrequent
    • Changes are controlled
    • Changes do not invalidate existing tests
    • The software recovers well from failures
Characteristics for Testability
• Understandability
  – the more information we have, the smarter we will test
    • The design is well understood
    • Dependencies between internal, external, and shared components are well understood
    • Changes to the design are well communicated
    • Technical documentation is instantly accessible, well-organized, specific, and accurate
Types of Testing
• White-Box Testing
  – Knowing the internal workings of a product, tests are conducted to ensure that “all gears mesh”
• Black-Box Testing
  – Knowing the specified function that a product has been designed to perform, tests are conducted to demonstrate that each function is fully operational (note: this is still different from validation)
White Box Testing
• Uses the control structure of the procedural design to derive test cases
• Guarantees that all independent paths within a module have been exercised at least once
• Exercises all loops at their boundaries and within their operational bounds
• Exercises internal data structures to assure their validity, again at their boundaries and within their operational bounds
Control Structure Testing
• Attacks the control flow of the program
• Provides us with a logical complexity measure of a procedural design
• Uses this measure as a guide for defining a basis set of execution paths
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once
Basis Path Testing
• Basis path testing is a white-box testing technique first proposed by Tom McCabe.
• The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Basis Path Testing (contd.)
• Flow graph notation (a simple notation for representing control flow)
  – represents the control flow of the program
  – each node in the graph represents one or more procedural statements
  – any procedural design representation can be translated into a flow graph
Flow Graph Notation
(Figure: flow graph notation symbols.)
Basis Path Testing (contd.)
• Flow Graph
(Figure: example flow graph with nodes 1, 2, 3, 4, 5, 6, 7a, 7b, and 8; the independent paths on the following slide are traced through this graph.)
Basis Path Testing (contd.)
• Cyclomatic Complexity
  – Quantitative measure of the complexity of a program
  – The number of independent paths in the basis set of a program
  – An upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once
Basis Path Testing (contd.)
• Cyclomatic complexity calculation:
  V(G) = E - N + 2 = P + 1 = number of regions in the graph
  where E = number of edges, N = number of nodes, and P = number of predicate nodes
• For the previous example:
  – Independent paths:
    path 1: 1 - 8
    path 2: 1 - 2 - 3 - 7b - 1 - 8
    path 3: 1 - 2 - 4 - 6 - 7a - 7b - 1 - 8
    path 4: 1 - 2 - 4 - 5 - 7a - 7b - 1 - 8
  – Cyclomatic complexity = 11 - 9 + 2 = 3 + 1 = 4
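The arithmetic above is easy to mechanize. Below is a minimal Python sketch, with the edge list reconstructed from the independent paths listed on this slide, confirming both forms of the calculation:

```python
from collections import Counter

# Flow graph edges, reconstructed from the independent paths above.
edges = [
    ("1", "2"), ("1", "8"),      # node 1 is a predicate node (two out-edges)
    ("2", "3"), ("2", "4"),      # node 2 is a predicate node
    ("3", "7b"),
    ("4", "5"), ("4", "6"),      # node 4 is a predicate node
    ("5", "7a"), ("6", "7a"),
    ("7a", "7b"),
    ("7b", "1"),                 # loop edge back to node 1
]

nodes = {n for edge in edges for n in edge}
out_degree = Counter(src for src, _ in edges)
predicates = [n for n, d in out_degree.items() if d > 1]

E, N, P = len(edges), len(nodes), len(predicates)
print(f"V(G) = E - N + 2 = {E} - {N} + 2 = {E - N + 2}")   # 11 - 9 + 2 = 4
print(f"V(G) = P + 1     = {P} + 1 = {P + 1}")             # 3 + 1 = 4
```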
Basis Path Testing (contd.)
• Prepare test cases that will force execution of each independent path in the basis path set
• Each test case is executed and compared to expected results
Example
(Figure slides: a worked basis path example.)
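In place of the original example figures, here is a comparable hypothetical one: a small function with cyclomatic complexity V(G) = 3, and the three basis path test cases that together execute every statement at least once.

```python
def classify(values):
    """Return how many of the values are negative (hypothetical example)."""
    count = 0
    for v in values:            # predicate 1: loop condition
        if v < 0:               # predicate 2: branch
            count += 1
    return count

# V(G) = P + 1 = 2 + 1 = 3, so three basis paths cover all statements:
#   path 1: skip the loop entirely        -> classify([])   == 0
#   path 2: loop once, branch not taken   -> classify([5])  == 0
#   path 3: loop once, branch taken       -> classify([-5]) == 1
assert classify([]) == 0
assert classify([5]) == 0
assert classify([-5]) == 1
```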
Condition Testing
• Exercises all the logical conditions in a module
• Types of possible errors:
  – Boolean variable error
  – Boolean parenthesis error
  – Boolean operator error
  – Arithmetic expression error
Types of Condition Testing
• Branch Testing
  – the TRUE and FALSE branches of the condition, and every simple condition in it, are tested
• Domain Testing
  – for every Boolean expression of n variables, all 2^n possible tests are required
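The 2^n requirement of domain testing can be met mechanically by enumerating every truth assignment. A minimal sketch, using a hypothetical three-variable condition:

```python
from itertools import product

def condition(a, b, c):
    # Hypothetical Boolean expression under test (n = 3 variables).
    return (a or b) and not c

# Domain testing: all 2**n = 8 combinations of the Boolean inputs.
for a, b, c in product([False, True], repeat=3):
    print(a, b, c, "->", condition(a, b, c))
```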
Data Flow Testing
• Assume functions do not modify their arguments or global variables. Then define:
  – DEF(S) = { X | statement S contains a definition of X }
  – USE(S) = { X | statement S contains a use of X }
  – Definition-use chain (DU chain)
    • [X, S, S'], where X ∈ DEF(S), X ∈ USE(S'), and the definition of X in S is live at S'
• Every DU chain is to be covered at least once
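To make the definitions concrete, here is a small hypothetical function annotated with its DEF and USE sets and a few of the resulting DU chains (the statement labels S1 through S4 are invented for illustration):

```python
# Hypothetical fragment with statements S1..S4, annotated with DEF/USE sets.
def scale(data, factor):
    total = 0                 # S1: DEF(S1) = {total}
    for x in data:            # S2: DEF(S2) = {x},     USE(S2) = {data}
        total = total + x     # S3: DEF(S3) = {total}, USE(S3) = {total, x}
    return total * factor     # S4: USE(S4) = {total, factor}

# Some DU chains that data flow testing would require covering:
#   [total, S1, S3]  the definition of total in S1 reaches its use in S3
#   [total, S3, S4]  the definition in S3 reaches the use in S4
#   [x, S2, S3]      the loop variable defined in S2 is used in S3
```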
Kinds of Loops
(Figure: the four kinds of loops discussed below: simple, nested, concatenated, and unstructured.)
Loop Testing
• Focus is on the validity of loop constructs
• Simple loops (n is the maximum number of allowable passes)
  – Skip the loop entirely
  – Only one pass through the loop
  – Two passes
  – m passes, where m < n
  – n-1, n, n+1 passes
• Nested loops
  – Start at the innermost loop
  – Conduct simple loop tests for this loop
  – Move outwards one loop at a time
Loop Testing (contd.)
• Concatenated loops
  – Multiple simple loop tests if independent
  – Nested loop approach if dependent
• Unstructured loops
  – Should be restructured into a combination of simple and nested loops
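The simple-loop schedule above is mechanical enough to generate. A minimal sketch, where n is the maximum number of allowable passes and m is any interior value with m < n:

```python
def simple_loop_test_counts(n, m=None):
    """Pass counts for simple-loop testing: 0, 1, 2, m (< n), n-1, n, n+1."""
    if m is None:
        m = n // 2            # an arbitrary interior value with m < n
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_test_counts(10))   # [0, 1, 2, 5, 9, 10, 11]
```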
Black Box Testing
• Focus is on the functional requirements of the software
• Uncovers errors such as
  – Incorrect or missing functions
  – Interface errors
  – Errors in data structures
  – Performance errors
  – Initialization and termination errors
• Unlike white box testing, this is performed at later stages of testing
Black Box Testing (contd.)
• Black box testing seeks answers to questions such as:
  – How is functional validity tested?
  – How are system behavior and performance tested?
  – What classes of input will make good test cases?
  – Is the system particularly sensitive to certain input values?
  – How are the boundaries of a data class isolated?
  – What data rates and data volume can the system tolerate?
  – What effect will specific combinations of data have on system operation?
Graph Based Testing
• Identify all objects modeled by the software
• Identify the relationships that connect these objects
• Create an object-relationship graph consisting of
  – nodes
  – node weights
  – links
  – link weights
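One lightweight way to hold such a model is plain data structures from which one test obligation per link can be derived; a minimal sketch, with object and relationship names echoing the example graph on the next slide:

```python
# Object-relationship graph: links carry a relationship name and an
# optional link weight (here, a timing requirement to verify during the test).
links = [
    # (source node, relationship, target node, link weight)
    ("new file menu select", "generates", "document window",
     {"generation time": "< 1.0 sec"}),
    ("document window", "allows editing of", "document text", {}),
    ("document window", "contains", "document text", {}),
]

# Derive one test per link: exercise the relationship, check any link weight.
for src, rel, dst, weight in links:
    print(f"test: {src} --[{rel}]--> {dst} {weight or ''}")
```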
Graph Based Testing (contd.)
• Example graph
(Figure: an object-relationship graph. A "new file" menu select generates a document window, with a link weight of "generation < 1 sec"; the document window allows editing of, contains, and is represented as the document text; attributes include start dimension, background color, and text color.)
Equivalence Partitioning
• A black box testing method that divides the input domain of a program into classes of data from which test cases are derived
• Goal is to design a single test case that uncovers classes of errors, thereby reducing the total number of test cases to be developed
• Each class represents a set of valid or invalid states for input conditions
Equivalence Partitioning (contd.)
• Test case design is based on an evaluation of equivalence classes for an input condition. If the input condition
  – specifies a range: one valid and two invalid equivalence classes
  – requires a specific value: one valid and two invalid equivalence classes
  – specifies a member of a set: one valid and one invalid equivalence class
  – is boolean: one valid and one invalid equivalence class
Equivalence Partitioning (contd.)
• Example: automatic banking
  – area code: input condition, boolean; input condition, range [200, 999]
  – prefix: input condition, range (> 200, no 0's, < 1000)
  – suffix: input condition, value (4 digits)
  – password: input condition, boolean; input condition, value (6-character string)
  – command: input condition, set
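The range condition for the area code translates directly into one test per equivalence class; a minimal sketch (the validation function is hypothetical):

```python
def valid_area_code(code):
    """Hypothetical check: area code is valid when in the range [200, 999]."""
    return 200 <= code <= 999

# Equivalence classes for a range: one valid, two invalid.
cases = [
    (550, True),    # valid class: inside [200, 999]
    (150, False),   # invalid class: below the range
    (1200, False),  # invalid class: above the range
]
for value, expected in cases:
    assert valid_area_code(value) is expected
```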
Boundary Value Analysis
• A greater number of errors tends to occur at the boundaries of the input domain
• Select test cases that exercise bounding values
• If the input condition
  – specifies a range: test cases are just below the minimum and just above the maximum, in addition to the boundary values themselves
  – specifies a set: test cases are the minimum and maximum values, if ordered
• The above guidelines are also applied to output conditions
  – example: outputs that produce minimum and maximum values in the output range
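For a range like the area code's [200, 999], the boundary values can be generated mechanically; a minimal sketch that yields the bounds plus the values just outside them:

```python
def boundary_values(lo, hi):
    """BVA inputs for a range: just below, at, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# For the area-code range [200, 999]:
print(boundary_values(200, 999))   # [199, 200, 201, 998, 999, 1000]
```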
Comparison Testing
• Multiple copies of the software are constructed in the case of critical applications
  – Example: shuttle flight control software
• Each version is independently built from the specs by different teams
• Test cases derived from other black box testing techniques are provided as input to each version
• Outputs are compared and the versions validated
• Not foolproof: if all versions share the same specification error, they may all produce the same wrong output
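The comparison step itself is simple to sketch: feed the same inputs to every independently built version and flag any disagreement (the two version functions here are hypothetical stand-ins):

```python
def version_a(x):
    return x * x        # hypothetical implementation by team A

def version_b(x):
    return x ** 2       # hypothetical implementation by team B

versions = [version_a, version_b]
test_inputs = [0, 1, -3, 7]     # e.g. derived from equivalence partitioning

for x in test_inputs:
    outputs = [v(x) for v in versions]
    if len(set(outputs)) > 1:   # versions disagree: at least one is wrong
        print(f"disagreement on input {x}: {outputs}")
```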
Other Cases
• GUI testing
  – See text for a partial list of things to test
• Client/server
  – Often distributed, which complicates testing
  – Additional emphasis on non-interference among clients
• Documentation
  – Use black box techniques
• Real-time
  – Beyond the scope of this course
Summary
• Destructive rather than constructive
• Main goal is to find errors, not to prove the absence of errors
• White Box Testing
  – Control structure testing
  – Condition testing
  – Data flow testing
  – Loop testing
• Black Box Testing (functional requirements)
  – Graph based testing
  – Equivalence partitioning
  – Boundary value testing
  – Comparison testing
CLCS Example

Software                     | Vendor                              | Version | Platform                              | Facility
IRIX (UNIX operating system) | Silicon Graphics Incorporated (SGI) | 6.2     | SGI Indigo 2, SGI Indy, SGI Challenge | SDE, LCC-X
IRIX (UNIX operating system) | Silicon Graphics Incorporated (SGI) | 6.3     | SGI O2                                | SDE, LCC-X
VxWorks (Gateway OS)         | VxWorks                             | 5.2     | SDS Gateway, CS Gateways              | SDE, LCC-X

Table 1.1: Juno Baselined COTS Software
CLCS Example

Step | Description | Expected Results
1.   | Turn on SDE1 network hardware and PCs | Blinky lights start blinking on the network devices; PCs execute power-on self tests and boot the OS
2.   | Turn on the sde1net workstation, wait for it to finish booting (login screen will be displayed), then turn on the sde1boot workstation and wait for it to finish booting. | POST (Power On Self Test) tests occur, operating system start procedures initiate, login screens appear.
3.   | Turn on all remaining HCI workstations and the sde1ddp1 machine. | POST (Power On Self Test) tests occur, operating system start procedures initiate, login screens appear.
CLCS Example

Step | Description | Expected Results
16.  | Initiate data acquisition at the sde1hci7 workstation. In the Dnav master menu, select “Global Apps”, then select “Start receive process”, then select “GW to HCI JUNO_DDP_8” | The System messages window indicates that the Start receive process is executing; no unexpected errors are displayed in the console window.
17.  | Start data display. In the Dnav master menu, select “Shuttle”, then select any of the following: Wind Speed, Wind Direction PAD A, Wind Direction PAD B, Temperature | The command is accepted (as shown in the System messages and console windows); the appropriate display(s) are started and are regularly updated.
18.  | Stop data display at the workstation. Select quit from the display menu(s) | Display windows are closed.
Test Results

Number  | Title                                                    | Opened During      | Criticality | Date Opened | Current Status
Juno-11 | Telnet from sde1hci1 to sde1ddp-r failed                 | System Test        | Major       | 4/14/97     | Open
Juno-12 | Remote delog written into wrong directory                | System Integration | Major       | 4/15/97     | Open
Juno-13 | Application displays CPU intensive                       | System Integration | Minor       | 4/15/97     | Open
Juno-14 | Telnet to SDS Gateway not working                        | System Test        | Minor       | 4/22/97     | Open
Juno-15 | Error received when attempting to start receive process  | System Test        | Major       | 4/22/97     | Open
Lessons Learned
• The configuration management process was not complete; multiple baselines of some software components existed.
• A CLCS software sustaining process needs to be defined and implemented as soon as possible.
• The requirements buy-off process needs to be refined.
• Hardware identification could be improved.
