
Software Testing Techniques

Testing Objective
• Primary Objective
– Testing is a process of executing a program with the intent of finding
an error.
– A good test case is one that has a high probability of finding an as-
yet undiscovered error.
– A successful test is one that uncovers an as-yet-undiscovered error.
• Objective is to design tests that systematically uncover different classes
of errors and to do so with a minimum amount of time and effort.
• Testing demonstrates that software functions appear to be working
according to specification, and that behavioral and performance
requirements appear to have been met.
• Data collected as testing is conducted provide a good indication of
software reliability and some indication of software quality as a whole.
• But testing cannot show the absence of errors and defects; it can show
only that software errors and defects are present.
Testing Principles
• All tests should be traceable to customer
requirements
• Tests should be planned long before testing
begins
• The *Pareto principle applies to software testing.
• Testing should begin “in the small” and progress
toward testing “in the large.”
• Exhaustive testing is not possible
• To be most effective, testing should be
conducted by an independent third party

*for many events, roughly 80% of the effects come from 20% of the causes
Testability

• Software testability is simply how easily a
computer program can be tested.
• The software must exhibit a set of characteristics
that help achieve the goal of finding errors with a
minimum of effort.
Characteristics of s/w Testability
• Operability: The better it works, the more
efficiently it can be tested.
• Observability: What you see is what you test.
• Controllability: The better we can control the
software, the more the testing can be
automated and optimized.
• Decomposability: By controlling the scope of
testing, we can more quickly isolate problems
and perform smarter retesting.
• Simplicity: The less there is to test, the more
quickly we can test it.
• Stability: The fewer the changes, the fewer
the disruptions to testing.
• Understandability: The more information we
have, the smarter we will test.
Testing attributes
1. A good test has a high probability of finding an error.
– Tester must understand the software and attempt to develop a mental
picture of how the software might fail.
2. A good test is not redundant.
– Testing time and resources are limited.
– There is no point in conducting a test that has the same purpose as
another test.
– Every test should have a different purpose
– Example: separate tests for a valid and an invalid password.
3. A good test should be “best of breed.”
– In a group of tests that have a similar intent, time and resource
limitations may permit the execution of only a subset of these tests.
– In such cases, the test with the highest likelihood of uncovering a
whole class of errors should be used.
4. A good test should be neither too simple nor too complex.
– Although it is sometimes possible to combine a series of tests into one
test case, the possible side effects of this approach may mask errors.
– Each test should be executed separately.
Test Case Design
• To meet the objectives of testing, we must design tests that have the
highest likelihood of finding the most errors with a minimum amount of
time and effort.
• Test case design methods provide a mechanism that can help to ensure
the completeness of tests and provide the highest likelihood for
uncovering errors in software.
• Any engineered product (and most other things) can be tested in one of
two ways:
– Tests can be conducted that demonstrate each function is fully
operational while at the same time searching for errors in each
function;
– tests can be conducted to ensure that all internal operations are
performed according to specifications and all internal components
have been adequately exercised.
• The first test approach is called black-box testing and the second,
white-box testing.
Test cases and Test suites
• A test case is a triplet [I, S, O] (sketched as a record below), where
– I is the input data,
– S is the state of the system at which the data will be input, and
– O is the expected output.
• A test suite is the set of all test cases.
• Test cases are not selected at random; they need to be
systematically designed.
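As a rough illustration (the names and types below are not from the slides), the [I, S, O] triplet can be written down as a simple C record:

/* A minimal sketch of the [I, S, O] triplet as a C record.
   Field names and types are illustrative only. */
struct test_case {
    int input;            /* I: input data fed to the system                  */
    int initial_state;    /* S: state of the system when the input is applied */
    int expected_output;  /* O: output predicted by the specification         */
};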
Need for designing test cases
• Almost every non-trivial system has an
extremely large input data domain, thereby
making exhaustive testing impractical.
• If selected at random, a test case may lose
significance, since it may only expose an error
already detected by some other test case.
Design of test cases
• The number of test cases does not determine
testing effectiveness.
• To detect an error in the following code:
if (x > y) max = x; else max = y;
• {(x=3, y=2); (x=2, y=3)} will suffice, since it exercises both branches.
• {(x=3, y=2); (x=4, y=3); (x=5, y=1)} will falter, since it never
exercises the else branch (see the sketch below).
• Each test case should detect different errors.
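A minimal sketch of why the first suite suffices and the second falters, assuming a hypothetical fault in which the else branch wrongly assigns x:

/* Hypothetical faulty version of the statement above: the else branch
   wrongly assigns x instead of y. */
int max_buggy(int x, int y) {
    int max;
    if (x > y) max = x; else max = x;   /* bug: should be max = y */
    return max;
}

/* {(3,2); (2,3)} detects the fault: max_buggy(2, 3) returns 2 instead of 3.
   {(3,2); (4,3); (5,1)} always takes the if branch, so the fault escapes. */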
Test Data and Test Cases
• Test data: Inputs which have been devised to
test the system.
• Test cases: Inputs to test the system and the
predicted outputs from these inputs if the
system operates according to its specification.
Test-to-pass and test-to-fail
• Test-to-pass:
– assures that the software minimally works,
– does not push the capabilities of the software,
– applies simple and straightforward test cases,
– does not try to “break” the program.
• Test-to-fail:
– designing and running test cases with the sole purpose of
breaking the software,
– strategically choosing test cases to probe for common
weaknesses in the software (see the sketch below).
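A minimal sketch of the two mindsets, assuming a hypothetical validate_age() function specified to accept ages from 1 to 150:

#include <stdio.h>

/* Hypothetical function under test (assumption): accepts ages 1..150. */
static int validate_age(int age) { return age >= 1 && age <= 150; }

int main(void) {
    /* test-to-pass: simple, well-behaved inputs the software should handle */
    printf("accept 30: %d, accept 65: %d\n", validate_age(30), validate_age(65));

    /* test-to-fail: inputs strategically chosen to break the program */
    printf("accept 0: %d, accept -1: %d, accept 151: %d\n",
           validate_age(0), validate_age(-1), validate_age(151));
    return 0;
}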
Black box testing
• Also called behavioral testing, focuses on the functional
requirements of the software.
• It enables the software engineer to derive sets of input
conditions that will fully exercise all functional requirements
for a program.
• Black-box testing is not an alternative to white-box techniques;
rather, it is a complementary approach.
• Black-box testing attempts to find errors in the following
categories:
– Incorrect or missing functions,
– Interface errors,
– Errors in data structures or external data base access.
– Behavior or performance errors,
– Initialization and termination errors.
Black-box testing
• Characteristics of Black-box testing:
– Program is treated as a black box.
– Implementation details do not matter.
– Requires an end-user perspective.
– Criteria are not precise.
– Test planning can begin early.
Black-box testing
[Figure: black-box testing model — input test data Ie (inputs causing
anomalous behaviour) is applied to the system, and the output test
results Oe are the outputs which reveal the presence of defects.]
Equivalence Partitioning
• Equivalence partitioning is a black-box testing method that
divides the input domain of a program into classes of data
from which test cases can be derived.
• Test case design for equivalence partitioning is based on an
evaluation of equivalence classes for an input condition.
• An equivalence class represents a set of valid or invalid states
for input conditions.
• Typically, an input condition is either a specific numeric value,
a range of values, a set of related values, or a Boolean
condition.
Equivalence Partitioning

• Equivalence partitioning is the process of methodically reducing the
huge (or infinite) set of possible test cases into a small, but equally
effective, set of test cases.

[Figure: valid and invalid input partitions feeding the system, which
produces the corresponding outputs.]
To define equivalence classes, follow these guidelines
1. If an input condition specifies a range, one valid and
two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one
valid and two invalid equivalence classes are
defined.
3. If an input condition specifies a member of a set,
one valid and one invalid equivalence class are
defined.
4. If an input condition is Boolean, one valid and one
invalid class are defined.
Formation of EC
– One valid EC and two invalid ECs.
– Input condition: 0 < n < max.
– Valid EC: 0 < n < max.
– Invalid EC1: all n ≤ 0.
– Invalid EC2: all n ≥ max (see the sketch below).
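A minimal sketch of these three classes, assuming a hypothetical accept_n() function and max = 100:

#include <assert.h>

#define MAX 100   /* assumed value of max for illustration */

/* Hypothetical function under test: accepts n only when 0 < n < MAX. */
static int accept_n(int n) { return n > 0 && n < MAX; }

int main(void) {
    assert(accept_n(50)  == 1);   /* valid EC: one representative of 0 < n < max */
    assert(accept_n(-5)  == 0);   /* invalid EC1: n <= 0                          */
    assert(accept_n(200) == 0);   /* invalid EC2: n >= max                        */
    return 0;
}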
Boundary Value Analysis (BVA)
• Boundary value analysis is a test case design
technique that complements equivalence
partitioning.
• Rather than selecting any element of an equivalence
class, BVA leads to the selection of test cases at the
"edges" of the class.
• In addition, rather than focusing solely on input
conditions, BVA derives test cases from the output
domain as well.
Guidelines for BVA
1. If an input condition specifies a range bounded by values a
and b, test cases should be designed with values a and b
and values just above and just below a and b (see the sketch
after this list).
2. If an input condition specifies a number of values, test
cases should be developed that exercise the minimum and
maximum numbers. Values just above and below the
minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed
boundaries, be certain to design a test case to exercise the
data structure at its boundary.
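A minimal sketch of guideline 1, assuming a range bounded by a = 1 and b = 100:

#include <stdio.h>

int main(void) {
    int a = 1, b = 100;                       /* assumed range bounds          */
    int bva_inputs[] = { a - 1, a, a + 1,     /* just below, at, just above a  */
                         b - 1, b, b + 1 };   /* just below, at, just above b  */
    for (int i = 0; i < 6; i++)
        printf("boundary test input: %d\n", bva_inputs[i]);
    return 0;
}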
Boundary Value Analysis

• Some typical programming errors occur:
– at the boundaries of equivalence classes,
– and might be purely due to psychological factors.
• Programmers often fail to see:
– the special processing required at the boundaries of
equivalence classes.
Boundary Value Analysis
• Programmers may improperly use < instead of <=
(as in the sketch below).
• Boundary value analysis:
– selects test cases at the boundaries of the different
equivalence classes.
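A minimal sketch of this kind of fault, assuming a specification that accepts values up to and including a limit of 100:

#include <stdio.h>

#define LIMIT 100   /* assumed boundary from the specification */

static int in_range_buggy(int n)   { return n < LIMIT;  }   /* fault: rejects n == LIMIT */
static int in_range_correct(int n) { return n <= LIMIT; }

int main(void) {
    /* The boundary value n == LIMIT exposes the fault; an interior value
       such as n == 50 would not. */
    printf("n=100: buggy=%d correct=%d\n",
           in_range_buggy(100), in_range_correct(100));
    return 0;
}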
