Unit 3 STQA notes

White box testing is a software testing technique that examines the internal structure and logic of an application using knowledge of its source code. It focuses on path checking, output validation, security testing, and loop testing, and includes types such as unit, integration, and regression testing. Techniques like statement coverage, branch coverage, and cyclomatic complexity are employed to ensure thorough testing of the code's functionality and efficiency.


White box Testing

White box testing is a software testing technique that involves testing the internal structure
and workings of a software application. The tester has access to the source code and uses
this knowledge to design test cases that can verify the correctness of the software at the
code level.

White box testing is also known as structural testing or code-based testing, and it is used to
test the software’s internal logic, flow, and structure. The tester creates test cases to
examine the code paths and logic flows to ensure they meet the specified requirements.

What Does White Box Testing Focus On?


White box testing uses detailed knowledge of a software’s inner workings to create very
specific test cases.
 Path Checking: Examines the different routes the program can take when it runs.
Ensures that all decisions made by the program are correct, necessary, and efficient.
 Output Validation: Tests different inputs to see if the function gives the right output
each time.
 Security Testing: Uses techniques like static code analysis to find and fix potential
security issues in the software. Ensures the software is developed using secure
practices.
 Loop Testing: Checks the loops in the program to make sure they work correctly and
efficiently. Ensures that loops handle variables properly within their scope.
 Data Flow Testing: Follows the path of variables through the program to ensure they are
declared, initialized, used, and manipulated correctly.
Types Of White Box Testing
White box testing can be done for different purposes. The three main types are:
1. Unit Testing
2. Integration Testing
3. Regression Testing

Unit Testing
 Checks if each part or function of the application works correctly.
 Ensures the application meets design requirements during development.
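As a minimal illustration of white box unit testing, here is a small sketch using a hypothetical function and Python's standard unittest module (an assumed choice, not taken from the notes):

import unittest

def absolute_value(x):
    """Return x if x is non-negative, otherwise -x."""
    if x < 0:
        return -x
    return x

class TestAbsoluteValue(unittest.TestCase):
    def test_negative_input(self):
        self.assertEqual(absolute_value(-5), 5)   # exercises the if-branch

    def test_non_negative_input(self):
        self.assertEqual(absolute_value(3), 3)    # exercises the fall-through path

if __name__ == "__main__":
    unittest.main()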
Integration Testing
 Examines how different parts of the application work together.
 Done after unit testing to make sure components work well both alone and together.
Regression Testing
 Verifies that changes or updates don’t break existing functionality.
 Ensures the application still passes all existing tests after updates.
White Box Testing Techniques
One of the main benefits of white box testing is that it allows for testing every part of an
application. To achieve complete code coverage, white box testing uses the following
techniques:
1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence, each line of code
is tested. In the case of a flowchart, every node must be traversed at least once. Since all
lines of code are covered, it helps in pointing out faulty code.

Statement Coverage Example
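Since the original example figure is not reproduced here, a small hand-written sketch (hypothetical code, not from the notes) illustrates the idea: with the single input below, every statement of the function executes at least once.

def grade(score):
    result = "fail"          # statement 1
    if score >= 40:          # statement 2
        result = "pass"      # statement 3
    return result            # statement 4

# A single test with score >= 40 executes statements 1, 2, 3 and 4,
# giving 100% statement coverage (but it never exercises the False branch).
assert grade(75) == "pass"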

2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision points is
traversed at least once. In a flowchart, all edges must be traversed at least once.
In the example flowchart (figure omitted), 4 test cases are required such that all branches of all decisions are covered, i.e., all edges of the flowchart are covered.
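A minimal sketch (hypothetical function, not from the notes): branch coverage requires test cases that take every outcome of each decision at least once.

def can_vote(age):
    if age >= 18:        # decision point with two branches
        return True
    return False

# Branch coverage requires both outcomes of the decision to be exercised:
assert can_vote(21) is True    # True branch
assert can_vote(10) is False   # False branch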

3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following
example:
 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of conditions are
tested at least once. Let’s consider the following example:
 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
5. Basis Path Testing
In this technique, a control flow graph is made from the code or flowchart, and then the
cyclomatic complexity is calculated. The cyclomatic complexity defines the number of
independent paths, so that a minimal number of test cases can be designed, one for each
independent path. Steps:
 Make the corresponding control flow graph
 Calculate the cyclomatic complexity
 Find the independent paths
 Design test cases corresponding to each independent path
 V(G) = P + 1, where P is the number of predicate nodes in the flow graph
 V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
 V(G) = Number of non-overlapping regions in the graph
For the example control flow graph (figure omitted), the independent paths are:
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
6. Loop Testing
Loops are widely used and these are fundamental to many algorithms hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
 Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1, n, and n+1 passes
 Nested loops: For nested loops, all the loops are set to their minimum count, and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
 Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
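A minimal sketch (hypothetical function, with an assumed loop bound n = 5, not from the notes) of the simple-loop test cases listed above:

def sum_first(values, n):
    """Sum at most the first n items of values using a simple loop."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

N = 5  # assumed maximum number of loop passes for this sketch
data = [1, 2, 3, 4, 5]

assert sum_first([], N) == 0              # skip the loop entirely (0 passes)
assert sum_first(data[:1], N) == 1        # exactly one pass
assert sum_first(data[:2], N) == 3        # two passes
assert sum_first(data[:3], N) == 6        # m passes, where m < n
assert sum_first(data[:4], N) == 10       # n-1 passes
assert sum_first(data, N) == 15           # n passes
assert sum_first(data + [6], N) == 15     # attempt n+1 passes (the loop is capped at n)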
Cyclomatic Complexity
The cyclomatic complexity of a code section is the quantitative measure of the number of
linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the control flow graph of the program.
Formula for Calculating Cyclomatic Complexity
Mathematically, for a structured program, the control flow graph is a directed graph in
which each edge joins two basic blocks of the program, indicating that control may pass
from the first block to the second.
So, cyclomatic complexity M would be defined as,
M = E – N + 2P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case where the exit point is connected directly back to the entry point, the graph is
strongly connected, and the cyclomatic complexity is defined as
M = E – N + P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case of a single method, P is equal to 1. So, for a single subroutine, the formula can
be defined as
M = E – N + 2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and test cases design
are:
 Construction of graph with nodes and edges from code
 Identification of independent paths.
 Cyclomatic Complexity Calculation
 Design of Test Cases
Consider a section of code such as:
A = 10
IF B > C THEN
    A = B
ELSE
    A = C
ENDIF
Print A
Print B
Print C
Control Flow Graph of the above code:
The cyclomatic complexity for the above code is calculated from its control flow graph.
The graph has seven shapes (nodes) and seven lines (edges), hence the cyclomatic complexity
is 7 – 7 + 2 = 2.
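A small sketch (not from the notes) verifying this calculation with the formulas given earlier:

# Control flow graph of the example: 7 nodes, 7 edges, 1 connected component.
E, N, P = 7, 7, 1
print(E - N + 2 * P)      # M = E - N + 2P = 2

# Cross-check with the predicate-node formula: the code has one decision (IF B > C),
# so V(G) = number of predicate nodes + 1 = 1 + 1 = 2.
predicates = 1
print(predicates + 1)     # 2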

Use of Cyclomatic Complexity


 It determines the independent path executions, which is very helpful for developers and
testers.
 It can make sure that every path has been tested at least once.
 It helps to focus more on the uncovered paths.
 Code coverage can be improved.
 Risks associated with the program can be evaluated.
 Using these metrics early in the program helps in reducing the risks.
Advantages of Cyclomatic Complexity
 It can be used as a quality metric, given the relative complexity of various designs.
 It is able to compute faster than Halstead’s metrics.
 It is used to measure the minimum effort and best areas of concentration for testing.
 It is able to guide the testing process.
 It is easy to apply.
Disadvantages of Cyclomatic Complexity
 It is the measure of the program’s control complexity and not the data complexity.
 Nested conditional structures are harder to understand than non-nested structures, and
cyclomatic complexity does not distinguish between them.
 In the case of simple comparisons and decision structures, it may give a misleading
figure.
Important Questions on Cyclomatic Complexity
1. Consider the following program module.
int module1 (int x, int y) {
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
What is the Cyclomatic Complexity of the above module? [GATE CS 2004]
(A) 1
(B) 2
(C) 3
(D) 4
Answer: The correct answer is (C). The module has two predicate nodes (the while condition and the if condition), so V(G) = P + 1 = 2 + 1 = 3.
Control Structure Testing
Control structure testing is used to increase the coverage area by testing various control
structures present in the program. The different types of testing performed under control
structure testing are as follows-
1. Condition Testing
2. Data Flow Testing
3. Loop Testing
1. Condition Testing : Condition testing is a test case design method which ensures that
the logical conditions and decision statements are free from errors. The errors present in
logical conditions can be incorrect boolean operators, missing parentheses in a boolean
expression, errors in relational operators, arithmetic expressions, and so on. The common
types of logical conditions that are tested using condition testing are:
1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic expressions and
‘op’ is a relational operator.
2. A simple condition, like any relational expression preceded by a NOT (~) operator. For
example, (~E1), where ‘E1’ is an arithmetic expression and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions, boolean operators,
and parentheses. For example, (E1 & E2) | (E2 & E3), where E1, E2, E3 denote arithmetic
expressions and ‘&’ and ‘|’ denote the AND and OR operators.
4. A boolean expression consists of operands and a boolean operator like AND, OR,
NOT. For example, ‘A | B’ is a boolean expression where ‘A’ and ‘B’ denote operands
and ‘|’ denotes the OR operator.
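A minimal sketch (hypothetical function, not from the notes) of how condition testing can expose an incorrect boolean operator:

def both_positive(x, y):
    # Bug: the specification requires (x > 0 AND y > 0), but OR was written instead.
    return x > 0 or y > 0

# Exercise each simple condition as both true and false and compare with the expected result:
cases = [(1, 1, True), (1, -1, False), (-1, 1, False), (-1, -1, False)]
for x, y, expected in cases:
    actual = both_positive(x, y)
    print(f"x={x}, y={y}: expected {expected}, got {actual}")  # mismatches reveal the faulty operator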
2. Data Flow Testing : The data flow testing method chooses the test paths of a program based
on the locations of the definitions and uses of the variables in the program. To illustrate the
data flow testing approach, assume that each statement in a program is assigned a unique
statement number and that a function cannot modify its parameters or global variables. For a
statement numbered S,
DEF (S) = {X | Statement S has a definition of X}
USE (S) = {X | Statement S has a use of X}
If statement S is an if or loop statement, its DEF set is empty and its USE set depends on the
condition of statement S. The definition of a variable X at statement S is said to be live at
statement S’ if there exists a path from statement S to statement S’ that contains no other
definition of X. A definition-use (DU) chain of variable X has the form [X, S, S’], where S
and S’ denote statement numbers, X is in DEF(S) and USE(S’), and the definition of X in
statement S is live at statement S’. A simple data flow testing approach requires that each DU
chain be covered at least once; this approach is known as the DU testing approach. DU
testing does not ensure coverage of all branches of a program. However, a branch is not
guaranteed to be covered by DU testing only in rare situations, such as an if-then-else
construct in which the then part contains no definition of any variable and the else part is
absent. Data flow testing strategies are appropriate for choosing test paths of a program
containing nested if and loop statements.
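A minimal sketch (hypothetical code and statement numbering, not from the notes) of DEF and USE sets and the resulting DU chains:

def process(x):      # S1: DEF(S1) = {x},  USE(S1) = {}
    y = x * 2        # S2: DEF(S2) = {y},  USE(S2) = {x}
    if y > 10:       # S3: DEF(S3) = {},   USE(S3) = {y}
        y = y - 10   # S4: DEF(S4) = {y},  USE(S4) = {y}
    return y         # S5: DEF(S5) = {},   USE(S5) = {y}

# DU chains for y: [y, S2, S3], [y, S2, S4], [y, S2, S5] and [y, S4, S5].
# DU testing requires each chain to be covered, e.g. process(4) covers [y, S2, S5]
# (false branch), while process(10) covers [y, S2, S4] and [y, S4, S5] (true branch).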
3. Loop Testing : Loop testing is a white box testing technique that focuses specifically on the
validity of loop constructs. The following are the types of loops.
1. Simple Loop – The following set of tests can be applied to simple loops, where n is the
maximum allowable number of passes through the loop.
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, n+1 times.
2. Concatenated Loops – If the loops are not dependent on each other, concatenated loops
can be tested using the approach used for simple loops. If the loops are interdependent,
the steps for nested loops are followed.

3. Nested Loops – Loops within loops are called nested loops. When testing nested
loops, the number of tests increases as the level of nesting increases. The steps for
testing nested loops are as follows-
1. Start with the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop testing on the innermost loop.
3. Work outwards.
4. Continue until all loops have been tested.

4. Unstructured loops – This type of loop should be redesigned, whenever possible, to
reflect the use of structured programming constructs.
Graph Matrices in Software Testing
A graph matrix is a data structure that can assist in developing a tool for automation
of path testing. Properties of graph matrices are fundamental for developing a test tool and
hence graph matrices are very useful in understanding software testing concepts and theory.
What is a Graph Matrix ?
A graph matrix is a square matrix whose size (number of rows and columns) equals the
number of nodes in the control flow graph. Each row and column in the matrix identifies a
node, and the entries in the matrix represent the edges or links between these nodes.
Conventionally, nodes are denoted by digits and edges are denoted by letters.
Let’s take an example: a control flow graph with 4 nodes and the edges a (1 → 1), b (1 → 2),
c (1 → 3), d (2 → 4), and e (3 → 4) (figure omitted).

Let’s convert this control flow graph into a graph matrix. Since the graph has 4 nodes,
the graph matrix has a dimension of 4 x 4. The matrix entries will be filled as follows :
 (1, 1) will be filled with ‘a’ as an edge exists from node 1 to node 1
 (1, 2) will be filled with ‘b’ as an edge exists from node 1 to node 2. It is important to
note that (2, 1) will not be filled as the edge is unidirectional and not bidirectional
 (1, 3) will be filled with ‘c’ as edge c exists from node 1 to node 3
 (2, 4) will be filled with ‘d’ as an edge exists from node 2 to node 4
 (3, 4) will be filled with ‘e’ as an edge exists from node 3 to node 4
The graph matrix formed is shown below :
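Since the matrix image is not reproduced here, the following is a reconstruction (as a simple nested list) of the 4 x 4 graph matrix implied by the entries above:

# Rows = source node, columns = destination node; an empty string means no edge.
graph_matrix = [
    # to:   1    2    3    4
         ['a', 'b', 'c', '' ],   # from node 1
         ['',  '',  '',  'd'],   # from node 2
         ['',  '',  '',  'e'],   # from node 3
         ['',  '',  '',  '' ],   # from node 4
]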
Connection Matrix :
A connection matrix is a matrix defined with edge weights. In its simplest form, when a
connection exists between two nodes of the control flow graph, the edge weight is 1;
otherwise, it is 0. However, 0 is usually not entered in the matrix cells, to reduce the
complexity.
For example, if we represent the above control flow graph as a connection matrix, the
result would be :
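Again, since the image is not reproduced here, a reconstruction of the connection matrix for the same graph (edge weights replaced by 1; 0 is shown explicitly only because the code needs a value in every cell):

connection_matrix = [
    # to:  1  2  3  4
        [1, 1, 1, 0],   # from node 1 (edges a, b, c)
        [0, 0, 0, 1],   # from node 2 (edge d)
        [0, 0, 0, 1],   # from node 3 (edge e)
        [0, 0, 0, 0],   # from node 4 (no outgoing edges)
]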

As we can see, the weights of the edges are simply replaced by 1, and the cells which were
empty before are left as they are, i.e., representing 0.
A connection matrix is used to find the cyclomatic complexity of the control flow graph.
Although there are three other methods to find the cyclomatic complexity, this method
works well too.
Following are the steps to compute the cyclomatic complexity :
1. Count the number of 1s in each row and write it at the end of the row
2. Subtract 1 from this count for each row (ignore a row if its count is 0)
3. Add the count of each row calculated previously
4. Add 1 to this total count
5. The final sum in Step 4 is the cyclomatic complexity of the control flow graph
Let’s apply these steps to the graph above to compute the cyclomatic complexity.
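A short sketch (not from the notes) applying the five steps to the connection matrix above:

connection_matrix = [
    [1, 1, 1, 0],   # row 1: three 1s
    [0, 0, 0, 1],   # row 2: one 1
    [0, 0, 0, 1],   # row 3: one 1
    [0, 0, 0, 0],   # row 4: zero 1s (ignored)
]

row_counts = [sum(row) for row in connection_matrix]              # step 1: 3, 1, 1, 0
contributions = [count - 1 for count in row_counts if count > 0]  # step 2: 2, 0, 0
cyclomatic_complexity = sum(contributions) + 1                    # steps 3 and 4: 2 + 1
print(cyclomatic_complexity)                                      # step 5: prints 3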

We can verify this value for cyclomatic complexity using other methods :
Method-1 :
Cyclomatic complexity
= E – N + 2P
Here,
E = 5
N = 4
and P = 1
Therefore, cyclomatic complexity
= 5 – 4 + 2 × 1
= 3
Method-2 :
Cyclomatic complexity
= d + P
Here,
d = 2
and P = 1
Therefore, cyclomatic complexity
= 2 + 1
= 3
Method-3:
Cyclomatic complexity
= number of regions in the graph
 Region 1: bounded by edges b, c, d, and e
 Region 2: bounded by edge a (in loop)
 Region 3: outside the graph
Therefore, cyclomatic complexity,
=1+1+1
=3

Black Box Testing


Black Box Testing is an important part of making sure software works as it should. Instead
of peeking into the code, testers check how the software behaves from the outside, just like
users would. This helps catch any issues or bugs that might affect how the software works.
Types Of Black Box Testing
The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Before we go deeper into black box testing, note that there are many different types of
testing used in industry.

Functional Testing
 Functional testing is defined as a type of testing that verifies that each function of the
software application works in conformance with the requirement and specification.
 This testing is not concerned with the source code of the application. Each functionality
of the software application is tested by providing appropriate test input, expecting the
output, and comparing the actual output with the expected output.
 This testing focuses on checking the user interface, APIs, database, security, client or
server application, and functionality of the Application Under Test. Functional testing
can be manual or automated. It determines the system’s software functional
requirements.
Regression Testing
 Regression Testing is the process of testing the modified parts of the code and the parts
that might get affected due to the modifications to ensure that no new errors have been
introduced in the software after the modifications have been made.
 Regression means the return of something and in the software field, it refers to the
return of a bug. It ensures that the newly added code is compatible with the existing
code.
 In other words, a new software update has no impact on the functionality of the
software. This is carried out after a system maintenance operation and upgrades.
Nonfunctional Testing
 Non-functional testing is a software testing technique that checks the non-functional
attributes (aspects) of a software application.
 It is designed to test the readiness of a system as per nonfunctional parameters which
are never addressed by functional testing.
 Non-functional testing is as important as functional testing.
 Non-functional testing is also known as NFT. This testing is not functional testing of
software. It focuses on the software’s performance, usability, and scalability.
Advantages of Black Box Testing
 The tester does not need to have detailed technical knowledge or programming skills to
implement black box testing.
 It is efficient for implementing the tests in the larger system.
 Tests are executed from the user’s or client’s point of view.
 Test cases are easily reproducible.
 It is used to find the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing
 There is a possibility of repeating the same tests while implementing the testing
process.
 Without clear functional specifications, test cases are difficult to implement.
 It is difficult to execute the test cases because of complex inputs at different stages of
testing.
 Sometimes, the reason for the test failure cannot be detected.
 Some programs in the application are not tested.
 It does not reveal the errors in the control structure.
 Working with a large sample space of inputs can be exhaustive and consumes a lot of
time.
What is Boundary Value Analysis (BVA)?
BVA is used to check the behavior of an application using test data that lie at boundary
values. In simpler words, for a range of input data values, the boundary values (extreme
end values) are used as inputs for testing. It is a widely used design technique because
software is most likely to fail at the upper and lower limits of input data values.
Example: A software allows people of age 20 to 50 years (both 20 and 50 are inclusive) to
fill a form, for which the user has to enter his age in the age field option of the software.
The boundary values are 20 (min value) and 50 (max value).
 Invalid value (min-1): 19
 Valid values (min, min+1, nominal value, max-1, max): 20, 21, 30, 49, 50
 Invalid value (max+1): 51

In the above table, one can clearly identify all the valid and invalid test values (values
considered while testing the system).
1. Valid values: Test values at which the system does not fail and functions properly as per
the user requirement.
2. Invalid values: Test values that do not meet the system requirement.
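A minimal sketch (hypothetical validation function, not from the notes) of boundary value test cases for the age example above:

def is_valid_age(age):
    """Accept ages from 20 to 50 inclusive, as in the example form."""
    return 20 <= age <= 50

# Boundary value test data: min-1, min, min+1, nominal, max-1, max, max+1
boundary_cases = {19: False, 20: True, 21: True, 30: True, 49: True, 50: True, 51: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age} failed"
print("all boundary cases passed")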
What is Equivalence Partitioning (EP)?
It is also termed Equivalence Class Partitioning (ECP). It is a Black Box Testing technique,
where a range of input values are divided into equivalence data classes. In this, the tester
tests a random input value from the defined interval of equivalence data classes and if the
output for that input value is valid, then the whole class interval is considered valid and
vice-versa.
Example: An application allows the user to enter a password of 8-12 digits (minimum 8
and maximum 12 digits).
 Invalid Equivalence Class: < 8
 Valid Equivalence Class: 8-12
 Invalid Equivalence Class: > 12

Let’s consider some password values for the valid and invalid classes:
1. 1234 is of length 4 which is an invalid password as 4<8.
2. 567890234 is of length 9 which is a valid password as 9 lies between 8-12
3. 4536278654329 is of length 13 which is an invalid password as 13>12.
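A minimal sketch (hypothetical validation function, not from the notes) showing one representative test value per equivalence class for the password-length example:

def is_valid_password(password):
    """Accept passwords of length 8 to 12, as in the example."""
    return 8 <= len(password) <= 12

# One representative value chosen from each equivalence class:
representatives = {
    "1234": False,            # invalid class: length < 8
    "567890234": True,        # valid class: length 8-12
    "4536278654329": False,   # invalid class: length > 12
}
for value, expected in representatives.items():
    assert is_valid_password(value) == expected
print("each equivalence class behaves as expected")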
Boundary Value Analysis vs Equivalence Partitioning
Below are some of the differences between BVA and Equivalence Partitioning.

 Uses: BVA considers the input data values from the defined boundaries, while Equivalence
Partitioning examines input data values from the range of equivalence class intervals.
 Testing Values: BVA takes min-1, min, min+1, nominal, max-1, max, and max+1 values as
input test data, while Equivalence Partitioning takes valid and invalid ranges of equivalence
classes for testing the developed application.
 Bug Identification: BVA identifies bugs at boundary values only, while Equivalence
Partitioning helps in identifying bugs within the partitioned equivalence data classes.
 Application Areas: BVA is a part of stress and negative testing, while Equivalence
Partitioning can be performed at any stage of software testing, such as unit testing.
 Usage Condition: BVA is restricted to applications with close boundary values, while the
correctness of Equivalence Partitioning depends on how correctly the tester identifies the
equivalence classes.

However, Boundary Value Analysis and Equivalence Partitioning are used together, as one
helps in finding bugs at the boundaries and the other helps in finding bugs that exist within
the defined range of input data values.
Terminologies for Orthogonal Array Testing
Before understanding the actual implementation of Orthogonal Array Testing, it is essential
to understand the terminologies related to it.

Enlisted below are the widely used terminologies for Orthogonal Array Testing:
 Runs: The number of rows, which represents the number of test conditions to be performed.
 Factors: The number of columns, which represents the number of variables to be tested.
 Levels: The number of values for a factor.

 As the rows represent the number of test conditions (experiment test) to be performed,
the goal is to minimize the number of rows as much as possible.
 Factors indicate the number of columns, which is the number of variables.
 Levels represent the maximum number of values for a factor (i.e., values range from 0 to
Levels – 1).
Together, the values in Levels and Factors are called LRUNS (Levels**Factors).
Implementation Techniques of OATS
The Orthogonal Array Testing technique has the following steps:
#1) Decide the number of variables that will be tested for interaction. Map these variables to
the factors of the array.
#2) Decide the maximum number of values that each independent variable will have. Map
these values to the levels of the array.
#3) Find a suitable orthogonal array with the smallest number of runs. Standard orthogonal
arrays can be looked up in published orthogonal array tables available on various websites.
#4) Map the factors and levels onto the array.
#5) Translate them into suitable Test Cases
#6) Look out for the leftover or special Test Cases (if any)
After performing the above steps, your Array will be ready for testing with all the possible
combinations covered.
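A small sketch (hypothetical factors and levels, not from the notes) of the steps above using the standard L4(2^3) orthogonal array, which covers three two-level factors in four runs:

# The L4(2^3) orthogonal array: every pair of columns contains each of the four
# level combinations (0,0), (0,1), (1,0), (1,1) exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Steps #1, #2 and #4: map hypothetical variables and their values onto factors and levels.
factors = ("browser", "os", "network")
levels = {
    "browser": ("Chrome", "Firefox"),
    "os": ("Windows", "Linux"),
    "network": ("WiFi", "4G"),
}

# Step #5: translate each run into a test case.
for run in L4:
    test_case = {factor: levels[factor][level] for factor, level in zip(factors, run)}
    print(test_case)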
