Unit 3 STQA notes
White box testing is a software testing technique that involves testing the internal structure
and workings of a software application. The tester has access to the source code and uses
this knowledge to design test cases that can verify the correctness of the software at the
code level.
White box testing is also known as structural testing or code-based testing, and it is used to
test the software’s internal logic, flow, and structure. The tester creates test cases to
examine the code paths and logic flows to ensure they meet the specified requirements.
Unit Testing
Checks if each part or function of the application works correctly.
Ensures the application meets design requirements during development.
Integration Testing
Examines how different parts of the application work together.
Done after unit testing to make sure components work well both alone and together.
Regression Testing
Verifies that changes or updates don’t break existing functionality.
Ensures the application still passes all existing tests after updates.
White Box Testing Techniques
One of the main benefits of white box testing is that it allows for testing every part of an
application. To achieve complete code coverage, white box testing uses the following
techniques:
1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence, each line of code
is tested. In the case of a flowchart, every node must be traversed at least once. Since all
lines of code are covered, it helps in pointing out faulty code.
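As a sketch (the function and values below are hypothetical, not from the notes), statement coverage can be illustrated in Python:

```python
def grade(score):
    # Hypothetical function under test
    result = "fail"
    if score >= 40:
        result = "pass"
    return result

# A single test case with score >= 40 executes every statement
# (100% statement coverage), yet the score < 40 path is never
# exercised on its own -- which is why branch coverage is also needed.
tc1 = grade(75)   # every line of grade() has now run
```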
2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision points is
traversed at least once. In a flowchart, all edges must be traversed at least once.
In the corresponding flowchart, four test cases are required so that all branches of all decision points, i.e., all edges, are covered.
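A minimal sketch in Python (a hypothetical function, not the flowchart from the notes): branch coverage requires both outcomes of every decision to be taken at least once.

```python
def sign_and_parity(x, y):
    # Hypothetical function with two decision points
    sign = "non-positive"
    if x > 0:            # decision 1
        sign = "positive"
    if y % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Two cases already traverse every branch (both outcomes of each decision):
assert sign_and_parity(1, 2) == ("positive", "even")
assert sign_and_parity(-1, 3) == ("non-positive", "odd")
```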
3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
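The pseudocode above can be rendered in Python to show how TC1 and TC2 give each atomic condition both a True and a False outcome:

```python
def prints_zero(x, y):
    # Python rendering of the pseudocode above
    return "0" if x == 0 or y == 0 else ""

# TC1: X = 0, Y = 55 -> (x == 0) is True,  (y == 0) is False
# TC2: X = 5, Y = 0  -> (x == 0) is False, (y == 0) is True
assert prints_zero(0, 55) == "0"
assert prints_zero(5, 0) == "0"
# Caveat: Python's `or` short-circuits, so TC1 never actually
# evaluates y == 0 at runtime; coverage tools track this per condition.
```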
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of conditions are
tested at least once. Let’s consider the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
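A sketch of the same example in Python, exercising all four combinations of the two conditions' outcomes (TC1–TC4):

```python
def prints_zero(x, y):
    # Same predicate as above: true when either input is zero
    return "0" if x == 0 or y == 0 else ""

# All 2**2 combinations of the two conditions' outcomes:
inputs = [(0, 0), (0, 5), (55, 0), (55, 5)]   # TC1..TC4
results = [prints_zero(x, y) for x, y in inputs]
assert results == ["0", "0", "0", ""]   # only TC4 makes both conditions False
```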
5. Basis Path Testing
In this technique, a control flow graph is made from the code or flowchart, and then the cyclomatic complexity is calculated. The cyclomatic complexity defines the number of linearly independent paths, so that a minimal set of test cases can be designed, one for each independent path. Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = Number of non-overlapping regions in the graph
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
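The node and edge sets below are reconstructed from paths P1–P4 (an assumption, since the original flow graph figure is not reproduced here); the V(G) formulas agree on four independent paths:

```python
# Edges reconstructed from paths P1-P4 (assumed; figure not shown)
edges = {(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 8), (7, 1)}
nodes = {n for e in edges for n in e}
predicate_nodes = {2, 3, 7}   # nodes with two outgoing edges

v_by_edges = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
v_by_preds = len(predicate_nodes) + 1      # V(G) = P + 1
assert v_by_edges == v_by_preds == 4       # four independent paths, P1-P4
```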
6. Loop Testing
Loops are widely used and these are fundamental to many algorithms hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1 and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count, and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
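The simple-loop schedule above can be generated mechanically (a sketch; choosing m = n // 2 is an assumption for "some m < n"):

```python
def simple_loop_pass_counts(n):
    # Pass counts to exercise a simple loop of maximum size n:
    # skip, 1 pass, 2 passes, m < n passes, and n-1, n, n+1 passes
    m = n // 2   # assumed interior value with m < n
    return [0, 1, 2, m, n - 1, n, n + 1]

counts = simple_loop_pass_counts(10)   # -> [0, 1, 2, 5, 9, 10, 11]
```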
Cyclomatic Complexity
The cyclomatic complexity of a code section is the quantitative measure of the number of
linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the control flow graph of the program.
Formula for Calculating Cyclomatic Complexity
Mathematically, the control flow of a structured program is modeled as a directed graph whose nodes are the basic blocks of the program, with an edge joining two basic blocks whenever control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,
M = E – N + 2P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
When the exit point is connected directly back to the entry point, the graph is strongly connected, and the cyclomatic complexity is defined as
M = E – N + P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case of a single method, P is equal to 1. So, for a single subroutine, the formula can
be defined as
M=E–N+2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and test cases design
are:
Construction of graph with nodes and edges from code.
Identification of independent paths.
Cyclomatic Complexity Calculation
Design of Test Cases
Consider a section of code such as:
A = 10
IF B > C THEN
    A = B
ELSE
    A = C
ENDIF
PRINT A
PRINT B
PRINT C
Control Flow Graph of the above code:
The cyclomatic complexity for the above code is calculated from its control flow graph.
The graph has seven nodes and seven edges, hence the cyclomatic complexity is 7 – 7 + 2 = 2.
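The same result follows from V(G) = P + 1: the code has one predicate node (B > C), so V(G) = 2, and two test cases, one per independent path, suffice. A Python rendering:

```python
def example(b, c):
    # Python rendering of the pseudocode above
    a = 10
    if b > c:
        a = b
    else:
        a = c
    return a

# One test case per independent path (V(G) = 2):
assert example(9, 2) == 9   # takes the B > C branch
assert example(2, 9) == 9   # takes the ELSE branch
```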
Nested Loops – Loops within loops are called nested loops. When testing nested loops, the number of tests increases as the level of nesting increases. The steps for testing nested loops are as follows:
1. Start with the innermost loop; set all other loops to their minimum values.
2. Conduct simple loop testing on the innermost loop.
3. Work outwards.
4. Continue until all loops have been tested.
Let’s convert this control flow graph into a graph matrix. Since the graph has 4 nodes, the graph matrix will have dimension 4 × 4. The matrix entries are filled as follows:
(1, 1) will be filled with ‘a’ as an edge exists from node 1 to node 1
(1, 2) will be filled with ‘b’ as an edge exists from node 1 to node 2. It is important to
note that (2, 1) will not be filled as the edge is unidirectional and not bidirectional
(1, 3) will be filled with ‘c’ as edge c exists from node 1 to node 3
(2, 4) will be filled with ‘d’ as edge exists from node 2 to node 4
(3, 4) will be filled with ‘e’ as an edge exists from node 3 to node 4
The graph matrix formed is shown below:
Connection Matrix:
A connection matrix is a matrix whose entries are edge weights. In its simplest form, when a connection exists between two nodes of the control flow graph, the edge weight is 1; otherwise, it is 0. However, 0 is usually not entered in the matrix cells, to reduce clutter.
For example, if we represent the above control flow graph as a connection matrix, the result would be:
As we can see, the weights of the edges are simply replaced by 1, and the cells that were empty before are left as is, representing 0.
A connection matrix can be used to find the cyclomatic complexity of the control flow graph. Although there are three other methods to find the cyclomatic complexity, this method works well too.
Following are the steps to compute the cyclomatic complexity:
1. Count the number of 1s in each row and write it at the end of the row
2. Subtract 1 from this count for each row (Ignore the row if its count is 0)
3. Add the count of each row calculated previously
4. Add 1 to this total count
5. The final sum in Step 4 is the cyclomatic complexity of the control flow graph
Let’s apply these steps to the graph above to compute the cyclomatic complexity.
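A sketch of the row-count procedure applied to the 4-node connection matrix described above (edges a–e as listed earlier):

```python
# Connection matrix for the graph above: entry [i][j] is 1 when an
# edge runs from node i+1 to node j+1, else 0.
matrix = [
    [1, 1, 1, 0],   # node 1: edges a (self-loop), b, c
    [0, 0, 0, 1],   # node 2: edge d
    [0, 0, 0, 1],   # node 3: edge e
    [0, 0, 0, 0],   # node 4: exit node, no outgoing edges
]

# Steps 1-4: count 1s per row, subtract 1 (ignoring all-zero rows),
# sum the results, then add 1.
total = sum(sum(row) - 1 for row in matrix if sum(row) > 0)
cyclomatic_complexity = total + 1
assert cyclomatic_complexity == 3
```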
We can verify this value for cyclomatic complexity using other methods :
Method-1:
Cyclomatic complexity = E – N + 2P
Here, E = 5, N = 4, and P = 1
Therefore, cyclomatic complexity = 5 – 4 + 2 × 1 = 3
Method-2:
Cyclomatic complexity = d + P, where d is the number of predicate (decision) nodes
Here, d = 2 and P = 1
Therefore, cyclomatic complexity = 2 + 1 = 3
Method-3:
Cyclomatic complexity = number of regions in the graph
Region 1: bounded by edges b, c, d, and e
Region 2: bounded by edge a (the loop)
Region 3: the region outside the graph
Therefore, cyclomatic complexity = 1 + 1 + 1 = 3
Functional Testing
Functional testing is defined as a type of testing that verifies that each function of the
software application works in conformance with the requirement and specification.
This testing is not concerned with the source code of the application. Each functionality
of the software application is tested by providing appropriate test input, expecting the
output, and comparing the actual output with the expected output.
This testing focuses on checking the user interface, APIs, database, security, client or
server application, and functionality of the Application Under Test. Functional testing
can be manual or automated. It verifies the system against its specified functional
requirements.
Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts
that might get affected due to the modifications to ensure that no new errors have been
introduced in the software after the modifications have been made.
Regression means the return of something and in the software field, it refers to the
return of a bug. It ensures that the newly added code is compatible with the existing
code.
In other words, it verifies that a new software update has no adverse impact on the
existing functionality of the software. It is carried out after system maintenance
operations and upgrades.
Nonfunctional Testing
Non-functional testing is a software testing technique that checks the non-functional
attributes of a system, such as performance, usability, reliability, and scalability.
It is designed to test the readiness of a system against non-functional parameters,
which are never addressed by functional testing.
Non-functional testing is as important as functional testing.
Non-functional testing is also known as NFT. It does not test specific behaviors of the
software; instead, it focuses on the software’s performance, usability, and scalability.
Advantages of Black Box Testing
The tester does not need programming skills or knowledge of the internal
implementation to perform Black Box Testing.
It is efficient for testing larger systems.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used to find the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing
There is a possibility of repeating the same tests while implementing the testing
process.
Without clear functional specifications, test cases are difficult to implement.
It is difficult to execute the test cases because of complex inputs at different stages of
testing.
Sometimes, the reason for the test failure cannot be detected.
Some parts of the program may remain untested.
It does not reveal errors in the control structure.
Working with a large sample space of inputs can be exhausting and consumes a lot of
time.
What is Boundary Value Analysis (BVA)?
BVA checks the behavior of an application using test data that lie at boundary values; in
other words, for a range of input data values, the boundary (extreme end) values are
used as test inputs. It is a widely used design technique, as software is most likely to
fail at the upper and lower limits of its input data values.
Example: A software allows people of age 20 to 50 years (both 20 and 50 are inclusive) to
fill a form, for which the user has to enter his age in the age field option of the software.
The boundary values are 20 (min value) and 50 (max value).
Invalid Value: min – 1 (here, 19)
Valid Values: min, min + 1, nominal value, max – 1, max (here, 20, 21, 35, 49, 50)
Invalid Value: max + 1 (here, 51)
From the above, one can clearly identify all valid and invalid test values (values considered while testing the system).
1. Valid values: test values at which the system does not fail and functions properly as per the user requirements.
2. Invalid values: test values that do not meet the system requirements.
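A sketch generating the BVA test inputs for the age example (the nominal value is taken as the midpoint of the range, an assumption):

```python
def bva_values(lo, hi):
    # Boundary value analysis inputs for the inclusive range [lo, hi]
    nominal = (lo + hi) // 2        # assumed nominal value
    valid = [lo, lo + 1, nominal, hi - 1, hi]
    invalid = [lo - 1, hi + 1]
    return valid, invalid

valid, invalid = bva_values(20, 50)
# valid   -> [20, 21, 35, 49, 50]
# invalid -> [19, 51]
```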
What is Equivalence Partitioning (EP)?
It is also termed Equivalence Class Partitioning (ECP). It is a Black Box Testing technique
in which the range of input values is divided into equivalence data classes. The tester
tests one representative input value from each equivalence class; if the output for that
input value is valid, the whole class interval is considered valid, and vice versa.
Example: An application allows the user to enter a password of length 8–12 characters
(minimum 8 and maximum 12). This yields three classes: an invalid equivalence class
(length < 8), a valid equivalence class (length 8–12), and an invalid equivalence class
(length > 12).
Let’s consider some password values for the valid and invalid classes:
1. 1234 is of length 4, which is an invalid password as 4 < 8.
2. 567890234 is of length 9, which is a valid password as 9 lies between 8 and 12.
3. 4536278654329 is of length 13, which is an invalid password as 13 > 12.
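The three equivalence classes can be sketched as a small classifier, testing one representative value per class:

```python
def password_class(pw):
    # Valid password length is 8-12 (inclusive)
    if len(pw) < 8:
        return "invalid: too short"
    if len(pw) > 12:
        return "invalid: too long"
    return "valid"

# One representative value per equivalence class:
assert password_class("1234") == "invalid: too short"          # length 4
assert password_class("567890234") == "valid"                  # length 9
assert password_class("4536278654329") == "invalid: too long"  # length 13
```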
Boundary Value Analysis vs Equivalence Partitioning
Below are some of the differences between BVA and Equivalence Partitioning.
Testing Values:
Boundary Value Analysis – considers min – 1, min, min + 1, nominal, max – 1, max, and max + 1 values as input test data.
Equivalence Partitioning – valid and invalid ranges of equivalence classes are taken for testing the developed application.
Usage Condition:
Boundary Value Analysis – restricted to applications with close boundary values.
Equivalence Partitioning – its correctness depends on how correctly the tester identifies the equivalence classes.
However, Boundary Value Analysis and Equivalence Partitioning are used together, as
one helps find bugs at the boundaries while the other helps find bugs within the
defined range of input data values.
Terminologies for Orthogonal Array Testing
Before understanding the actual implementation of Orthogonal Array Testing, it is essential
to understand the terminologies related to it.
Enlisted below are the widely used terminologies for Orthogonal Array Testing:
Term – Description
Runs – the number of rows, which represents the number of test conditions to be performed.
Factors – the number of columns, which represents the number of variables to be tested.
As the rows represent the number of test conditions (experiment test) to be performed,
the goal is to minimize the number of rows as much as possible.
Factors indicate the number of columns, which is the number of variables.
Levels represent the maximum number of values for a factor (0 to levels – 1).
Together, Levels and Factors determine LRUNS (Levels**Factors), the total number of possible combinations.
Implementation Techniques of OATS
The Orthogonal Array Testing technique has the following steps:
#1) Decide the number of variables that will be tested for interaction. Map these variables to
the factors of the array.
#2) Decide the maximum number of values that each independent variable will have. Map
these values to the levels of the array.
#3) Find a suitable orthogonal array with the smallest number of runs. Standard
orthogonal arrays and their run counts can be looked up in published orthogonal array tables.
#4) Map the factors and levels onto the array.
#5) Translate them into suitable Test Cases
#6) Look out for the leftover or special Test Cases (if any)
After performing the above steps, your Array will be ready for testing with all the possible
combinations covered.
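As a sketch, steps #1–#5 for three 2-level variables map onto the standard L4(2^3) array; the factor names below are hypothetical:

```python
# L4(2^3) orthogonal array: 4 runs, 3 factors, 2 levels each.
# Every pair of columns contains each (level, level) pair exactly once.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

# Steps #4/#5: map factors and levels onto the array (hypothetical names)
factors = {
    "browser": ["Chrome", "Firefox"],
    "os":      ["Windows", "Linux"],
    "network": ["WiFi", "LTE"],
}
names = list(factors)
test_cases = [{names[i]: factors[names[i]][run[i]] for i in range(3)}
              for run in L4]
# 4 runs cover all pairwise combinations of the 3 variables,
# instead of the 2**3 = 8 exhaustive combinations.
```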