Unit_I[1]
Why do we test Software? Black-Box Testing and White-Box Testing, Software Testing Life
Cycle, V-model of Software Testing, Program Correctness and Verification, Reliability versus
Safety, Failures, Errors and Faults (Defects), Software Testing Principles, Program Inspections,
Stages of Testing: Unit Testing, Integration Testing, System Testing
What is Testing?
• Testing is a group of techniques to determine the correctness of the application under a
predefined script; however, testing cannot find all the defects of an application.
• The main intent of testing is to detect failures of the application so that failures can be
discovered and corrected. It does not demonstrate that a product functions properly under all
conditions but only that it is not working in some specific conditions.
• Testing furnishes a comparison of the behaviour and state of the software against a
mechanism through which problems can be recognized.
• The mechanism may include past versions of the same specified product, comparable products,
interfaces of expected purpose, relevant standards, or other criteria, but is not limited to
these.
• Testing includes an examination of code and also the execution of code in various
environments, conditions as well as all the examining aspects of the code.
• In the current scenario of software development, a testing team may be separate from the
development team, so that information derived from testing can be used to correct the
software development process.
• The success of software depends upon acceptance by its targeted audience, an easy graphical
user interface, strong functionality, load handling, etc.
• For example, the audience of banking is totally different from the audience of a video game.
• Therefore, when an organization develops a software product, it can assess whether the software
product will be beneficial to its purchasers and other audience.
Software Testing
• Software Testing is a method to check whether the actual software product matches expected
requirements and to ensure that software product is Defect free.
• It involves execution of software/system components using manual or automated tools to
evaluate one or more properties of interest.
• The purpose of software testing is to identify errors, gaps or missing requirements in contrast
to actual requirements.
Why Software Testing is Important?
• Software Testing is Important because if there are any bugs or errors in the software, it can
be identified early and can be solved before delivery of the software product.
• Properly tested software product ensures reliability, security and high performance which
further results in time saving, cost effectiveness and customer satisfaction.
Identifies defects early. Developing complex applications can leave room for errors. Software
testing is imperative, as it identifies any issues and defects with the written code so they can be fixed
before the software product is delivered.
Improves product quality. When it comes to customer appeal, delivering a quality product is
an important metric to consider. An exceptional product can only be delivered if it's tested effectively
before launch. Software testing helps the product pass quality assurance (QA) and meet the criteria and
specifications defined by the users.
Increases customer trust and satisfaction. Testing a product throughout its development
lifecycle builds customer trust and satisfaction, as it provides visibility into the product's strong and
weak points. By the time customers receive the product, it has been tried and tested multiple times and
delivers on quality.
Detects security vulnerabilities. Insecure application code can leave vulnerabilities that
attackers can exploit. Since most applications are online today, they can be a leading vector for cyber
attacks and should be tested thoroughly during various stages of application development. For example,
a web application published without proper software testing can easily fall victim to a cross-site
scripting attack where the attackers try to inject malicious code into the user's web browser by gaining
access through the vulnerable web application. The untested application thus becomes the vehicle for
delivering the malicious code, which could have been prevented with proper software testing.
Helps with scalability. A type of nonfunctional software testing process, scalability testing is
done to gauge how well an application scales with increasing workloads, such as user traffic, data
volume and transaction counts. It can also identify the point where an application might stop functioning
and the reasons behind it, which may include meeting or exceeding a certain threshold, such as the total
number of concurrent app users.
Saves money. Software development issues that go unnoticed due to a lack of software testing
can haunt organizations later with a bigger price tag. After the application launches, it can be more
difficult to trace and resolve the issues, as software patching is generally more expensive than testing
during the development stages.
Black Box Testing
• Black box testing is a software testing method that does not require knowledge about how an
application is built. It uses a wide range of testing techniques to discover vulnerabilities or
weaknesses in the product, simulating how a real-world attacker would look for exploitable
holes in the software.
• Black box testing is a technique of software testing which examines the functionality of
software without peering into its internal structure or coding.
• The primary source of black box testing is a specification of requirements that is stated by the
customer.
• In this method, the tester selects a function, gives it input values to examine its functionality,
and checks whether the function gives the expected output or not.
• If the function produces the correct output, it passes the test; otherwise, it fails.
• The test team reports the result to the development team and then tests the next function.
• After completing the testing of all functions, if severe problems remain, the software is given back
to the development team for correction.
Generic steps of black box testing
o The black box test is based on the specification of requirements, so it is examined in the
beginning.
o In the second step, the tester creates positive and negative test scenarios by selecting valid
and invalid input values, to check whether the software processes them correctly.
o In the third step, the tester develops test cases using techniques such as decision table testing,
all-pairs (pairwise) testing, equivalence partitioning, error guessing, cause-effect graphing, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if there is any flaw in the software, it is fixed and the software is tested again.
i. Decision Table Testing
Given our inputs (the member status and whether or not it’s the member’s birthday), we can define what
the expected discount should be. A decision table provides us with an overview of the cases we should
be testing.
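Since the original decision table figure is not reproduced here, the sketch below assumes a simple set of discount rules (gold members 10%, silver 5%, non-members 0%, plus an extra 5% on the member's birthday); the function name `discount` and the rules themselves are illustrative assumptions, not the original table.

```python
# Hypothetical discount rules, assumed for illustration.
def discount(status: str, is_birthday: bool) -> int:
    base = {"gold": 10, "silver": 5, "none": 0}[status]
    return base + (5 if is_birthday else 0)

# Each row of the decision table becomes one test case:
# (status, is_birthday, expected discount)
decision_table = [
    ("gold",   False, 10),
    ("gold",   True,  15),
    ("silver", False,  5),
    ("silver", True,  10),
    ("none",   False,  0),
    ("none",   True,   5),
]

for status, birthday, expected in decision_table:
    assert discount(status, birthday) == expected
```

The table doubles as test data: every combination of the two inputs is one row, so coverage of the table is coverage of the behaviour.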
Of course, the number of combinations increases when you have more inputs and more possible
values. That is where pairwise testing can help us.
ii. Pairwise Testing
• Pairwise testing is sometimes called all-pairs testing.
• Most software bugs are caused by the combination of specific values of two parameters.
• It’s increasingly less common that bugs are caused by a combination of more parameters.
• This allows us to reduce the number of test cases significantly when many combinations of
inputs are possible.
Let’s assume a system that accepts three parameters: a Boolean, one of three colors and a value
between one and four. This gives us a total of 2 x 3 x 4 possible combinations, meaning we would have
to write 24 test cases.
With pairwise testing, we can reduce this to 12 cases. The way to do so is to take the following steps:
• List the values of the parameter with the most possible values in a column. Repeat each value
n times, where n is the number of possible values of the parameter with the second most possible
values
• Then, in a second column, list the values of the second parameter and make sure you have made
each possible combination with the first value.
Now in a third column, add the possible values of the last parameter. Again, making sure to make each
possible combination with the previous parameter. This gives us the resulting table with 12 test cases:
We went from 24 possible test cases to 12. The more parameters and possible values you have, the more
you can gain from pairwise testing. To know how many test cases you should end up with, you can
multiply the number of possible values of the two parameters with the most possible values.
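The column procedure described above can be sketched in a few lines. The parameter names and values follow the running example (a Boolean, three colors, and a value between one and four); the check at the end confirms that all value pairs of any two parameters are covered by the 12 rows.

```python
from itertools import product

booleans = [True, False]           # 2 values
colors = ["red", "green", "blue"]  # 3 values
numbers = [1, 2, 3, 4]             # 4 values (most possible values)

# Column method: repeat each value of the largest parameter 3 times
# (3 = second most values), cycle the colors row by row, and
# alternate the booleans.
rows = []
for i in range(len(numbers) * len(colors)):  # 4 * 3 = 12 rows
    rows.append((numbers[i // 3], colors[i % 3], booleans[i % 2]))

params = [numbers, colors, booleans]  # same order as the row tuples

def covered(idx_a: int, idx_b: int) -> bool:
    """True if every pair of values of parameters a and b occurs in some row."""
    seen = {(row[idx_a], row[idx_b]) for row in rows}
    return seen == set(product(params[idx_a], params[idx_b]))

assert len(rows) == 12  # down from 2 * 3 * 4 = 24 exhaustive cases
assert covered(0, 1) and covered(0, 2) and covered(1, 2)
```

Note that 12 = 4 × 3, the product of the two largest value counts, matching the rule of thumb above.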
iii. Equivalence Partitioning
• Equivalence partitioning means dividing the input domain into classes (partitions) and designing
test cases for each partition.
• It covers both valid and invalid input classes.
• Many applications have points of entry that accept a range of values.
Example: Let’s say a system has an entry point that accepts an integer between 0 and 10.
With these restrictions, we can identify three partitions:
negative infinity to -1
0 to 10
11 to positive infinity
We can also say the inputs belong to a certain equivalence class. With these equivalence classes or
partitions, you can define three test cases:
a negative number, for example, -4,
a valid number, for example, 5,
an invalid positive number, for example, 12
Boundary value analysis complements these partitions by also testing the values at the partition edges:
• -1
• 0
• 10
• 11
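A minimal sketch of these partitions and boundary values as executable checks; the `accepts` function is a hypothetical stand-in for the entry point that accepts integers between 0 and 10.

```python
def accepts(value: int) -> bool:
    """Hypothetical entry point that accepts integers between 0 and 10."""
    return 0 <= value <= 10

# One representative per equivalence class:
assert accepts(-4) is False   # class: negative infinity to -1
assert accepts(5) is True     # class: 0 to 10
assert accepts(12) is False   # class: 11 to positive infinity

# Plus the boundary values at the partition edges:
assert accepts(-1) is False
assert accepts(0) is True
assert accepts(10) is True
assert accepts(11) is False
```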
iv. State Transition Testing
• If a system can only be in a limited number of states, and it can move from one state to
another based on some input and predefined rules, then it can be regarded as a "state
machine."
• Given this state machine, we can define a starting state, an input, and the expected resulting
state.
• These scenarios will define our tests.
For example, we can describe a simple media player as a state machine. To keep it simple, this media
player has only three commands: play, stop and pause. This results in the following state machine:
This diagram results in the following state transition table:
This final table now lists our test scenarios. Each test will set up the system in a given state, send the
command to the system and verify the new state.
A state transition table is similar to a decision table. The difference is that in a decision table you can
have multiple inputs and outputs per scenario, whereas in a state transition table you have one starting
state, one action, and one final state per scenario.
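The media player's transition table can be written down directly as test data. The state and command names below are assumptions, since the original diagram and table are not reproduced here; a command with no defined transition is treated as leaving the state unchanged.

```python
# Transition table for the media player, reconstructed from the
# description above (state/command names are assumptions).
transitions = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("playing", "stop"):  "stopped",
    ("paused",  "play"):  "playing",
    ("paused",  "stop"):  "stopped",
}

def next_state(state: str, command: str) -> str:
    # Undefined (state, command) pairs leave the state unchanged.
    return transitions.get((state, command), state)

# Each row of the state transition table becomes one test:
assert next_state("stopped", "play") == "playing"
assert next_state("playing", "pause") == "paused"
assert next_state("paused", "stop") == "stopped"
assert next_state("stopped", "pause") == "stopped"  # no-op transition
```

Each test sets up a starting state, sends one command, and verifies the resulting state, exactly as the scenarios in the table prescribe.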
White Box Testing
• The term 'white box' is used because of the internal perspective of the system.
• The clear box, white box, or transparent box name denotes the ability to see through the
software's outer shell into its inner workings.
• White Box Testing is a testing technique in which software’s internal structure, design, and
coding are tested to verify input-output flow and improve design, usability, and security.
• In white box testing, code is visible to testers, so it is also called Clear box testing, Open box
testing, Transparent box testing, Code-based testing, and Glass box testing.
Generic steps of white box testing
o Design all test scenarios and test cases, and prioritize them in order of priority.
o This step involves the study of code at runtime to examine the resource utilization, not accessed
areas of the code, time taken by various methods and operations and so on.
o In this step, internal subroutines are tested: whether internal subroutines such as non-public
methods and interfaces are able to handle all types of data appropriately or not.
o This step focuses on testing of control statements like loops and conditional statements to check
the efficiency and accuracy for different data inputs.
o In the last step white box testing includes security testing to check all possible security
loopholes by looking at how the code handles security.
The white box testing contains various tests, which are as follows:
o Path testing
o Loop testing
o Condition testing
a. Path testing
• In the path testing, we will write the flow graphs and test all independent paths.
• Here, writing the flow graph means drawing a graph that represents the flow of the program
and shows how each part of the program connects to the others, as we can see in the below image:
Testing all the independent paths means that, for example, for a path from main() to function G, we
first set the parameters and test whether the program is correct along that particular path, and in
the same way we test all other paths and fix the bugs.
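A small sketch of the idea: the toy function below has two decisions, so its flow graph has three independent paths, and one test case is written per path. The function and its paths are illustrative, not taken from the original flow graph image.

```python
def classify(a: int, b: int) -> str:
    """Toy function with two decisions, giving three independent paths."""
    if a > 0:          # decision 1
        result = "a-positive"
    else:
        result = "a-nonpositive"
    if b > 0:          # decision 2
        result += ",b-positive"
    return result

# Cyclomatic complexity = decisions + 1 = 3, so three independent
# paths suffice to cover the flow graph:
assert classify(1, 1)  == "a-positive,b-positive"     # path: T, T
assert classify(1, -1) == "a-positive"                # path: T, F
assert classify(-1, 1) == "a-nonpositive,b-positive"  # path: F, T
```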
b. Loop testing
• In loop testing, we test loops such as while, for, and do-while, and also check whether the
terminating condition works correctly and whether the loop bounds are adequate.
For example: we have one program where the developers have written a loop of about 50,000 cycles.
{
    while (count < 50000)
    ......
    ......
}
We cannot test this program manually through all 50,000 loop cycles. So we write a small program that
exercises all 50,000 cycles, as we can see in the program below. That test, P, is written in the same
language as the source program, and this is known as a unit test. It is written by the developers
only.
Test P
{
    ......
    ......
}
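A concrete sketch of such a unit test in Python; the summing loop below is a hypothetical stand-in for the 50,000-cycle loop, and the test exercises the boundary iteration counts rather than stepping through every cycle by hand.

```python
def total(n: int) -> int:
    """Loop under test: sums 1..n (stands in for the 50,000-cycle loop)."""
    s = 0
    i = 1
    while i <= n:   # terminating condition being tested
        s += i
        i += 1
    return s

# Loop testing checks zero, one, typical, and maximum passes:
assert total(0) == 0                           # loop body never entered
assert total(1) == 1                           # exactly one iteration
assert total(10) == 55                         # typical case
assert total(50_000) == 50_000 * 50_001 // 2   # full 50,000 cycles
```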
As we can see in the below image, we have various requirements such as 1, 2, 3, 4. The developer then
writes programs 1, 2, 3 and 4 for the corresponding requirements. Here the application contains
hundreds of lines of code.
• The developer will do the white box testing, and they will test all four programs line by line
of code to find the bugs.
• If they found any bug in any of the programs, they will correct it.
• Then they have to test the system again; this process takes a lot of time and effort and
slows down the product release.
Now, suppose we have another case, where the client wants to modify the requirements. The developer
then makes the required changes and has to test all four programs again, which takes a lot of time
and effort.
To avoid this, we write tests for each program, where the developer writes the test code in the same
language as the source code. These test programs, also known as unit test programs, are linked to
the main program and executed with it.
Therefore, if there is any requirement of modification or bug in the code, then the developer makes the
adjustment both in the main program and the test program and then executes the test program.
c. Condition testing
• In this, we will test all logical conditions for both true and false values; that is, we will verify
for both if and else condition.
For example:
if (condition)    // true
{
    ......
}
else              // false
{
    ......
}

The program must work correctly for both outcomes: when the condition is true, the if-branch
executes, and when the condition is false, the else-branch executes.
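A minimal sketch of condition testing, with one test for each outcome of the condition plus the boundary where it flips; the `grade` function and its pass mark of 40 are illustrative assumptions.

```python
def grade(score: int) -> str:
    """Both outcomes of the condition must be exercised."""
    if score >= 40:   # condition under test
        return "pass"
    else:
        return "fail"

assert grade(75) == "pass"   # condition evaluates to true
assert grade(20) == "fail"   # condition evaluates to false
assert grade(40) == "pass"   # boundary where the condition flips
```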
Following are the significant differences between white box testing and black box testing:
White box testing:
o The developers can perform white box testing.
o In it, we look into the source code and test the logic of the code.
o The developer must know the internal design of the code.

Black box testing:
o The test engineers perform black box testing.
o In it, we verify the functionality of the application against the requirement specification.
o There is no need to know the internal design of the code.
Software Testing Life Cycle
The software testing life cycle (STLC) contains the following steps:
1. Requirement Analysis
2. Test Plan Creation
3. Environment setup
4. Test case Execution
5. Defect Logging
6. Test Cycle Closure
Requirement Analysis:
The first step of the manual testing procedure is requirement analysis. In this phase, the tester
analyses the requirement document of the SDLC (Software Development Life Cycle) to examine the
requirements stated by the client. After examining the requirements, the tester makes a test plan to
check whether the software meets the requirements or not.
Test Plan Creation:
Test plan creation is a crucial phase of the STLC, where all the testing strategies are defined. The
tester determines the estimated effort and cost of the entire project. This phase takes place after
successful completion of the Requirement Analysis phase. The testing strategy and effort estimation
documents are the deliverables of this phase. Test case execution can start after successful
completion of test plan creation.
Environment setup:
Setup of the test environment is an independent activity and can be started along with Test Case
Development. This is an essential part of the manual testing procedure as without environment testing
is not possible. Environment setup requires a group of essential software and hardware to create a test
environment. The testing team is not involved in setting up the testing environment; it is the senior
developers who create it.
Entry Criteria:
o Test strategy and test plan document.
o Test case document.
o Testing data.
Activities:
o Prepare the list of software and hardware by analyzing the requirement specification.
o After the setup of the test environment, execute the smoke test cases to check the readiness of
the test environment.
Deliverables:
o Execution report.
o Defect report.
Test Case Execution:
Test case execution takes place after the successful completion of test planning. In this phase, the
testing team starts test case development and execution activity. The testing team writes down the
detailed test cases and also prepares the test data if required. The prepared test cases are reviewed
by peer members of the team or the Quality Assurance leader.
RTM (Requirement Traceability Matrix) is also prepared in this phase. The Requirement Traceability
Matrix is an industry-standard format used for tracking requirements. Each test case is mapped to the
requirement specification. Backward and forward traceability can be done via the RTM.
Defect Logging:
Testers and developers evaluate the completion criteria of the software based on test coverage, quality,
time consumption, cost, and critical business objectives. This phase determines the characteristics and
drawbacks of the software. Test cases and bug reports are analyzed in depth to detect the type of defect
and its severity.
Defect logging analysis mainly works to find out the defect distribution by severity and type. If any
defect is detected, the software is returned to the development team to fix the defect, and the
software is then re-tested on all aspects of the testing.
Test Cycle Closure:
Once the test cycle is fully completed, the test closure report and test metrics are prepared.
Entry Criteria:
o Test case execution report.
o Defect report.
Activities:
o Evaluate the completion criteria of the software based on test coverage, quality, time
consumption, cost, and critical business objectives.
o Defect logging analysis finds out the defect distribution by categorizing defects by type and
severity.
Deliverables:
o Closure report.
o Test metrics.
The test cycle closure report includes all the documentation related to software design, development,
testing results, and defect reports.
This phase evaluates the strategy of development, testing procedure, possible defects in order to use
these practices in the future if there is a software with the same specification.
Entry Criteria:
o All documents and reports related to the software.
Activities:
o Evaluate the strategy of development, the testing procedure, and possible defects, in order to
reuse these practices in the future for software with the same specification.
Deliverables:
o Test closure report.
V-Model of Software Testing
• Verification: It involves a static analysis method (review) done without executing code. It is
the process of evaluating the product development process to find out whether the specified
requirements are met.
• Validation: It involves dynamic analysis method (functional, non-functional), testing is done
by executing code. Validation is the process to classify the software after the completion of the
development process to determine whether the software meets the customer expectations and
requirements.
So the V-Model contains Verification phases on one side and Validation phases on the other. The
Verification and Validation processes are joined by the coding phase at the base of the V shape; thus
it is known as the V-Model.
1. Business requirement analysis: This is the first step, where product requirements are understood
from the customer's side. This phase involves detailed communication to understand the customer's
expectations and exact requirements.
2. System Design: In this stage, system engineers analyze the business requirements of the
proposed system by studying the user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that it should realize all the
requirements; it typically consists of the list of modules, brief functionality of each module, their
interface relationships, dependencies, database tables, architecture diagrams, technology details,
etc. Integration test plans are developed in this phase.
4. Module Design: In the module design phase, the system is broken down into small modules. The
detailed design of the modules is specified, which is known as Low-Level Design (LLD).
5. Coding Phase: After designing, the coding phase is started. Based on the requirements, a
suitable programming language is decided. There are some guidelines and standards for coding.
Before checking in the repository, the final build is optimized for better performance, and the
code goes through many code reviews to check the performance.
1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design
phase. These UTPs are executed to eliminate errors at code level or unit level. A unit is the
smallest entity which can independently exist, e.g., a program module. Unit testing verifies that
the smallest entity can function correctly when isolated from the rest of the code/units.
2. Integration Testing: Integration Test Plans are developed during the Architectural Design
Phase. These tests verify that groups created and tested independently can coexist and
communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design phase. Unlike Unit
and Integration Test Plans, System Test Plans are composed by the client's business team. System
testing ensures that the expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis part. It
includes testing the software product in the user environment. Acceptance tests reveal
compatibility problems with the other systems available within the user environment. They also
discover non-functional problems, such as load and performance defects, in the real user
environment.
Program Correctness and Verification:
• The focus of software testing is to run the candidate program on selected input data and check
whether the program behaves correctly with respect to its specification.
• The behavior of the program can be analyzed only if we know what is a correct behavior; hence
the study of correctness is an integral part of software testing.
• The study of program correctness enables us:
➢ To analyze candidate programs at arbitrary levels of granularity;
➢ To make assumptions on the behavior of the program at specific stages in its
execution
➢ To verify (or disprove) these assumptions; the same assumptions can be
checked at run-time during testing, giving us valuable information as we try to
diagnose the program or establish its correctness.
Reliability Versus Safety:
Definition: Reliability
➢ The reliability of a system is the probability that it accomplishes a function under specified
environmental conditions and over a specified period of time.
➢ Reliability is oriented towards the purpose of the system and the intended action; it is the
extent to which a system can be expected to perform the specified task.
➢ Reliability requirements are concerned with making a system failure-free.
Definition: Safety
➢ Safety is the probability that no catastrophic accidents will occur during system operation over
a specified period of time.
➢ Safety looks at the consequences and possible accidents.
➢ Safety requirements are concerned with making a system accident-free.
➢ It is the task of the safety requirements to guarantee that the system does not reach a hazardous
or unsafe state, where an independent event may cause an accident.
➢ Moreover, it must be transparent from the safety requirements what to do if an event in the
environment leads to an unsafe state.
➢ From the point of view of safety, it does not matter if the system does not achieve its purpose,
as long as the safety requirements are not violated.
➢ On the other hand, it is possible for a system to be ultra-reliable but unsafe: for example, a
system with a formally verified software system in which a safety-critical situation has not been
specified.
Differences:
Furthermore, there may be a tradeoff between reliability and safety:
In case of an internal fault it may be necessary to power down the system in order to guarantee
safety, thus reducing the reliability of the system due to safety requirements (for example, an
internal failure in a nuclear power plant).
There is another difference between safety and reliability: software has a reliability of its own;
it is possible to analyse the reliability of software packages. On the other hand, it is meaningless
to speak about software safety on its own. For a safety analysis, it is necessary to look at the
total system; the software must be seen in the context of the particular application.
Reliability and safety are important qualities of real-time systems, which have to be specified,
analysed and verified separately, so that an optimal solution for the application may be found.
Hardware safety and hardware reliability are both well-established techniques; models and procedures
exist to increase reliability and to eliminate the risk of hazards. Software reliability modelling is
only beginning to emerge; in recent years, qualitative analysis of software safety has been carried out.
The safety and the reliability of the system can be influenced by the following three classes of
results of the computer system:
✓ intended result: The result is the intended one for the problem, not only for the
specification. Correct value and timely.
✓ unintended result: Incorrect value, or wrong sequencing.
✓ no result: No result is obtained due to a system crash, an internal error detection
mechanism (program exceptions, time-out), or missing a real-time requirement.
Errors, Faults and Failures:
Errors:
✓ An error is a mistake, misconception, or misunderstanding on the part of a software developer.
✓ In the category of developer we include software engineers, programmers, analysts, and testers.
✓ For example, a developer may misunderstand a design notation, or a programmer might type a
variable name incorrectly.
✓ Example: a memory bit gets stuck but the CPU does not access this data; a software bug in a
subroutine is not visible while the subroutine is not called.
❖ Faults (Defects):
✓ A fault (defect) is introduced into the software as the result of an error.
✓ It is an anomaly in the software that may cause it to behave incorrectly, and not according to
its specification.
✓ Faults or defects are sometimes called "bugs".
✓ Use of the latter term trivializes the impact faults have on software quality.
✓ Use of the term "defect" is also associated with software artifacts such as requirements and
design documents.
✓ Defects occurring in these artifacts are also caused by errors and are usually detected in the
review process.
✓ Examples: a software bug, a random hardware fault, a memory bit stuck, an omission or
commission fault in data transfer, etc.
❖ Failures:
✓ A failure is the inability of a software system or component to perform its required functions
within specified performance requirements.
✓ Presence of an error might cause a whole system to deviate from its required operation.
✓ One of the goals of safety-critical systems is that an error should not result in system failure.
✓ During execution of a software component or system, a tester, developer, or user observes
that it does not produce the expected results.
✓ In some cases a particular type of misbehaviour indicates a certain type of fault is present.
We can say that the type of misbehaviour is a symptom of the fault.
✓ An experienced developer/tester will have a knowledge base of fault/symptom/failure cases
stored in memory.
✓ Incorrect behaviour can include producing incorrect values for output variables, an incorrect
response on the part of a device, or an incorrect image on a screen.
✓ During development, failures are usually observed by testers, and faults are located and
repaired by developers.
✓ When the software is in operation, users may observe failures, which are reported back to the
development organization so that repairs can be made.
✓ A fault in the code does not always produce a failure. In fact, faulty software may operate
over a long period of time without exhibiting any incorrect behaviour.
✓ However, when the proper conditions occur, the fault will manifest itself as a failure.
✓ Voas is among the researchers who discuss these conditions, which are as follows:
✓ The input to the software must cause the faulty statement to be executed.
✓ The faulty statement must produce a different result than the correct statement.
✓ This event produces an incorrect internal state for the software.
✓ The incorrect internal state must propagate to the output, so that the result of the fault is
observable.
✓ Software that easily reveals its faults as failures is said to be more testable.
✓ From the tester's point of view this is a desirable software attribute. Testers need to work
with designers to ensure that software is testable.
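These conditions can be illustrated with a deliberately faulty function: the fault is always present in the code, but it only surfaces as a failure when the faulty statement is executed, corrupts the internal state, and the corruption propagates to an observable output. The `average` function is a made-up example, not from the original text.

```python
def average(values):
    # Fault: the divisor should be len(values), not len(values) + 1.
    return sum(values) / (len(values) + 1)

# Voas' conditions in action:
# The faulty statement executes, but with all-zero input the wrong
# divisor happens not to change the result: no infection, no failure.
assert average([0, 0, 0]) == 0.0

# Here the wrong internal state is produced AND propagates to the
# output: a failure is observed (the correct answer is 3.0).
assert average([3, 3]) == 2.0
```

This is also why faulty software can run for a long time without misbehaving: until an input satisfies all three conditions, the fault stays hidden.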
Software Testing Principles
❖ In the software domain, principles may also refer to rules or codes of conduct relating to
professionals who design, develop, test, and maintain software systems.
Principle 1: Testing is the process of exercising a software component using a selected set of test
cases, with the intent of revealing defects, and evaluating quality.
❖ The term "software component" means any unit of software ranging in size and complexity
from an individual procedure or method to an entire software system.
❖ The term "defects" represents any deviations in the software that have a negative impact
on its functionality, performance, reliability, security, and/or any other of its specified quality attributes.
Principle 2: When the test objective is to detect defects, then a good test case is one that
has a high probability of revealing yet undetected defects.
❖ Testers must carry out testing in the same way as scientists carry out experiments.
❖ Testers need to create a hypothesis and work towards proving or disproving it; that is, they
must prove the presence or absence of a particular type of defect.
Principle 3: Test results should be inspected meticulously.
❖ Testers need to carefully inspect and interpret test results. Several erroneous and costly
scenarios may occur if care is not taken.
❖ A failure may be overlooked, and the test may be granted a "pass" status when in reality the
software has failed the test.
❖ The defect may be revealed at some later stage of testing, but in that case it may be more
costly and difficult to locate and repair.
Principle 4: A test case must contain the expected output or result.
❖ The test case is of no value unless there is an explicit statement of the expected outputs or
results. Expected outputs allow the tester to determine:
✓ whether a defect has been revealed,
✓ the pass/fail status for the test.
❖ It is very important to have a correct statement of the output so that time is not spent due to
misconceptions about the outcome of a test.
❖ The specification of test inputs and outputs should be part of test design activities.
Principle 5: Test cases should be developed for both valid and invalid input conditions.
❖ A tester must not assume that the software under test will always be provided with valid inputs. Inputs
may be incorrect for several reasons.
❖ Software users may have misunderstandings, or lack information about the nature of the
inputs
❖ They often make typographical errors even when complete/correct information is available.
❖ Devices may also provide invalid inputs due to erroneous conditions and malfunctions.
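A small sketch of this principle: test cases cover both valid inputs and invalid ones (typographical errors, out-of-range values, empty input). The `parse_age` function and its range limits are assumptions made for illustration.

```python
# Hypothetical unit under test: parses an age field typed by a user.
def parse_age(text):
    """Return the age as an int; raise ValueError for invalid input."""
    age = int(text)          # raises ValueError for typos such as "2o"
    if not 0 <= age <= 130:  # reject out-of-range values
        raise ValueError("age out of range")
    return age

# Valid-input test cases
assert parse_age("25") == 25
assert parse_age("0") == 0

# Invalid-input test cases: typo, negative, out of range, empty string
for bad in ["2o", "-1", "200", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the unit must reject invalid input cleanly
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```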
Principle 6: The probability of the existence of additional defects in a software component is
proportional to the number of defects already detected in that component.
❖ The higher the number of defects already detected in a component, the more likely it is to
have additional defects when it undergoes further testing.
❖ If there are two components A and B, and testers have found 20 defects in A and 3 defects in
B, then the probability of the existence of additional defects in A is higher than B.
Principle 7: Testing should be carried out by a group that is independent of the
development group.
❖ This principle is true for psychological as well as practical reasons. It is difficult for a
developer to admit that software he/she has created and developed can be faulty.
Principle 8: Tests must be repeatable and reusable.
❖ The tester needs to record the exact conditions of the test, any special events that occurred,
equipment used, and carefully note the results.
❖ This information is very useful to the developers when the code is returned for debugging so
that they can duplicate test conditions.
❖ It is also useful for tests that need to be repeated after defect repair.
Principle 9: Testing should be planned.
❖ Test plans should be developed for each level of testing. The objective for each level should
be described in the associated plan. The objectives should be stated as quantitatively as possible.
Principle 10: Testing activities should be integrated into the software life cycle.
❖ Testing activity should be integrated into the software life cycle starting as early as in the
requirements analysis phase, and continue on throughout the software life cycle in parallel with
development activities.
Principle 11: Testing is a creative and challenging task.
❖ A tester needs to have knowledge, from both experience and education, of how software is
specified, designed, and developed.
❖ A tester needs to have knowledge of fault types and where faults of a certain type might
occur in code construction.
❖ A tester needs to reason like a scientist and make hypotheses that relate to presence of
specific types of defects.
❖ A tester needs to have a good understanding of the problem domain of the software that
he/she is testing. Familiarity with a domain may come from educational, training, and work-related
experiences. A tester needs to create and document test cases.
❖ To design the test cases the tester must select inputs often from a very wide domain.
❖ The selected test cases should have the highest probability of revealing a defect. Familiarity
with the domain is essential.
❖ A tester needs to design and record test procedures for running the tests. A tester needs to
plan for testing and allocate proper resources.
❖ A tester needs to execute the tests and is responsible for recording results.
❖ A tester needs to analyse test results and decide on success or failure for a test.
❖ This involves understanding and keeping track of a huge amount of detailed information.
❖ A tester needs to learn to use tools and keep up to date with the newest test tools.
❖ A tester needs to work and cooperate with requirements engineers, designers, and developers,
and often must establish a working relationship with clients and users.
LEVELS OF TESTING
❖ Execution-based software testing, especially for large systems, is usually carried out at different
levels.
✓ Unit Testing
✓ Integration Testing
✓ System Testing
✓ Acceptance Testing
❖ At the unit level a single component is tested. A principal goal is to detect functional and structural defects
in the unit.
❖ At the integration level several components are tested as a group, and the tester investigates
component interactions.
❖ At the system level the system as a whole is tested and a principal goal is to evaluate attributes such
as usability, reliability, and performance.
❖ The major testing levels for both types of systems are similar.
❖ The nature of the code that results from each developmental approach demands different testing
strategies to identify the individual components and to assemble them into subsystems.
❖ For both types of systems the testing process begins with the smallest units or components to identify
functional and structural defects.
❖ Testers check for defects and adherence to specifications. Proper interaction at the component
interfaces is of special interest at the integration level.
❖ White box tests are used to reveal defects in control and data flow between the integrated modules.
❖ System test begins when all of the components have been integrated successfully. It usually requires
the bulk of testing resources.
❖ At the system level the tester looks for defects, but the focus is on evaluating performance, usability,
reliability, and other quality-related requirements.
❖ If the system is being custom made for an individual client then the next step following system test
is acceptance test. This is a very important testing stage for the developers.
During acceptance test the development organization must show that the software meets all of the
client’s requirements.
❖ A successful acceptance test provides a good opportunity for developers to request recommendation
letters from the client.
❖ Software developed for the mass market goes through a series of tests called alpha and beta tests.
❖ Alpha tests bring potential users to the developer’s site to use the software. Developers note any
problems.
❖ Beta tests send the software out to potential users who use it under real-world conditions and report
defects to the developing organization. Implementing all of these levels of testing require a large
investment in time and organizational resources.
Levels of Testing:
1. UNIT TEST
❖ In object-oriented systems both the method and the class/object have been suggested by researchers
as the choice for a unit.
❖ A unit may also be a small-sized COTS component purchased from an outside vendor that is
undergoing evaluation by the purchaser, or a simple module retrieved from an in-house reuse library.
❖ Since the software component being tested is relatively small in size and simple in function, it is
easier to design, execute, record, and analyse test results.
❖ If a defect is revealed by the tests it is easier to locate and repair since only the one unit is under
consideration.
i. The Need for Preparation
❖ The principal goal of unit testing is to ensure that each individual software unit is functioning
according to its specification.
❖ Good testing practice is based on unit tests that are planned and public.
❖ Planning includes designing tests to reveal defects such as functional description defects,
algorithmic defects, data defects, and control logic and sequence defects.
❖ Resources should be allocated, and test cases should be developed, using both white and black
box test design strategies.
❖ The unit should be tested by an independent tester (someone other than the developer) and the
test results and defects found should be recorded as a part of the unit history.
❖ Each unit should also be reviewed by a team of reviewers, preferably before the unit test.
❖ Unit test in many cases is performed informally by the unit developer soon after the module is
completed, and it compiles cleanly.
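As a concrete illustration of a planned unit test, the sketch below tests a small unit against its specification, covering both its normal function and its control logic for the error case. The `Stack` class and its behaviour are assumptions made for the example, not code from this text.

```python
import unittest

# Hypothetical unit under test: a stack with push/pop, specified to be
# last-in-first-out and to raise IndexError when popping an empty stack.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStack(unittest.TestCase):
    # Functional test: last-in, first-out behaviour per the specification
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    # Control-logic test: popping an empty stack must be an error
    def test_pop_empty_raises(self):
        s = Stack()
        with self.assertRaises(IndexError):
            s.pop()

if __name__ == "__main__":
    unittest.main(exit=False)
```

Recording each test with its expected result, as above, gives the unit the documented test history that good practice calls for.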
ii. UNIT TESTING PLANNING
❖ A general unit test plan should be prepared. It may be prepared as a component of the master
test plan or as a stand-alone plan.
❖ It should be developed in conjunction with the master test plan and the project plan for each
project
❖ Documents that provide inputs for the unit test plan are the project plan, as well as the
requirements, specification, and design documents that describe the target units.
❖ The phases allow a steady evolution of the unit test plan as more information becomes
available.
Phase 1: Describe Unit Test Approach and Risks
❖ In this phase of unit testing planning the general approach to unit testing is outlined.
The test planner:
✓ Identifies test risks;
✓ Describes techniques to be used for designing the test cases for the units;
✓ Describes techniques to be used for data validation and recording of test results;
✓ Describes the requirements for test harnesses and other software that interfaces with
the units to be tested, for example, any special objects needed for testing object-oriented units.
Phase 2: Identify Unit Features to be tested
❖ This phase requires information from the unit specification and detailed design
description.
The planner determines which features of each unit will be tested, for example:
functions, performance requirements, states, and state transitions, control structures, messages,
and data flow patterns.
Phase 3: Add Levels of Detail to the Plan
❖ In this phase the planner refines the plan as produced in the previous two phases. The planner
adds new details to the approach, resource, and scheduling portions of the unit test plan.
❖ Part of the preparation work for unit test involves unit test design. It is important to specify
the test cases and the test procedures. Test case data should be tabulated for ease of use and
reuse.
❖ As part of the unit test design process, developers/testers should also describe the
relationships between the tests.
❖ Test suites can be defined that bind related tests together as a group. All of this test design
information is attached to the unit test plan.
❖ Test cases, test procedures, and test suites may be reused from past projects.
❖ Test case design at the unit level can be based on use of the black and white box test design
strategies.
❖ Both of these approaches are useful for designing test cases for functions and procedures.
❖ They are also useful for designing tests for the individual methods (member functions)
contained in a class.
❖ White-box methods can be used because the size of the unit is small. This approach gives the tester the
opportunity to exercise logic structures and/or data flow sequences, or to use mutation analysis,
all with the goal of evaluating the structural integrity of the unit.
❖ Some black box–based testing is also done at unit level; however, the bulk of black box
testing is usually done at the integration and system levels and beyond.
❖ In the case of a smaller-sized COTS component selected for unit testing, a black box test
design approach may be the only option.
❖ It should be mentioned that for units that perform mission-, safety-, or business-critical functions,
it is often useful and prudent to design stress, security, and performance tests at the unit level if
possible.
❖ This approach may prevent larger scale failures at higher levels of test.
iii. THE TEST HARNESS
Figure 3.1: Test Harness
❖ Drivers and stubs, as shown in the figure, are developed as procedures and functions for traditional
imperative-language based systems.
❖ For object-oriented systems, developing drivers and stubs often means the design and implementation
of special classes to perform the required testing tasks.
❖ The test harness itself may be a hierarchy of classes. The test planner must realize that the higher
the degree of functionality for the harness, the more resources it will require to design,
implement, and test.
Developers/testers will have to decide depending on the nature of the code under test, just how complex
the test harness needs to be.
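A minimal sketch of a test harness follows: a driver supplies inputs, calls the unit under test, and checks results, while a stub stands in for a lower-level module that is not yet available. All names (`net_price`, the tax-rate lookup) are hypothetical.

```python
# Stub: replaces the real tax-rate lookup module, which is assumed to be
# unavailable during unit test. It returns a fixed, predictable value.
def get_tax_rate_stub(region):
    return 0.10

# Unit under test, written to accept its dependency as a parameter so
# that the stub can replace the real module during testing.
def net_price(gross, region, rate_lookup):
    return gross * (1 + rate_lookup(region))

# Driver: supplies test input, invokes the unit, and checks the result.
def driver():
    result = net_price(100.0, "EU", get_tax_rate_stub)
    assert abs(result - 110.0) < 1e-9
    print("unit test passed")

driver()
```

This is the simplest possible harness; as the text notes, the more functionality the harness provides (logging, test-case tables, reporting), the more resources it costs to build and maintain.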
iv. RUNNING THE UNIT TESTS AND RECORDING RESULTS
❖ Before running the tests, testers check that the test harness, and any other supplemental supporting
tools, are available.
❖ The testers then proceed to run the tests and record results. The status of the test efforts for a unit,
and a summary of the test results, could be recorded in a simple worksheet format.
❖ Differences from expected behaviour should be described in detail. During testing the tester may
determine that additional tests are required.
❖ The test set will have to be augmented and the test plan documents should reflect these changes.
❖ When a unit fails a test there may be several reasons for the failure.
❖ The most likely reason for the failure is a fault in the unit implementation (the code).
❖ Other likely causes that need to be carefully investigated by the tester are a fault in the test case
specification, a fault in the test procedure, or a fault in the test environment (for example, in the harness).
❖ When a unit has been completely tested and finally passes all of the required tests it is ready for
integration.
❖ Under some circumstances a unit may be given a conditional acceptance for integration test.
❖ This may occur when the unit fails some tests, but the impact of the failure is not significant.
❖ When testing of the units is complete, a test summary report should be prepared.
❖ This is a valuable document for the groups responsible for integration and system tests.
❖ Finally, the tester should ensure that the test cases, test procedures, and test harnesses are preserved
for future reuse.
2. Integration testing
• Integration testing is the second level of the software testing process and comes after unit testing.
• In this testing, units or individual components of the software are tested in a group.
• The focus of the integration testing level is to expose defects at the time of interaction between
integrated components or units.
• Unit testing uses modules for testing purposes, and these modules are combined and tested in
integration testing.
• The Software is developed with a number of software modules that are coded by different
coders or programmers. The goal of integration testing is to check the correctness of
communication among all the modules.
• Once all the components or modules are working independently, we need to check the data
flow between the dependent modules; this checking is known as integration testing.
Let us see one sample example of a banking application with an amount-transfer feature.
o First, we will log in as user P, go to amount transfer, and send an amount of Rs 200; a confirmation
message should be displayed on the screen stating that the amount was transferred successfully. Now log out as P, log in
as user Q, go to the amount balance page, and check the balance in that account = present balance +
received balance. If so, the integration test is successful.
o Also, we check whether the balance has reduced by Rs 200 in user P's account.
o Click on the transactions page: in both P and Q, a message should be displayed regarding the date and
time of the amount transfer.
o We go for the integration testing only after the functional testing is completed on each module
of the application.
o First, determine the test case strategy through which executable test cases can be prepared
according to test data.
o Examine the structure and architecture of the application and identify the crucial modules to
test them first and also identify all possible scenarios.
o Choose input data for test case execution. Input data plays a significant role in testing.
o If we find any bugs then communicate the bug reports to developers and fix defects and retest.
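The amount-transfer scenario above can be sketched as an integration test: two unit-tested modules (a balance store and a transfer function) are combined, and the test checks the data flow between them. The class, function, and balances are illustrative assumptions.

```python
# Module 1 (unit-tested separately): account balance storage.
class AccountStore:
    def __init__(self):
        self.balances = {"P": 500, "Q": 300}

    def debit(self, user, amount):
        self.balances[user] -= amount

    def credit(self, user, amount):
        self.balances[user] += amount

# Module 2 (unit-tested separately): transfer logic, which depends on module 1.
def transfer(store, sender, receiver, amount):
    store.debit(sender, amount)
    store.credit(receiver, amount)
    return "amount transferred successfully"

# Integration test: verify the data flow P -> Q for Rs 200.
store = AccountStore()
message = transfer(store, "P", "Q", 200)
assert message == "amount transferred successfully"  # confirmation shown to P
assert store.balances["P"] == 300   # P's balance reduced by Rs 200
assert store.balances["Q"] == 500   # Q's balance = present + received balance
```

Each module may pass its own unit tests, yet the transfer could still debit without crediting; only a test across the interface, like this one, reveals such a defect.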
Although all modules of the software application have already been tested in unit testing, errors may still exist due to the
following reasons:
1. Each module is designed by an individual software developer whose programming logic may
differ from that of the developers of other modules, so integration testing becomes essential to determine
that the software modules work together.
2. To check the interaction of software modules with the database, whether it is erroneous or
not.
3. Requirements can be changed or enhanced at the time of module development. These new
requirements may not have been tested at the level of unit testing, hence integration testing becomes
mandatory.
4. Incompatibility between modules of software could create errors.
5. To test hardware's compatibility with software.
6. If exception handling is inadequate between modules, it can create bugs.
Incremental Approach
• In the Incremental Approach, modules are added one by one, in ascending order or according to
need. The selected modules must be logically related.
• Generally, two or more than two modules are added and tested to determine the correctness of
functions.
• The process continues until the successful testing of all the modules.
• In this type of testing, there is a strong relationship between the dependent modules.
• Suppose we take two or more modules and verify that the data flow between them is working
fine. If it is, then add more modules and test again.
For example, suppose we have a Flipkart application; we will perform incremental integration testing,
and the flow of the application would look like this:
o Top-Down approach
o Bottom-Up approach
a. Top-Down Approach
The top-down testing strategy deals with the process in which higher-level modules are tested with
lower-level modules until the successful completion of testing of all the modules. Major design flaws
can be detected and fixed early because critical modules are tested first. In this type of method, we will add
the modules incrementally, one by one, and check the data flow in the same order.
In the top-down approach, we will be ensuring that each module we are adding is the child of the
previous one, for example, Child C is a child of Child B, and so on.
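The top-down strategy can be sketched as follows: the parent module is tested first, with its child replaced by a stub until the real child is ready. The module names and return values are purely illustrative.

```python
# Stub standing in for the not-yet-integrated Child B module.
def child_b_stub():
    return "stub data"

# Parent module under test; its child is injected so a stub can replace it.
def module_a(child_b=child_b_stub):
    return f"A received: {child_b()}"

# Test the parent first, against the stub.
assert module_a() == "A received: stub data"

# Later, the real Child B replaces the stub and the same test flow is re-run,
# moving testing incrementally down the hierarchy.
def child_b_real():
    return "real data"

assert module_a(child_b_real) == "A received: real data"
```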
Advantages:
o An early prototype is possible.
Disadvantages:
o Lower-level modules are tested late in the process.
o Stubs are needed to stand in for lower-level modules that are not yet integrated, which takes extra effort.
b. Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower-level modules are tested with
higher-level modules until the successful completion of testing of all the modules. Top-level critical
modules are tested last, so defects in them are found late. In other words, we will be adding the modules
from the bottom to the top and checking the data flow in the same order.
In the bottom-up method, we will ensure that each module we are adding is the parent of the previous
one.
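The bottom-up strategy, by contrast, tests the lowest-level module first, using a driver in place of the parent that will eventually call it. Module names and values are again illustrative.

```python
# Lowest-level module, fully implemented and tested first.
def child_c(x):
    return x * 2

# Driver: plays the role of the not-yet-tested parent module by feeding
# inputs to Child C and checking its outputs.
def driver_for_child_c():
    assert child_c(3) == 6
    assert child_c(0) == 0
    return "Child C passed"

print(driver_for_child_c())
```

Once Child C passes, the real parent replaces the driver and testing moves up one level in the hierarchy.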
Advantages
o Do not need to wait for the development of all the modules as it saves time.
Disadvantages
o Critical modules are tested last due to which the defects can occur.
o There is no possibility of an early prototype.
c. Hybrid (Sandwich) Method
In this approach, both the Top-Down and Bottom-Up approaches are combined for testing. In this process,
top-level modules are tested with lower-level modules and lower-level modules are tested with high-level
modules simultaneously. There is less possibility of defects slipping through because each module interface
is tested.
Advantages
o The hybrid method provides features of both Bottom Up and Top Down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.
Disadvantages
o This method needs a higher level of concentration as the process is carried out in both directions
simultaneously.
o It is a complicated method.
We will go for this method when the data flow is very complex and when it is difficult to find which
module is a parent and which is a child. In such a case, we will create the data in one module, bang it on all other
existing modules, and check if the data is present. Hence, it is also known as the Big Bang method.
Big Bang Method
In this approach, testing is done by integrating all modules at once. It is convenient for small software
systems; if used for large software systems, identification of defects is difficult.
Since this testing can be done only after the completion of all modules, the testing team has less time for the
execution of this process, so internally linked interfaces and high-risk critical modules can be missed
easily.
Advantages:
o It is convenient for small software systems.
o Little planning is required before all the modules are integrated.
Disadvantages:
o Identification of defects is difficult because finding the error where it came from is a problem,
and we don't know the source of the bug.
o Small modules can be missed easily.
o The time provided for testing is very limited.
o We may miss testing some of the interfaces.
Let us see an example for a better understanding of the incremental integration testing methods:
Example1
In the below example, the development team develops the application and sends it to the CEO of the
testing team. The CEO then logs in to the application, generates a username and password, and
sends a mail to the manager. After that, the CEO tells them to start testing the application.
Then the manager produces usernames and passwords and
sends them to the test leads. The test leads send them to the test engineers for further testing
purposes. This order, from the CEO down to the test engineers, is top-down incremental integration testing.
In the same way, when the test engineers are done with testing, they send a report to the test leads, who
then submit a report to the manager, and the manager sends a report to the CEO. This process, from the
test engineers up to the CEO, is known as bottom-up incremental integration testing.
3. SYSTEM TESTING
❖ When integration tests are completed, a software system has been assembled and its major
subsystems have been tested.
❖ At this point the developers/testers begin to test it as a whole. System test planning should begin
at the requirements phase with the development of a master test plan and requirements-based (black
box) tests.
❖ There are many components of the plan that need to be prepared such as test approaches, costs,
schedules, test cases, and test procedures.
❖ System testing itself requires a large amount of resources. The goal is to ensure that the system
performs according to its requirements.
❖ System test evaluates both functional behaviour and quality requirements such as reliability,
usability, performance and security.
❖ This phase of testing is especially useful for detecting external hardware and software interface
defects, for example, those causing race conditions, and deadlocks, problems with interrupts and
exception handling, and ineffective memory usage.
❖ System test often requires many resources, special laboratory equipment, and long test times; it
is usually performed by a team of testers.
❖ The best scenario is for the team to be part of an independent testing group.
a. Functional Testing
❖ Functional Tests may overlap with acceptance tests. Functional tests at the system level are
used to ensure that the behaviour of the system adheres to the requirements specification.
❖ All functional requirements for the system must be achievable by the system.
❖ Functional tests are black box in nature. The focus is on the inputs and proper outputs for
each function.
❖ An examination of a requirements document shows that there are two major types of
requirements:
a) Functional requirements. Users describe what functions the software should perform.
b) Quality requirements. These are non-functional in nature but describe quality levels
expected for the software.
b. Performance Testing
❖ The goal of system performance tests is to see if the software meets the performance
requirements.
❖ Testers also learn from performance tests whether there are any hardware or software factors
that impact the system's performance.
❖ Resources for performance testing must be allocated in the system test plan.
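A performance test can be sketched as measuring response time against a stated requirement. The workload, the 0.5-second budget, and the function name below are illustrative assumptions, not figures from this text.

```python
import time

# Hypothetical operation under test: processes a batch of records.
def process_records(n):
    return sum(i * i for i in range(n))

# The pass/fail criterion comes from the performance requirement,
# which must be stated quantitatively in the requirements document.
REQUIREMENT_SECONDS = 0.5

start = time.perf_counter()
process_records(100_000)
elapsed = time.perf_counter() - start

# The verdict is decided against the requirement, not against a guess.
assert elapsed < REQUIREMENT_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"performance test passed in {elapsed:.3f}s")
```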
c. Stress Testing
❖ When a system is tested with a load that causes it to allocate its resources in maximum
amounts, this is called stress testing.
❖ Stress testing is important because it can reveal defects in real-time and other types of
systems, as well as weak areas where poor design could cause unavailability of service.
❖ When the system operates correctly under conditions of stress, clients have confidence that
the software can perform as required.
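A minimal stress-test sketch: drive a resource to its maximum capacity and verify that it degrades gracefully rather than failing unpredictably. The bounded queue and its limit of 100 are illustrative choices.

```python
import queue

# A resource with a hard limit, standing in for any bounded system resource.
q = queue.Queue(maxsize=100)

# Load the resource to its maximum allocation.
for i in range(100):
    q.put_nowait(i)

# Under stress, further requests should be rejected cleanly, not crash
# the system or corrupt its state.
try:
    q.put_nowait(100)
    raise AssertionError("queue accepted work beyond its limit")
except queue.Full:
    print("stress test passed: overload rejected gracefully")
```

A real stress test would drive the deployed system with concurrent load, but the principle is the same: the interesting behaviour appears only at maximum resource allocation.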
d. Configuration Testing
❖ Configuration testing also requires many resources including the multiple hardware devices
used for the tests.
❖ If a system does not have specific requirements for device configuration changes then large-scale
configuration testing is not essential. Where it is required, the objectives of configuration testing are to:
✓ Show that all the configuration changing commands and menus work properly.
✓ Show that all interchangeable devices are really interchangeable, and that they each
enter the proper states for the specified conditions.
✓ Show that the systems’ performance level is maintained when devices are
interchanged, or when they fail.
e. Security Testing
❖ Designing and testing software systems to ensure that they are safe and secure is a big issue
facing software developers and test specialists. The basic security concepts that need to be covered by security testing are:
✓ Confidentiality
✓ Integrity
✓ Authentication
✓ Authorization
✓ Availability
✓ Non-repudiation
❖ Security testing evaluates system characteristics that relate to the availability, integrity, and
confidentially of system data and services.
❖ Users/clients should be encouraged to make sure their security needs are clearly known at
requirements time, so that security issues can be addressed by designers and testers.
❖ Both criminal behaviour and errors that do damage can be perpetrated by those inside and
outside of an organization.
Attacks can be random or systematic. Damage can be done through various means such as
✓ Viruses
✓ Trojan horses
✓ Trap doors
✓ Illicit channels
❖ The effects of security breaches could be extensive and can cause:
✓ Loss of information
✓ Corruption of information
✓ Misinformation
✓ Privacy violations
✓ Denial of service
There are four main focus areas to be considered in security testing (Especially for web
sites/applications):
✓ Network security: This involves looking for vulnerabilities in the network
infrastructure (resources and policies).
✓ System software security: This involves assessing weaknesses in the various
software (operating system, database system, and other software) the application depends on.
✓ Client-side application security: This deals with ensuring that the client (browser or
any such tool) cannot be manipulated.
✓ Server-side application security: This involves making sure that the server code and
its technologies are robust enough to fend off any intrusion.
❖ There are seven main types of security testing as per the Open Source Security Testing
Methodology Manual (OSSTMM). They are explained as follows:
✓ Vulnerability Scanning: This is done through automated software to scan a system
against known vulnerability signatures.
✓ Security Scanning: It involves identifying network and system weaknesses, and later
provides solutions for reducing these risks. This scanning can be performed both manually
and automatically.
✓ Penetration testing: This kind of testing simulates an attack from a malicious hacker.
It involves analysis of a particular system to check for potential vulnerabilities to an
external hacking attempt.
✓ Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends controls
and measures to reduce the risk.
✓ Security Auditing: This is an internal inspection of applications and operating systems
for security flaws. An audit can also be done via line-by-line inspection of code.
✓ Ethical hacking: This is hacking an organization's software systems. Unlike malicious
hackers, who steal for their own gains, the intent is to expose security flaws in the system.
✓ Posture Assessment: This combines security scanning, ethical hacking, and risk
assessments to show the overall security posture of an organization.
f. Recovery Testing
❖ Recovery testing subjects a system to losses of resources in order to determine if it
can recover properly from these losses.
❖ This type of testing is especially important for transaction systems.
❖ Tests would determine if the system could return to a well-known state, and that no
transactions have been compromised. Systems with automated recovery are designed for this
purpose.
❖ Testers focus on the following areas during recovery testing,
1. Restart. The current system state and transaction states are discarded.
2. Switchover. The ability of the system to switch to a new processor must be
tested.