Basics of Software Testing - Ch 1
Testing methods
Software Testing
• Software testing is a procedure to verify whether the actual results match the expected results.
• Software testing is performed to provide assurance that the software system does not contain any defects.
• Software testing helps us to find out whether all user requirements
are fulfilled by our software or not.
• Testing is executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
• Software testing is defined as performing Verification and Validation
of the Software Product for its correctness and accuracy of working.
• Software Testing is the process of executing a program with the intent
of finding errors.
• A successful test is one that uncovers an as-yet-undiscovered error.
Testing can show the presence of bugs but never their absence.
• Testing is a support function that helps developers look good by
finding their mistakes before anyone else does.
• Execution of a work product with intent to find a defect.
Objective of testing
• Finding defects introduced by the programmer while developing the software.
• Gaining confidence in and providing information about the level of
quality.
• To prevent defects.
• To make sure that the end result meets the business and user
requirements.
• To ensure that it satisfies the BRS, that is, the Business Requirement Specification, and the SRS, that is, the System Requirement Specification.
• To gain the confidence of the customers by providing them a quality
product.
Terminology
• Failure-A failure is said to occur whenever the external behavior of a system
does not conform to that prescribed in the system specification. A software
fault becomes a software failure only when it is activated.
• Error: A human action that produces an incorrect result.
An error can be a grammatical error in one or more of the code lines, or a
logical error in carrying out one or more of the client‘s requirements.
• Fault: An incorrect step, process, or data definition in a computer program.
• Defect: If the actual result of the software deviates from the one expected & anticipated by the test team while testing the software, it results in a defect.
• Bug: A bug can be defined as the initiation of an error or a problem due to which a fault, failure, incident or anomaly occurs.
Skills of Software Tester
1. Analytical Skills:
• These are the skills that help break a complex software system into smaller units to gain a better understanding of the system.
• This kind of skill also helps in creating appropriate test cases.
2. Communication Skills:
• A software tester must have verbal as well as written communication skills.
• He/she should be a good listener.
• He/she should be able to convince both the developer and the customer of the need for testing the module.
3. Presentation Skills:
• A good tester must also possess good presentation skills to provide
the exact status of the test project and application under test.
• The tester is supposed to present test results to the developer team, customer & management team & convince them of further improvements.
4. Technical skill:
• A good tester must have knowledge of database, programming &
commands.
• He should have knowledge and hands on experience of test
management tools and automation tools.
• He should be enthusiastic to learn new techniques & skills required
for testing.
Test case
• A test case is a set of conditions under which a tester will determine whether a functionality of the software is working correctly or not.
• Test case is a well-documented procedure designed to test the
functionality of the feature in the system. For designing the test case,
it needs to provide set of inputs and its corresponding expected
outputs.
When you test-to-pass, you really assure only that the software minimally
works. You do not push its capabilities, and you do not see what you can do to
break it. These are the most straightforward test cases. When
designing & running your test cases, always run the test-to-pass cases first.
Designing & running test cases with the sole purpose of breaking the
software is called testing to fail or error forcing.
When to Start Testing of Software (Entry Criteria)
• Entry criteria specify when a phase can be started. They also include the
inputs for the phase and the tasks or steps that need to be carried out in
that phase.
• Verification specifies methods of checking that tasks have been carried out
correctly. Clear entry criteria make sure that a given phase does not start
prematurely.
• The verification for each phase helps to prevent defects, or at least minimize them.
• Entry criteria are a set of conditions that permit a task to be performed; in the absence of any of these conditions, the task cannot be performed.
Entry Criteria gives the prerequisite items that must be completed before testing
can begin.
To start the test case development phase, the following conditions should be met:
• The requirement document should be available.
• Complete understanding of the application flow is required.
• The Test Plan Document should be ready.
When to Stop Testing of Software (Exit Criteria)
• Exit criteria stipulate the conditions under which one can consider a phase as done.
• Exit criteria define the items that must be completed before testing can be concluded.
• Exit criteria may include:
1. All test plans have been run
2. All requirements coverage has been achieved.
3. All severe bugs are resolved.
Verification & validation model (V model)
1. Overall Business Requirement: In this first phase of the development cycle, the product
requirements are understood from the customer's perspective. This phase involves detailed
communication with the customer to understand their expectations and exact requirements.
Acceptance test design planning is done at this stage, as business requirements can be used as an
input for acceptance testing.
2. Software Requirement: Once the product requirements are clearly known, the system can be
designed. The system design comprises understanding & detailing the complete hardware,
software & communication set-up for the product under development. The system test plan is
designed based on the system design. Doing this at an earlier stage leaves more time for actual
test execution later.
3. High level design: High level specifications are understood & designed in this phase. Usually
more than one technical approach is proposed &, based on the technical & financial feasibility,
the final decision is taken. The system design is broken down further into modules taking up
different functionality.
4. Low level design: In this phase the detailed internal design for all the system modules is specified.
It is important that the design is compatible with the other modules in the system & other external
systems. Component tests can be designed at this stage based on the internal module design.
5. Coding: The actual coding of the system modules designed in the design phase is taken up in the
coding phase. The most suitable programming language is decided based on the coding guidelines &
standards.
Verification vs Validation

Verification:
1. Verification is a process of checking documents, design, code, and program in order to check whether the software has been built according to the requirements or not.
2. It includes checking documents, design, codes and programs.
3. Verification is static testing.
4. It does not include the execution of the code.
5. Methods used in verification are reviews, walkthroughs, inspections and desk-checking.
6. It can find bugs in the early stages of development.
7. The quality assurance team does verification.
8. It comes before validation.
9. It consists of checking of documents/files and is performed by humans.

Validation:
1. Validation is a dynamic mechanism of testing and validating whether the software product actually meets the exact needs of the customer or not.
2. It includes testing and validating the actual product.
3. Validation is dynamic testing.
4. It includes the execution of the code.
5. Methods used in validation are Black Box Testing, White Box Testing and non-functional testing.
6. It can find only the bugs that could not be found by the verification process.
7. Validation is executed on software code with the help of the testing team.
8. It comes after verification.
9. It consists of execution of the program and is performed by computer.
Quality Assurance
• It consists of process-oriented activities.
• A part of quality management focused on providing confidence that quality
requirements will be fulfilled.
• They measure the process, identify the deficiencies/weakness and suggest
improvements.
• Relates to all products that will ever be created by a process
• Activities of QA are Process Definition and Implementation, Audits and Training
• Verification is an example of QA
• Preventive activities.
• Quality assurance is a proactive process
• QA is a managerial tool
Quality Control
• It consists of product-oriented activities.
• A part of quality management focused on fulfilling quality
requirements.
• They measure the product, identify the deficiencies/weakness and
suggest improvements.
• Relates to specific product
• Quality Control Activities of QC are Reviews and Testing
• Validation/Software Testing is an example of QC
• It is a corrective process.
• Quality control is a reactive process.
• QC is a corrective tool
• QA QC video link https://www.youtube.com/watch?v=0viDDeGLODs
Methods of testing
Static testing:
In static testing code is not executed. Rather it manually checks the code,
requirement documents, and design documents to find errors.
Main objective of this testing is to improve the quality of software products by
finding errors in early stages of the development cycle.
Static testing requires only the source code of the product, not the executables.
It involves selected people going through the code to find out whether
• the code works according to the functional requirement
• the code has been written in accordance with the design developed earlier in the life cycle
• the code for any functionality has been missed out
• the code handles errors properly
Dynamic testing/ Structural testing:
The dynamic testing is done by executing program. Main objective of this
testing is to confirm that the software product works in conformance with
the business requirements.
• Structural testing takes into account the code, code structure, internal
design, & how they are coded.
• Tests are actually run by the computer.
• Runs predesigned test cases.
The box approach: White Box testing
• White-box testing is the detailed investigation of the internal logic and
structure of the code. White-box testing is also called glass box, open-box,
or clear box testing.
• In white box testing, test cases are created by looking at the code to
detect any potential failure scenarios.
Advantages & disadvantages of white box testing
Advantages:
1. Each procedure can be tested thoroughly.
2. It is easily automated.
3. Due to knowledge of the internal coding it is easy to find out which type of input data can help in testing the application efficiently.

Disadvantages:
1. Knowledge of the internal structure & coding is required, so a skilled tester is needed for white box testing.
2. It is costly.
3. Sometimes it is difficult to test each & every path of the software & hence many paths may go untested.
4. Missing functionality cannot be identified.
Classification of white box testing

White box testing is classified into:
1. Static white box testing (structural testing: examining the code without executing it)
2. Dynamic white box testing (testing the code while executing it, e.g., coverage techniques such as function coverage)
Inspections
It is a method with a high degree of formalism. The focus of this method is to detect all faults, violations &
other side effects. The reviewers are subject matter experts who review the work product. In this method:
1. Thorough preparation is required before an inspection/review
2. Enlisting multiple diverse views.
3. Assigning specific roles to the multiple participants
4. Going sequentially through the code in a structured manner.
There are four roles in inspection:
1. Author of the code: the person who wrote the code
2. Moderator: who is expected to formally run the inspection according to the process
3. Inspectors: are the people who actually provide review comments for the code.
4. Scribe: who takes detail notes during the inspection meeting and circulates them to the inspection team
after the meeting.
The author or moderator selects review team. The inspection team assembles at the agreed time for
inspection meeting. The moderator takes the team sequentially through the program code. If any defect is
found they will classify it as minor or major. A scribe documents the defects. For major defects the review
team meets again to check whether the bugs are resolved or not.
Walkthrough
Author presents their developed code to an audience
of peers. Peers question and comment on the code to identify as
many defects as possible. It involves no prior preparation by the
audience. Usually involves minimal documentation of either the
process or any arising issues. Defect tracking in a walkthrough is
inconsistent. A walkthrough is an informal evaluation meeting
which does not require preparation.
The product is described by the producer, who asks for the
comments of participants. The result is information given to the
participants about the product rather than corrections to it.
Technical reviews
i. Formal Review:
A formal review is the process under which static white box testing is
performed. A formal review can range from a simple meeting between two
programmers to a detailed, rigorous inspection of the code.
There are four essential elements to a formal review
1. Identify Problems: The goal of the review is to find problems with the
software not just items that are wrong, but missing items as well.
2. Follow Rules: A fixed set of rules should be followed
3. Prepare: Each participant is expected to prepare for & contribute to the
review.
4. Write a Report: The review group must produce a written report
summarizing the results of the review & make that report available to the rest
of the product development team.
ii. Peer Reviews:
The easiest way to get team members together and doing their first formal
reviews of the software is through peer reviews, the least formal method.
Sometimes called buddy reviews, this method is really more of a discussion.
Peer reviews are often held with just the programmer who wrote the code and
one or two other programmers or testers acting as reviewers.
Small group simply reviews the code together and looks for problems and
oversights.
To assure that the review is highly effective all the participants need to make sure
that the four key elements of a formal review are in place:
Look for problems, follow rules, prepare for the review, and write a report.
As peer reviews are informal, these elements are often scaled back. Still, just
getting together to discuss the code can find bugs.
Functional Testing
• FUNCTIONAL TESTING is a type of software testing that validates the software
system against the functional requirements/specifications. The purpose of
functional tests is to test each function of the software application by
providing appropriate input and verifying the output against the functional
requirements. It is normally performed at the System Testing & Acceptance
Testing levels.
How to do Functional Testing ?
• Understand the Functional Requirements
• Identify test input or test data based on requirements
• Compute the expected outcomes with selected test input values
• Execute test cases
• Compare actual and computed expected results
Functional & non-functional testing

Functional testing:
1. Functional testing verifies each function/feature of the software.
2. Functional testing can be done manually.
3. Functional testing is based on the customer's requirements.
4. Functional testing has a goal to validate software actions.
5. A functional testing example is to check the login functionality.
6. Functional testing describes what the product does.
7. Examples: Unit testing, Integration testing, System testing, Acceptance testing.

Non-functional testing:
1. Non-functional testing verifies non-functional aspects like performance, usability, reliability, etc.
2. Non-functional testing is hard to perform manually.
3. Non-functional testing is based on the customer's expectations.
4. Non-functional testing has a goal to validate the performance of the software.
5. A non-functional testing example is to check that the dashboard loads in 2 seconds.
6. Non-functional testing describes how the product works.
7. Examples: Performance testing, Load testing, Stress testing, Volume testing, Security testing, Installation testing, Recovery testing.
Unit/Code Functional Testing
i. Code Functional Testing involves tracking a piece of data completely through the software.
ii. At the unit test level this would just be through an individual module or function.
iii. The same tracking could be done through several integrated modules or even through the
entire software product although it would be more time consuming to do so.
iv. During data flow tracking, a check is made that variables are properly declared and
that the loops used are declared and used properly.
For example:

#include <stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter n: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
        fact = fact * i;
    printf("Factorial = %d\n", fact);
    return 0;
}
• These are some quick checks that a developer performs before subjecting the
code to more extensive code coverage testing or complexity testing.
• The developer can perform certain tests knowing the input variables &
corresponding expected output. This checks out any obvious mistakes.
• For modules with complex logic or condition, the developer can build a
debug version of the product by putting intermediate print statements &
making sure the program is passing through right loops & iterations the
right number of times.
Code coverage testing
Code coverage testing involves designing & executing test cases and finding
out the percentage of code that is covered by testing.
• It is one form of white box testing which finds the areas of the program
not exercised by a set of test cases. It also creates additional test cases
to increase coverage and determines a quantitative measure of code coverage.
• Code coverage testing is performed using the following techniques:
1. statement coverage
2. path coverage
3. condition coverage
4. function coverage
Statement coverage/ Line coverage
• Statement Coverage is a white box testing technique in which all the executable
statements in the source code are executed at least once. It is used for
calculation of the number of statements in source code which have been
executed. The main purpose of Statement Coverage is to cover all the possible
paths, lines and statements in source code.
Statement coverage = (number of executed statements / total number of statements) × 100
Simple Program:
1. PRINT "Hello World"
2. PRINT "Date is"; Date$
3. PRINT "Time is"; Time$
4. END

Running this program once gives 100% statement coverage of lines 1 to 4.
Statement coverage can tell you whether every statement is executed, but it
can't tell you whether you have taken all the paths.
Path coverage
• Path coverage is a requirement that each path in the program (e.g.,
through if statements and loops) has been executed at least once during
testing. (It is sometimes also described as requiring that each branch
condition has been true at least once and false at least once during
testing.)
Cyclomatic Complexity

Method 1: V(G) = E – N + 2 = 8 – 7 + 2 = 3
Method 2: V(G) = P + 1 = 2 + 1 = 3

(E = number of edges, N = number of nodes, and P = number of predicate
nodes in the control flow graph.)
Black Box testing
• Black Box testing involves looking at the specifications and does
not require examining the code of the program. It is done from
customer’s point of view. The testers know the input and
expected output. They will check whether with given input they
are getting expected output or not.
• Black Box Testing is a software testing method in which the
functionalities of software applications are tested without having
knowledge of internal code structure, implementation details
and internal paths. Black Box Testing mainly focuses on input and
output of software applications and it is entirely based on
software requirements and specifications. It is also known as
Behavioral Testing.
• Different techniques of Black Box testing are:
1. Requirement based testing
2. Boundary value analysis
3. Equivalence partitioning
Advantages & disadvantages of Black box testing
Advantages:
• Efficient when used on large systems.
• Since the tester and developer are independent of each other, testing is balanced and unprejudiced.
• The tester can be non-technical.
• There is no need for the tester to have detailed functional knowledge of the system.
• Tests will be done from an end user's point of view, because the end user should accept the system.
• Test cases can be designed as soon as the functional specifications are complete.

Disadvantages:
• Test cases are challenging to design without clear functional specifications.
• It is difficult to identify tricky inputs if the test cases are not developed based on specifications.
• It is difficult to identify all possible inputs in limited testing time. As a result, writing test cases may be slow and difficult.
• There are chances of having unidentified paths during the testing process.
• There is a high probability of repeating tests already performed by the programmer.
Requirement Based Testing
• Requirements-based testing is a testing approach in which test cases, conditions and data are
derived from requirements. It includes functional tests and also non-functional attributes such as
performance, reliability or usability. Requirement review ensures that requirements are consistent,
correct, complete & testable.
Stages in Requirements based Testing:
• Defining Test Completion Criteria - Testing is completed only when all the functional and non-
functional testing is complete.
• Design Test Cases - A Test case has five parameters namely the initial state or precondition,
data setup, the inputs, expected outcomes and actual outcomes.
• Execute Tests - Execute the test cases against the system under test and document the results.
• Verify Test Results - Verify if the expected and actual results match each other.
• Verify Test Coverage - Verify if the tests cover both functional and non functional aspects of the
requirement.
• Track and Manage Defects - Any defects detected during the testing process go through the
defect life cycle and are tracked to resolution. Defect statistics are maintained, which give us
the overall status of the project.
Positive & Negative testing
• Positive Testing: Positive testing is a testing process in which the system is validated against
valid input data. In this testing the tester checks only a valid set of values and checks whether
the application behaves as expected with its expected inputs. The main intention of this testing
is to check that the software application does not show an error for valid input. Such testing is
carried out keeping a positive point of view & executing only the positive scenarios. Positive
testing always tries to prove that a given product and project meets the requirements and
specifications.
• Negative Testing: Negative testing is a testing process in which the system is validated against
invalid input data. A negative test checks whether the application behaves as expected with
invalid inputs. The main intention of this testing is to check that the software application shows
an appropriate error when it is supposed to, rather than failing silently or crashing. Such testing
is carried out keeping a negative point of view & executing test cases only for invalid sets of
input data. The main reason behind negative testing is to check the stability of the software
application against the influence of different varieties of incorrect validation data sets.
Boundary Value Analysis
• Most of the defects in software products are around conditions and boundaries. By
conditions, we mean situations wherein, based on the values of various variables,
certain actions would have to be taken. By boundaries, we mean “limits” of values of
the various variables.
• This is one of the software testing techniques in which the test cases are designed to
include values at the boundary. If the input data is used within the boundary value
limits, then it is said to be positive testing. If the input data is picked outside the
boundary value limits, then it is said to be negative testing.
• Boundary value analysis is another black box test design technique and it is used to
find the errors at boundaries of input domain rather than finding those errors in the
center of input.
• Each boundary has a valid boundary value and an invalid boundary value. Test cases
are designed based on the both valid and invalid boundary values. Typically, we
choose one test case from each boundary.
Examples of BVA
1) A system can accept numbers from 1 to 10; all other numbers are invalid values.
2) Under this technique, the boundary values 0, 1, 2, 9, 10, 11 can be tested.

Boundary values are validated against both the valid boundaries and invalid
boundaries. The invalid boundary cases for the above example are:
0 - for the lower limit boundary value
11 - for the upper limit boundary value
Equivalence Partitioning
• Equivalence partitioning divides the input data into classes (groups) of values that the
system should treat the same way. It helps to reduce the total number of test cases from
infinite to finite. Test cases selected from these classes ensure coverage of all possible
scenarios.
Example 1
Assume, we have to test a field which accepts Age
18 – 56
Valid Input: 18 – 56
Invalid Input: less than or equal to 17 (<=17),
greater than or equal to 57 (>=57)
Valid Class: 18 – 56 = Pick any one input test data
from 18 – 56
Invalid Class 1: <=17 = Pick any one input test data
less than or equal to 17
Invalid Class 2: >=57 = Pick any one input test data
greater than or equal to 57
We have one valid and two invalid conditions here.
Example 2
• Assume we have to test a field which accepts a Mobile Number of ten digits.