Software Testing Materials PDF
Definition:
Testing Methods:
Static Testing
Dynamic Testing
Testing Approaches:
Testing Levels:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
1. Functional Testing
2. Non-functional Testing
Detailed Explanation:
Requirement Phase:
Analysis Phase:
Once the requirement gathering and analysis is done, the next step is to
define and document the product requirements and get them approved by
the customer. This is done through the SRS (Software Requirement
Specification) document. The SRS consists of all the product requirements to
be designed and developed during the project life cycle. The key people
involved in this phase are the Project Manager, Business Analyst and senior
members of the team. The outcome of this phase is the Software
Requirement Specification.
Design Phase:
HLD – High Level Design – It gives the architecture of the software product
to be developed and is prepared by architects and senior developers.
The outcome of this phase is the High Level Design document and the Low
Level Design document, which serve as inputs to the next phase.
Development Phase:
Testing Phase:
When the software is ready, it is sent to the testing department, where the
test team tests it thoroughly for different defects. They test the software
either manually or using automated testing tools, depending on the process
defined in the STLC (Software Testing Life Cycle), and ensure that each and
every component of the software works fine. Once QA makes sure that the
software is error-free, it goes to the next stage, which is Implementation.
The outcome of this phase is the quality product and the testing artifacts.
Waterfall Model
Spiral
V Model
Prototype
Agile
The other related models are the Agile Model, Rapid Application
Development, Rational Unified Process, Hybrid Model, etc.
Software Testing Life Cycle (STLC) identifies what test activities to carry out
and when to accomplish those test activities. Even though testing differs
between organizations, there is a common testing life cycle.
The different phases of Software Testing Life Cycle are:
Every phase of STLC (Software Testing Life Cycle) has a definite Entry and
Exit Criteria.
Requirement Analysis:
Test Planning:
Test planning is the first step of the testing process. In this phase, typically
the Test Manager/Test Lead is involved in determining the effort and cost
estimates for the entire project. The Test Plan is prepared based on the
requirement analysis. Activities like resource planning, determining roles and
responsibilities, tool selection (if automation) and training requirements are
carried out in this phase.
Deliverables: Test Strategy, Test Plan, and Test Effort estimation document.
Test Design:
The test team starts the test case development activity in this phase. The
test team prepares test cases, test scripts (if automation) and test data.
Once the test cases are ready, they are reviewed by peer members or the
team lead. The test team also prepares the Requirement Traceability Matrix
(RTM), which traces the requirements to the test cases that are needed to
verify whether the requirements are fulfilled. The deliverables of this phase
are Test Cases, Test Scripts, Test Data and the Requirements Traceability
Matrix.
Test Environment Setup:
This phase can be started in parallel with the Test Design phase. The test
environment setup is done based on the hardware and software requirement
list. In some cases the test team may not be involved in this phase; the
development team or the customer provides the test environment.
Meanwhile, the test team should prepare smoke test cases to check the
readiness of the given test environment.
Test Execution:
The test team starts executing the test cases based on the planned test
cases. If a test case result is Pass/Fail, the result should be updated in the
test cases. A defect report should be prepared for failed test cases and
reported to the Development Team through a bug tracking tool (e.g., Quality
Center) for fixing the defects. Retesting is performed once a defect is fixed.
See the Bug Life Cycle section below.
Entry Criteria: Test Plan document, Test cases, Test data, Test Environment.
Test Closure:
The final stage, where we prepare the Test Closure Report and Test Metrics.
The testing team is called for a meeting to evaluate cycle completion criteria
based on test coverage, quality, time, cost, software and business
objectives. The test team analyses the test artifacts (such as test cases,
defect reports, etc.) to identify strategies that have to be implemented in
the future, which will help remove process bottlenecks in upcoming projects.
Test metrics and the Test Closure Report are prepared based on the above
criteria.
Entry Criteria: Test Case Execution report (make sure there are no open
high-severity defects), Defect report
Bug life cycle is also known as Defect life cycle. In the software development
process, the bug has a life cycle, and the bug should go through that life
cycle to be closed. The bug life cycle varies depending upon the tools (QC,
JIRA, etc.) used and the process followed in the organization.
A software bug can be defined as abnormal behavior of the software. The
life cycle starts when a defect is found and ends when the defect is closed,
after ensuring it is not reproducible.
The different states of a bug in the bug life cycle are as follows:
New: When a tester finds a new defect, he should provide a proper defect
document to the development team so they can reproduce and fix the
defect. In this state, the status of the defect posted by the tester is “New”.
Assigned: Defects in the status New are approved (if valid) and assigned to
the development team by the Test Lead/Project Lead/Project Manager. Once
the defect is assigned, the status of the bug changes to “Assigned”.
Open: The development team starts analyzing and working on the defect fix.
Fixed: When a developer makes the necessary code change and verifies the
change, the status of the bug is changed to “Fixed” and the bug is passed to
the testing team.
Test: If the status is “Test”, it means the defect is fixed and is ready to be
tested to verify whether it is fixed or not.
Verified: The tester re-tests the bug after it is fixed by the developer. If no
issue is detected in the software, the bug is fixed and the status assigned is
“Verified”.
Closed: After verifying the fix, if the bug no longer exists, the status of the
bug is set to “Closed”.
Reopen: If the defect remains the same after the retest, the tester posts the
defect using a defect retesting document and changes the status to
“Reopen”. The bug then goes through the life cycle again to be fixed.
Duplicate: If the defect is reported twice or corresponds to the same concept
as another bug, the status is changed to “Duplicate” by the development
team.
Deferred: In some cases, the Project Manager/Lead may set the bug status
to “Deferred”, for example if the bug is found at the end of a release and is
minor or not important enough to fix immediately. In such cases the status
is changed to “Deferred” and the bug is fixed in the next release.
This is all about the Bug Life Cycle / Defect Life Cycle. Some companies use
these bug IDs in the RTM to map them to the test cases.
Waterfall Model:
The Waterfall Model is a traditional model. It is also known as the Sequential
Design Process, often used in the SDLC, in which progress is seen as flowing
steadily downwards like a waterfall through the different phases such as
Requirement Gathering, Feasibility Study/Analysis, Design, Coding, Testing,
Installation and Maintenance. Each phase begins only once the goal of the
previous phase is completed. This methodology is preferred in projects where
quality is more important than schedule or cost. It is best suited for
short-term projects where the requirements will not change (e.g., a
Calculator or an Attendance Management system).
Advantages:
Requirements do not change, nor does the design and code, so we get a
stable product.
Every phase has specific deliverables. This gives the project manager and
clients high visibility into the progress of the project.
Disadvantages:
The customer may not be satisfied if the changes they require are not
incorporated in the product.
It is not suitable for long-term projects where requirements may change
from time to time.
The Waterfall Model can be used only when the requirements are very well
known and fixed.
Final words: Testing is not just finding bugs. As per the Waterfall Model,
testers are involved only almost at the end of the SDLC. Ages ago, the
mantra of testing was just to find bugs in the software. Things have changed
a lot now. There are some other SDLC models implemented. I will post about
the other models in detail, with their advantages and disadvantages, in
upcoming posts. It is up to your team to choose the SDLC model depending
on the project you are working on.
Spiral Model:
Companies that know they will release the next version of a product while
the current version is still in use prefer the Spiral Model, so they can develop
the product in an iterative nature. They can release one version of the
product to the end user and start developing the next version, which
includes new enhancements and improvements on the previous version
(based on the issues faced by users of the previous version). For example,
Microsoft released Windows 8, improved it based on user feedback, and
released the next version (Windows 8.1).
Evaluation Phase – client evaluation (client-side testing) to get feedback.
Advantages:
Disadvantages:
Applications:
Microsoft Office
V Model:
You might wonder why we use this V Model if it is the same as the Waterfall
Model. Let me mention why we need this Verification and Validation Model.
It overcomes the disadvantages of the Waterfall Model. In the Waterfall
Model, we have seen that testers are involved in the project only at the last
phase of the development process.
In the V Model, the test team is involved in the early phases of the SDLC.
Testing starts in the early stages of product development, which avoids the
downward flow of defects and in turn reduces a lot of rework. Both teams
(test and development) work in parallel: the test team works on activities
like preparing the test strategy, test plan and test cases/scripts while the
development team works on the SRS, Design and Coding.
Once the requirements are received, both the development and test teams
start their activities.
Deliverables are parallel in this model. While developers work on the SRS
(System Requirement Specification), testers work on the BRS (Business
Requirement Specification) and prepare the ATP (Acceptance Test Plan) and
ATC (Acceptance Test Cases), and so on.
Testers will be ready with all the required artifacts (such as the Test Plan
and Test Cases) by the time developers release the finished product. It
saves a lot of time.
Let's see how the development team and test team are involved in each
phase of the SDLC in the V Model.
1. Once the client sends the BRS, both teams (test and development) start
their activities. The developers translate the BRS into the SRS. The test team
reviews the BRS to find missing or wrong requirements and writes the
acceptance test plan and acceptance test cases.
2. In the next stage, the development team sends the SRS to the testing
team for review and the developers start building the HLD (High Level
Design) of the product. The test team reviews the SRS against the BRS and
writes the system test plan and test cases.
3. In the next stage, the development team starts building the LLD (Low
Level Design) of the product. The test team reviews the HLD (High Level
Design) and writes the integration test plan and integration test cases.
4. In the next stage, the development team starts coding the product. The
test team reviews the LLD and writes the functional test plan and functional
test cases.
5. In the next stage, the development team releases the build to the test
team once unit testing is done. The test team carries out functional testing,
integration testing, system testing and acceptance testing on the released
build step by step.
Advantages:
The test team will be ready with the test cases by the time developers
release the software, which in turn saves a lot of time.
Disadvantages:
The initial investment is higher because the test team is involved right from
the early stages.
Applications:
Automation Testing:
Selenium
LoadRunner
SilkTest
TestComplete
WinRunner
WATIR
Manual Testing:
Manual testing is the process of testing the software manually to find
defects. Testers should have the perspective of an end user and ensure all
the features are working as mentioned in the requirement document. In this
process, testers execute the test cases and generate the reports manually,
without using any automation tools.
Unit Testing
System Testing
Integration Testing
Acceptance Testing
Black Box Testing: Black Box Testing is a software testing method in which
testers evaluate the functionality of the software under test without looking
at the internal code structure. This can be applied to every level of software
testing such as Unit, Integration, System and Acceptance Testing.
White Box Testing: White Box Testing is also called Glass Box, Clear Box,
and Structural Testing. It is based on the application's internal code
structure. In white-box testing, an internal perspective of the system, as
well as programming skills, are used to design test cases. This testing is
usually done at the unit level.
10. Gray box testing: Gray box testing is a combination of White Box and
Black Box Testing. A tester who works on this type of testing needs access
to the design documents. This helps to create better test cases.
11. Unit testing: Unit Testing is also called Module Testing or Component
Testing. It is done to check whether an individual unit or module of the
source code is working properly. It is done by the developers in the
developer's environment.
15. System testing: Testing the fully integrated application to evaluate the
system's compliance with its specified requirements is called System
Testing, also known as End-to-End testing. It verifies the completed system
to ensure that the application works as intended.
17. Big bang Integration Testing: Combining all the modules at once and
verifying the functionality after completion of individual module testing.
Top-down and bottom-up integration testing are carried out using dummy
modules known as Stubs and Drivers, which stand in for missing components
to simulate data communication between modules.
18. Top-down Integration Testing: Testing takes place from top to bottom:
high-level modules are tested first, then low-level modules, and finally the
low-level modules are integrated with the high-level ones to ensure the
system works as intended. Stubs are used as temporary modules if a module
is not ready for integration testing.
21. Alpha testing: Alpha testing is done by the in-house developers (who
developed the software) and testers. Sometimes alpha testing is done by
the client or an outsourcing team in the presence of developers or testers.
22. Beta testing: Beta testing is done by a limited number of end users
before delivery. Usually, it is done at the client's place.
23. Gamma Testing: Gamma testing is done when the software is ready for
release with the specified requirements. It is done at the client's place,
directly, by skipping all the in-house testing activities.
25. Boundary value analysis testing: Boundary value analysis (BVA) is based
on testing the boundary values of valid and invalid partitions. The behavior
at the edge of each equivalence partition is more likely to be incorrect than
the behavior within the partition, so boundaries are an area where testing is
likely to yield defects. Every partition has its maximum and minimum values,
and these maximum and minimum values are the boundary values of a
partition. A boundary value for a valid partition is a valid boundary value.
Similarly, a boundary value for an invalid partition is an invalid boundary
value.
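As an illustration (not from the original article), here is a minimal Python sketch of boundary value analysis, assuming a hypothetical age field whose valid partition is 18-56 (the same range used in the Equivalence Partitioning example later in this material):

```python
# A minimal sketch of boundary value analysis for a hypothetical
# age field whose valid range is 18-56 inclusive.

def is_valid_age(age):
    """Hypothetical validation rule under test: valid range is 18-56."""
    return 18 <= age <= 56

# Boundary values: min-1, min, min+1 and max-1, max, max+1.
boundary_cases = [
    (17, False),  # just below the valid partition
    (18, True),   # minimum valid boundary
    (19, True),   # just above the minimum
    (55, True),   # just below the maximum
    (56, True),   # maximum valid boundary
    (57, False),  # just above the valid partition
]

for age, expected in boundary_cases:
    assert is_valid_age(age) == expected, f"Boundary check failed for {age}"
print("All boundary value checks passed")
```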
26. Decision tables testing: Decision Table is also known as the Cause-Effect
Table. This test technique is appropriate for functionalities which have
logical relationships between inputs (if-else logic). In the Decision Table
technique, we deal with combinations of inputs. To identify the test cases
with a decision table, we consider conditions and actions: we take conditions
as inputs and actions as outputs.
28. State transition testing: Using state transition testing, we pick test cases
from an application where we need to test different system transitions. We
can apply this when an application gives a different output for the same
input, depending on what has happened in the earlier state.
29. Exhaustive Testing: Testing all the functionalities using all valid and
invalid inputs and preconditions is known as Exhaustive testing.
30. Early Testing: Defects detected in early phases of SDLC are less
expensive to fix. So conducting early testing reduces the cost of fixing
defects.
31. Use case testing: Use case testing is carried out with the help of a use
case document. It is done to identify test scenarios for end-to-end testing.
32. Scenario testing: Scenario testing is a software testing technique which
is based on a scenario. It involves converting business requirements into
test scenarios for better understanding and to achieve end-to-end testing. A
well-designed scenario should be motivating, credible, complex, and its
outcome easy to evaluate.
36. Path testing: Path coverage testing is a white box testing technique that
validates that all the paths of the program are executed at least once.
37. Mutation testing: Mutation testing is a type of white box testing in which
certain statements in the source code are changed (mutated) to verify
whether the tests are able to find the errors.
38. Loop testing: Loop testing is a white box testing technique that validates
the different kinds of loops, such as simple loops, nested loops, concatenated
loops and unstructured loops.
41. Stress testing: It is to verify the behavior of the system once the load
increases more than its design expectations.
42. Soak testing: Running a system at high load for a prolonged period of
time to identify the performance problems is called Soak Testing.
49. Adhoc testing: Ad-hoc testing is quite the opposite of formal testing; it
is an informal testing type. In ad-hoc testing, testers test the application
randomly, without following any documents or test design techniques, and
without any test cases or business requirement document. This testing is
primarily performed when the testers' knowledge of the application under
test is very high.
50. Exploratory testing: Usually, this process is carried out by domain
experts. They perform testing just by exploring the functionalities of the
application, without prior knowledge of the requirements.
51. Retesting: Ensuring that the defects which were found and posted in the
earlier build are fixed in the current build. Say Build 1.0 was released; the
test team found some defects (Defect Id 1.0.1, 1.0.2) and posted them.
Build 1.1 was released, and testing defects 1.0.1 and 1.0.2 in this build is
retesting.
53. Smoke testing: Smoke Testing is done to make sure the build we
received from the development team is testable. It is also called the
“Day 0” check and is done at the build level. It helps avoid wasting testing
time on the whole application when the key features don't work or the key
bugs have not been fixed yet.
54. Sanity testing: Sanity Testing is done during the release phase to check
the main functionalities of the application without going deeper. It is also
called a subset of Regression testing and is done at the release level. At
times, due to release time constraints, rigorous regression testing can't be
done on the build; sanity testing covers that part by checking the main
functionalities.
70. Database testing: Database testing is done to validate that the data in
the UI matches the data stored in the database. It involves checking the
schema, tables, triggers, etc., of the database.
78. GUI Testing: Graphical User Interface Testing tests the interface
between the application and the end user. Testers are mainly concerned with
whether the appearance of elements such as fonts and colors conforms to
the design specifications.
79. API testing: API stands for Application Programming Interface. API
testing is a type of software testing that involves testing APIs using tools
like SoapUI and Postman.
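As a rough illustration, an API check can also be scripted directly. The sketch below uses Python's requests library against a hypothetical endpoint; the URL and expected payload are placeholders, not part of the original material:

```python
# A minimal API test sketch using the 'requests' library.
# The endpoint URL and expected fields are hypothetical.
import requests

def test_get_user():
    response = requests.get("https://api.example.com/users/1")  # hypothetical endpoint
    assert response.status_code == 200           # verify the HTTP status code
    body = response.json()
    assert body.get("id") == 1                   # verify the response payload
```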
80. Agile testing: Agile testing is a type of testing that involves following
principles of agile software development methodology. In this agile testing,
testing is conducted throughout the lifecycle of the continuously evolving
project instead of being confined to a particular phase.
83. Risk based testing: Identifying the modules or functionalities which are
most likely to cause failures and then testing those functionalities first.
85. Formal Testing: It is a process where the testers test the application by
having pre-planned procedures and proper documentation.
86. Pilot testing: Pilot testing is testing carried out under real-time
operating conditions by the company in order to gain the confidence of the
client.
95. UI testing: In UI testing, testers aim to test both GUI and Command Line
Interfaces (CLIs)
96. Destructive testing: Destructive testing is a testing technique that aims
to validate the robustness of the application by testing continuously until the
application breaks.
99. ETL testing: ETL (Extract, Transform and Load) testing involves
validating the data movement from source to destination, verifying the data
count in both source and destination, and verifying the data extraction,
transformation and table relations.
103. All pair testing: The all-pairs testing approach tests the application with
all possible discrete combinations of each pair of input parameters.
In the field of Software Testing, testers mainly concentrate on Black Box and
White Box Testing. Under Black Box testing, there are again different types
of testing. The major types are Functional testing and Non-functional
testing. As mentioned in the first paragraph of this article, performance
testing and the testing types related to it fall under Non-functional testing.
Performance Testing:
Capacity Testing:
Load Testing:
Load Testing verifies that the system/application can handle the expected
number of transactions, and verifies the system/application behaviour under
both normal and peak load conditions (number of users).
Volume Testing:
Stress Testing:
Stress Testing verifies the behaviour of the system once the load increases
beyond the system's design expectations. This testing addresses which
components fail first when we stress the system by applying load beyond
the design expectations, so that we can design a more robust system.
Soak Testing:
Soak Testing is also known as Endurance Testing. Running a system at high
load for a prolonged period of time to identify performance problems is
called Soak Testing. It makes sure the software can handle the expected
load over a long period of time.
Spike Testing:
Levels of Testing:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
UNIT TESTING:
Unit Testing is done to check whether the individual modules of the source
code are working properly, i.e. testing each and every unit of the application
separately by the developer in the developer's environment. It is also known
as Module Testing or Component Testing.
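For illustration, here is a minimal unit test sketch using Python's built-in unittest module; the add() function is a hypothetical unit under test:

```python
# A minimal unit test sketch with Python's built-in unittest module.
# add() is a hypothetical unit under test.
import unittest

def add(a, b):
    """Unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```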
INTEGRATION TESTING:
It is subdivided into the Top-Down Approach, Bottom-Up Approach and
Sandwich Approach (a combination of Top-Down and Bottom-Up). This
process is carried out by using dummy programs called Stubs and Drivers.
Stubs and Drivers do not implement the entire programming logic of the
software module; they just simulate data communication with the calling
module.
In Big Bang Integration Testing, the individual modules are not integrated
until all of them are ready. Then they are run together to check whether the
integrated system performs well. This type of testing has some
disadvantages: defects are found at a later stage, and it is difficult to find
out whether a defect arose in an interface or in a module.
In Top-Down Integration Testing, the high-level modules are integrated and
tested first, i.e. testing goes from the main module to the sub-modules.
Stubs are used as temporary modules if a module is not ready for
integration testing.
Stub: A dummy program that simulates a called (lower-level) module which
is not yet developed. Stubs are used in Top-Down integration testing.
Driver: A dummy program that simulates a calling (higher-level) module
which is not yet developed. Drivers are used in Bottom-Up integration
testing.
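To make the idea concrete, here is a small Python sketch (the module names are hypothetical) showing a stub standing in for an unfinished lower-level payment module and a driver standing in for a not-yet-written higher-level caller:

```python
# A sketch of a stub and a driver, assuming a hypothetical order module
# that depends on a payment module which is not yet ready.

def payment_stub(amount):
    """Stub: stands in for the unfinished lower-level payment module
    and returns a canned response."""
    return {"status": "approved", "amount": amount}

def place_order(amount, payment_service):
    """Module under test: calls the (stubbed) lower-level module."""
    result = payment_service(amount)
    return result["status"] == "approved"

def driver():
    """Driver: stands in for the not-yet-written higher-level caller and
    exercises the module under test."""
    assert place_order(100, payment_stub) is True
    print("Integration check with stub passed")

driver()
```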
SYSTEM TESTING:
It is black box testing. Testing the fully integrated application is also called
end-to-end scenario testing. It ensures that the software works on all
intended target systems, verifies every input in the application to check for
the desired outputs, and tests the user's experience with the application.
ACCEPTANCE TESTING:
Alpha Testing:
Alpha testing is mostly like performing usability testing and is done by the
in-house developers who developed the software. Sometimes this alpha
testing is done by the client or outsiders in the presence of developers or
testers.
Beta Testing:
Beta testing is done by a limited number of end users before delivery;
change requests are fixed if the users give feedback or report defects.
Gamma Testing:
Gamma testing is done when the software is ready for release with the
specified requirements; this testing is done directly, skipping all the
in-house testing activities.
Manual Testing:
Manual testing is the process of testing the software manually to find
defects. The tester should have the perspective of an end user and ensure
all the features are working as mentioned in the requirement document. In
this process, testers execute the test cases and generate the reports
manually, without using any automation tools.
Advantages:
Disadvantages:
Automation Testing:
Advantages:
It does not require human intervention; test scripts can be run unattended.
Disadvantages:
Not all tools support all kinds of testing, such as Windows, web, mobile and
performance/load testing.
If you find any other points we overlooked, just put them in the comments.
We will include them and keep this post “Manual Testing Vs Automation
Testing” updated.
The tester passes input data to check whether the actual output matches
the expected output, so it is also known as Input-Output Testing.
Equivalence Partitioning
Decision Table
State Transition
Decision Table: Decision Table is also known as the Cause-Effect Table. This
test technique is appropriate for functionalities which have logical
relationships between inputs (if-else logic). In the Decision Table technique,
we deal with combinations of inputs. To identify the test cases with a
decision table, we consider conditions and actions: we take conditions as
inputs and actions as outputs.
State Transition: Using state transition testing, we pick test cases from an
application where we need to test different system transitions. We can apply
this when an application gives a different output for the same input,
depending on what has happened in an earlier state.
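As an illustrative sketch, the following Python snippet models a hypothetical login screen that locks after three failed attempts; the same input produces different outputs depending on the earlier state, which is exactly what state transition testing targets:

```python
# A minimal state transition sketch, assuming a hypothetical login screen
# that locks the account after three consecutive failed attempts.

class LoginScreen:
    def __init__(self):
        self.failed_attempts = 0
        self.state = "active"

    def login(self, password):
        if self.state == "locked":
            return "locked"
        if password == "secret":          # hypothetical correct password
            self.failed_attempts = 0
            return "home"
        self.failed_attempts += 1
        if self.failed_attempts >= 3:
            self.state = "locked"
            return "locked"
        return "retry"

# Same input ("wrong") gives different outputs depending on prior state.
screen = LoginScreen()
assert screen.login("wrong") == "retry"
assert screen.login("wrong") == "retry"
assert screen.login("wrong") == "locked"
assert screen.login("secret") == "locked"  # locked even with the right password
```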
Statement Coverage
Branch Coverage
Path Coverage
Smoke Testing is done to make sure the build we received from the
development team is testable. It is also called the “Day 0” check and is done
at the build level.
It helps avoid wasting testing time on the whole application when the key
features don't work or the key bugs have not been fixed yet.
Sanity Testing is done during the release phase to check the main
functionalities of the application without going deeper. It is also called a
subset of Regression testing and is done at the release level.
For example: For the first release of a project, the development team
releases the build for testing and the test team tests it. Testing the build for
the very first time, to accept or reject it, is what we call Smoke Testing. If
the test team accepts the build, it goes for further testing. Imagine the build
has three modules, namely Login, Admin and Employee. The test team tests
the main functionalities of the application without going deeper. This is what
we call Sanity Testing.
Smoke Testing is done to make sure the build we received from the
development team is testable.
Sanity Testing is done during the release phase to check the main
functionalities of the application without going deeper.
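One common way to keep the two suites separate in practice is with test markers. The sketch below uses pytest with custom markers; the marker and test names are illustrative, and custom markers would normally be registered in pytest.ini:

```python
# A sketch of separating smoke checks from release-time sanity checks
# using pytest markers. Marker and test names are illustrative.
import pytest

@pytest.mark.smoke
def test_application_launches():
    assert True  # e.g. the build installs and the login page loads

@pytest.mark.sanity
def test_login_main_flow():
    assert True  # e.g. a valid user can log in after a bug-fix release

# Run only the smoke suite on a fresh build:
#   pytest -m smoke
# Run the sanity checks at release time:
#   pytest -m sanity
```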
Example: The “Remember password” option, which was available earlier,
should be removed from the login page.
Example: Imagine the Login button is not working on a login page and a
tester reports a bug stating that the login button is broken. Once the bug is
fixed by the developers, testers test it to make sure the Login button works
as per the expected result. Simultaneously, testers test the other
functionalities related to the login button.
Example: Loading the home page takes 5 seconds. Reducing the load time to
2 seconds
Selenium
QTP/UFT
SahiPro
Ranorex
TestComplete
Watir
What is Retesting?
Retesting: Ensuring that the defects which were found and posted in the
earlier build are fixed in the current build.
Retesting is running the previously failed test cases again on the new
software to verify whether the defects posted earlier are fixed or not.
Example: Say Build 1.0 was released. While testing Build 1.0, the test team
found some defects (for example, Defect Id 1.0.1 and Defect Id 1.0.2) and
posted them. The test team tests defects 1.0.1 and 1.0.2 in Build 1.1 (only
if these two defects are mentioned in the Release Note of Build 1.1) to make
sure the defects are fixed.
Process: As per the Bug Life Cycle, once a tester finds a bug, it is reported
to the Development Team with the status “New”. The Development Team
may accept or reject the bug. If they accept the bug, they fix it and release
it in the next build, and the status of the bug becomes “Ready For QA”. Now
the tester verifies the bug to find out whether it is resolved. This testing is
known as retesting. Retesting is planned testing: we use the same test
cases, with the same test data, that we used in the earlier build. If the bug
is not found, we change the status of the bug to “Fixed”; otherwise we
change the status to “Not Fixed” and send a Defect Retesting Document to
the development team.
When Do We Do Retesting:
Once the development team releases a new build, the test team has to test
the already posted bugs to make sure they are fixed.
At times, the development team rejects a few bugs raised by the testers and
marks the status of the bug as Not Reproducible. In this case, the testers
need to retest the same issue to let the developers know that the issue is
valid and reproducible.
To avoid this scenario, we need to write a good bug report; see the section
on writing a good bug report below.
At times, the client may request us to test again to gain confidence in the
quality of the product. In this case, the test team tests the product again.
A product should never be released after code modifications with just the
bug fixes retested; we need to do Regression Testing too.
REGRESSION TESTING:
RETESTING:
To ensure that the defects which were found and posted in the earlier build
are fixed in the current build.
Say Build 1.0 was released. The test team found some defects (Defect Id
1.0.1, 1.0.2) and posted them.
Build 1.1 was released; testing defects 1.0.1 and 1.0.2 in this build is
retesting.
Case 2: Login Page – Added “Stay signed in” checkbox (New feature)
In Case 1, the Login button is not working, so the tester reports a bug. Once
the bug is fixed, testers test it to make sure the Login button works as per
the expected result.
In Case 2, the tester tests the new feature to ensure that the new feature
(Stay signed in) works as intended.
Case 1 comes under Retesting: the tester retests the bug found in the
earlier build by using the steps to reproduce mentioned in the bug report.
Also in Case 1, the tester tests the other functionalities related to the login
button, which we call Regression Testing.
Case 2 comes under Regression Testing: the tester tests the new feature
(Stay signed in) and also tests the relevant functionalities. Testing the
relevant functionalities while testing new features comes under Regression
Testing.
Entry Criteria:
The prerequisites that must be achieved before commencing
the testing process.
Exit Criteria:
The conditions that must be met before testing should be
concluded.
Requirement Analysis:
A quality assurance professional has to verify the requirement
documents prior to starting phases like Planning, Design,
Environment Setup, Execution, Reporting and Closure. We
prepare test artifacts like the Test Strategy, Test Plan and
others based on the analysis of the requirement documents.
Test Planning:
The Test Manager/Test Lead prepares the Test Strategy and
Test Plan documents, and testers may get a chance to be
involved in the preparation process. It varies from company to
company.
Test Execution:
In this phase, testers are involved in executing the test cases,
reporting the defects and updating the requirement
traceability matrix.
Entry Criteria: Test Plan document, Test cases, Test data, Test
Environment
Test Case:
Test cases are a set of positive and negative executable steps
for a test scenario, with a set of pre-conditions, test data,
expected results, post-conditions and actual results.
Verification – as per IEEE-STD-610:
The process of evaluating software to determine whether the
products of a given development phase satisfy the conditions
imposed at the beginning of that phase.
Validation – as per IEEE-STD-610:
The process of evaluating software during or at the end of the
development process to determine whether it satisfies the
specified requirements.
Usually the test team starts writing the detailed Test Plan and
continues with the further phases of testing once the test
strategy is ready. In the Agile world, some companies do not
spend time on test plan preparation due to the minimal time
for each release, but they maintain a test strategy document.
Maintaining this document for the entire project helps to
mitigate unforeseen risks.
- Test levels
- Test types
- Roles and responsibilities
- Environment requirements
Test Levels:
This section lists the levels of testing that will be performed
during QA testing, such as unit testing, integration testing,
system testing and user acceptance testing. Testers are
responsible for integration testing, system testing and user
acceptance testing.
Test Types:
This section lists out the testing types that will be performed
during QA Testing.
Environment requirements:
This section lists out the hardware and software for the test
environment in order to commence the testing activities.
Testing tools:
This section will describe the testing tools necessary to conduct
the tests
Example: Name of Test Management Tool, Name of Bug
Tracking Tool, Name of Automation Tool
Test deliverables:
This section lists out the deliverables that need to produce
before, during and at the end of testing.
Testing metrics:
This section describes the metrics that should be used in the
project to analyze the project status.
Test Summary:
This section lists what kind of test summary reports will be
produced along with their frequency. Test summary reports
may be generated on a daily, weekly or monthly basis,
depending on how critical the project is.
Example: ProjectName_0001
References:
This section is to specify all the list of documents that support
the test plan which you are currently creating.
Introduction:
Introduction or summary includes the purpose and scope of
the project
Test Items:
A list of test items which will be tested
Approach:
The overall strategy of how testing will be performed. It
contains details such as Methodology, Test types, Test
techniques etc.,
Pass/Fail Criteria:
In this section, we specify the criteria that will be used to
determine pass or fail percentage of test items.
Suspension Criteria:
In this section, we specify when to stop the testing.
Example: If any of the major functionalities is not functional,
or the system experiences login issues, then testing should be
suspended.
Test Deliverables:
List of documents need to be delivered at each phase of
testing life cycle. The list of all test artifacts.
Testing Tasks:
In this section, we specify the list of testing tasks we need to
complete in the current project.
Environmental Needs:
List of hardware, software and any other tools that are needed
for a test environment.
Responsibilities:
We specify the roles and responsibilities for each test task.
Schedule:
Complete details on when each task should start and finish
and how much time it should take.
Approvals:
Who should sign off and approve the testing project
We usually write test cases for the registration page of every
application we test. Every registration page should have the
following elements.
- User Name
- First Name
- Last Name
- Password
- Confirm Password
- Email Id
- Phone number
- Date of birth
- Gender
- Location
- Terms of use
- Submit
- Login (If you already have an account)
Test Scenarios of a Registration Form:
1. Verify that the Registration form contains Username, First
Name, Last Name, Password, Confirm Password, Email Id,
Phone number, Date of birth, Gender, Location, Terms of
use, Submit, Login (If you already have an account)
2. Verify that tab functionality is working properly or not
3. Verify that Enter/Tab key works as a substitute for the
Submit button
4. Verify that all the fields such as Username, First Name,
Last Name, Password and other fields have a valid
placeholder
5. Verify that the labels float upward when the text field is in
focus or filled (In case of floating label)
6. Verify that all the required/mandatory fields are marked
with * against the field
7. Verify that clicking on submit button after entering all the
mandatory fields, submits the data to the server
8. Verify that system generates a validation message when
clicking on submit button without filling all the mandatory
fields.
9. Verify that entering blank spaces on mandatory fields lead
to validation error
10. Verify that clicking on submit button by leaving optional
fields, submits the data to the server without any
validation error
11. Verify the case sensitivity of the Username (usually the
Username field is not case sensitive – ‘rajkumar’ &
‘RAJKUMAR’ act the same)
12. Verify that system generates a validation message when
entering existing username
13. Verify the character limit in all the fields (mainly
username and password) based on the business requirement
14. Verify that the username validation works as per the
business requirement (in some applications, the username
should not allow numeric and special characters)
15. Verify that the validation of all the fields is as per the
business requirement
16. Verify that the date of birth field does not allow dates
greater than the current date (some applications have an
age limit of 18; in that case you have to validate whether
the age is greater than or equal to 18)
17. Verify the validation of the email field by entering an
incorrect email id
18. Verify the validation of numeric fields by entering
alphabets and special characters
19. Verify that leading and trailing spaces are trimmed after
clicking on submit button
20. Verify that the “terms and conditions” checkbox is
unselected by default (depends on business logic, it may
be selected or unselected)
21. Verify that the validation message is displayed when
clicking on submit button without selecting “terms and
conditions” checkbox
22. Verify that the password is in encrypted form when
entered
23. Verify whether the password and confirm password are
the same (a small automation sketch of scenarios 9 and
23 follows this list)
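As an illustration of how such scenarios can be automated, here is a minimal Python sketch covering scenarios 9 and 23 above. The validate_registration() helper is hypothetical, standing in for whatever validation logic the application under test exposes:

```python
# A sketch of automating scenarios 9 (blank mandatory fields fail) and
# 23 (password and confirm password must match).
# validate_registration() is a hypothetical stand-in for the app's logic.

def validate_registration(form):
    errors = []
    for field in ("username", "password", "confirm_password"):
        if not form.get(field, "").strip():          # scenario 9
            errors.append(f"{field} is mandatory")
    if form.get("password") != form.get("confirm_password"):
        errors.append("passwords do not match")       # scenario 23
    return errors

assert validate_registration({"username": "rajkumar",
                              "password": "s3cret",
                              "confirm_password": "s3cret"}) == []
assert "passwords do not match" in validate_registration(
    {"username": "rajkumar", "password": "a", "confirm_password": "b"})
assert "username is mandatory" in validate_registration(
    {"username": "   ", "password": "a", "confirm_password": "a"})
```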
Writing test cases for an application takes a little practice. A
well-written test case should allow any tester to understand
and execute the tests, make the testing process smoother
and save a lot of time in the long run. Earlier we posted a
video on How To Write Test Cases. I am concluding this post
“Test Scenarios Registration form / Test Scenarios of Signup
form”.
See the difference between Error, Bug, Defect and Failure below.
Components of Bug Report Template:
Let's discuss the main fields of a defect report; in the next
post, we will learn how to write a good bug report.
Categories of Priority:
- High
- Medium
- Low
Severity: Severity talks about the impact of the bug on the
customer's business. Usually, the severity of the bug is set by
the managers. Sometimes testers choose the severity of the
bug, but in most cases it is selected by Managers/Leads.
Categories of Severity:
- Blocker
- Critical
- Major
- Minor
- Trivial
Status: Specify the status of the bug. If you just found a bug
and about to post it then the status will be “New”. In the
course of bug fixing, the status of the bug will change.
Defect Close Date: The ‘Defect Close Date’ is the date which
needs to be updated once you ensure that the defect is not
reproducible.
Software Test Metrics:
Before starting with what Software Test Metrics are and their
types, I would like to start with a famous quote about metrics.
1. Process metrics
2. Product metrics
Process Metrics:
Software test metrics used in the test preparation and test
execution phases of the STLC.
Formula:
Test Case Preparation Productivity = (No. of test cases) / (Effort spent for
test case preparation)
E.g.:
No. of Test cases = 240
Formula:
Test Execution Productivity = (No. of test cases executed) / (Effort spent for
execution of test cases)
E.g.:
No of Test cases executed = 180
Formula:
Test Execution Coverage = (Total no. of test cases executed / Total no. of
test cases planned to execute) * 100
E.g.:
Total no. of test cases planned to execute = 240
Formula:
Test Cases Pass = (Total no. of test cases passed) / (Total no. of test cases
executed) * 100
E.g.:
Test Cases Pass = (80/90)*100 = 88.9 ≈ 89%
Test Cases Failed:
It measures the percentage of test cases that failed.
Formula:
Test Cases Failed = (Total no. of test cases failed) / (Total no. of test cases
executed) * 100
E.g.:
Test Cases Failed = (10/90)*100 = 11.1 ≈ 11%
Formula:
Test Cases Blocked = (Total no. of test cases blocked) / (Total no. of test
cases executed) * 100
E.g.:
Test Cases Blocked = (5/90)*100 = 5.6 ≈ 6%
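As a quick illustration, these process metrics can be computed in a few lines of Python. The counts reuse the figures from the examples above where given (90 executed, 80 passed, 10 failed, 5 blocked, 240 planned); treat them as illustrative:

```python
# A sketch computing the process metrics above with illustrative counts.
executed, passed, failed, blocked, planned = 90, 80, 10, 5, 240

test_execution_coverage = executed / planned * 100
test_cases_pass = passed / executed * 100
test_cases_failed = failed / executed * 100
test_cases_blocked = blocked / executed * 100

print(f"Execution coverage: {test_execution_coverage:.1f}%")  # 37.5%
print(f"Pass rate: {test_cases_pass:.1f}%")                   # 88.9%
print(f"Fail rate: {test_cases_failed:.1f}%")                 # 11.1%
print(f"Blocked rate: {test_cases_blocked:.1f}%")             # 5.6%
```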
Product metric:
Software test metrics used in the defect analysis phase of the
STLC.
Formula:
Error Discovery Rate = (Total number of defects found / Total no. of test
cases executed) * 100
E.g.:
Total no. of test cases executed = 240
Defect Density:
It is defined as the ratio of defects to requirements.
Formula:
Defect Density = (No. of defects) / (Actual size)
E.g.:
Actual Size = 10
Defect Leakage:
It is used to review the efficiency of the testing process before
UAT.
Formula:
Defect Leakage = (Total no. of defects found in UAT) / (Total no. of defects
found before UAT) * 100
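Similarly, the product metrics can be sketched in Python. The Actual Size of 10 comes from the example above; the defect counts are placeholders:

```python
# A sketch of the product metrics with partly hypothetical counts.
defects_found = 30          # hypothetical total defects found
actual_size = 10            # size figure from the example above

defect_density = defects_found / actual_size
print(f"Defect density: {defect_density:.1f} defects per unit size")  # 3.0

defects_before_uat = 25     # hypothetical
defects_in_uat = 5          # hypothetical
defect_leakage = defects_in_uat / defects_before_uat * 100
print(f"Defect leakage: {defect_leakage:.1f}%")  # 20.0%
```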
Requirements Traceability Matrix (RTM)
Requirements Traceability Matrix (RTM) is used to trace
the requirements to the tests that are needed to verify
whether the requirements are fulfilled.
The Requirement Traceability Matrix is also known as a
Traceability Matrix or Cross Reference Matrix.
Like all other test artifacts, the RTM varies between
organizations. Most organizations use just the Requirement
IDs and Test Case IDs in the RTM. It is possible to add other
fields such as Requirement Description, Test Phase, Test Case
Result, Document Owner, etc. It is necessary to update the
RTM whenever there is a change in a requirement.
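A minimal illustration of an RTM as a data structure (the requirement and test case IDs are hypothetical): mapping each requirement ID to the test cases that cover it makes uncovered requirements easy to flag:

```python
# A minimal RTM sketch: requirement IDs mapped to the test case IDs
# that cover them. All IDs are hypothetical.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # no coverage yet
}

# Flag requirements that have no test case tracing to them.
uncovered = [req for req, tcs in rtm.items() if not tcs]
print("Requirements without coverage:", uncovered)   # ['REQ-003']
```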
Test Deliverables
Test Deliverables are the test artifacts which are given to the
stakeholders of a software project during the SDLC (Software
Development Life Cycle).
A software project which follows the SDLC undergoes different
phases before being delivered to the customer. In this process
there are deliverables in every phase. Some deliverables are
provided before the testing phase commences, some are
provided during the testing phase, and the rest after the
testing phase is completed.
Note: Both Defect and Bug are issues in an application, but the
phase of the SDLC in which they are found makes the overall
difference.
What is a defect?
The variation between the actual results and expected results
is known as defect.
What is an error?
We can't compile or run a program due to a coding mistake in
the program. If a developer is unable to successfully compile
or run a program, they call it an error.
What is a failure?
Once the product is deployed and customers find any issues,
they call the product a failure. After release, if an end user
finds an issue, then that particular issue is called a failure.
Points to know:
Writing a good bug report is a skill every tester should have.
You have to give all the necessary details to the Dev Team to
get your issue fixed.
If you are sure that the bug exists, then ascertain whether the
same bug was already posted by someone else. Use some
keywords related to your bug and search in the Defect
Tracking Tool. If you do not find an issue similar to the bug
you found, then you can start writing the bug report.
Hold on!!
Two-Tier Architecture:
A two-tier application is also known as a Client-Server application.
The Two-tier architecture is divided into two parts:
Three-Tier Architecture:
A three-tier application is also known as a Web-Based application.
Testing all the functionalities using all valid and invalid inputs
and preconditions is known as Exhaustive testing.
Why it’s impossible to achieve Exhaustive Testing?
3. Early Testing:
Defects detected in early phases of SDLC are less expensive to
fix. So conducting early testing reduces the cost of fixing
defects.
4. Defect Clustering:
Defect Clustering in software testing means that a small
number of modules or functionalities contain most of the bugs
or have the most operational failures.
5. Pesticide Paradox:
The Pesticide Paradox in software testing means that if the
same test cases are repeated again and again, eventually
those test cases will no longer find new bugs. To overcome
the Pesticide Paradox, it is necessary to review the test cases
regularly and add or update them to find more defects.
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Decision Table
4. State Transition
5. Exploratory Testing
6. Error Guessing
Equivalence Partitioning:
It is also known as Equivalence Class Partitioning (ECP).
Decision Table:
Decision Table is also known as the Cause-Effect Table. This
test technique is appropriate for functionalities which have
logical relationships between inputs (if-else logic). In the
Decision Table technique, we deal with combinations of inputs.
To identify the test cases with a decision table, we consider
conditions and actions: we take conditions as inputs and
actions as outputs.
Exploratory Testing:
Usually this process is carried out by domain experts. They
perform testing just by exploring the functionalities of the
application, without prior knowledge of the requirements.
While using this technique, testers can explore and learn the
system, and high-severity bugs are found very quickly in this
type of testing.
Error Guessing:
Error guessing is a testing technique used to find bugs in a
software application based on the tester's prior experience. In
error guessing we don't follow any specific rules.
Equivalence Partitioning
Test Case Design Technique
Equivalence Partitioning is also known as Equivalence Class
Partitioning. In equivalence partitioning, inputs to the software
or system are divided into groups that are expected to exhibit
similar behavior, so they are likely to be processed in the same
way. We then select one input from each group to design the
test cases.
Every condition in a particular partition (group) works the
same as the others. If a condition in a partition is valid, the
other conditions are valid too; if a condition in a partition is
invalid, the other conditions are invalid too.
Valid Input: 18 – 56
Invalid Class 1: <= 17 – pick any one input test data less than
or equal to 17
Invalid Class 2: >= 57 – pick any one input test data greater
than or equal to 57
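Here is a minimal Python sketch of this example (the is_valid_age() rule is a hypothetical stand-in for the application's validation): one representative value is picked from each partition:

```python
# A sketch of equivalence partitioning for the 18-56 age example above.
def is_valid_age(age):
    """Hypothetical rule under test: valid partition is 18-56."""
    return 18 <= age <= 56

partitions = [
    (10, False),  # one value from invalid class 1 (<= 17)
    (30, True),   # one value from the valid class (18-56)
    (60, False),  # one value from invalid class 2 (>= 57)
]

for age, expected in partitions:
    assert is_valid_age(age) == expected
print("One representative per partition checked")
```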
In the first column I took all the conditions and actions related
to the requirement. All the other columns represent test
cases.
T = True, F = False, X = Not possible
Here the conditions to allow a user to log in are “Enter a valid
user name” and “Enter a valid password”.
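The same table can be expressed as a small parametrized check. In the Python sketch below, can_login() is a hypothetical stand-in for the login rule; conditions are inputs and the action (allow login) is the output:

```python
# A sketch of the login decision table above as parametrized checks.
# can_login() is a hypothetical stand-in for the application's rule.
def can_login(valid_username, valid_password):
    return valid_username and valid_password

decision_table = [
    # (valid username, valid password, expected action)
    (True,  True,  True),   # TC1: both valid -> login allowed
    (True,  False, False),  # TC2: wrong password -> login denied
    (False, True,  False),  # TC3: wrong username -> login denied
    (False, False, False),  # TC4: both invalid -> login denied
]

for user_ok, pass_ok, expected in decision_table:
    assert can_login(user_ok, pass_ok) == expected
print("All decision table combinations verified")
```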
What is Severity?
Bug/defect severity can be defined as the impact of the bug
on the customer's business. It can be Critical, Major or Minor.
In simple words, it describes how much effect a particular
defect has on the system.
Critical:
Major:
Minor:
Trivial:
What is Priority?
Defect priority can be defined as how soon the defect should
be fixed. It gives the order in which a defect should be
resolved. Developers decide which defect they should take up
next based on the priority. It can be High, Medium or Low.
High:
Medium:
Low:
For example,
1. Spelling mistake of a company name on the homepage
2. Company logo or tagline issues
PLAN:
Plan a change (either to solve a problem or to improve some
areas) and decide what goal to achieve.
DO:
To design or revise the business requirement as planned.
Here we implement the plan (putting it into action) and test
its performance.
CHECK:
Evaluate the results to make sure we reached the goals as
planned.
ACT:
If the changes are not as planned then continue the cycle to
achieve the goal with a different plan.
Scrum Team:
Product Backlog:
The Product Backlog is a repository where the list of Product
Backlog Items is stored and maintained by the Product Owner.
The Product Backlog Items are prioritized by the Product
Owner as high or low, and the Product Owner can also
re-prioritize the product backlog constantly.
Sprint Backlog:
The Sprint Backlog is the list of Product Backlog Items that the
Scrum Team commits to deliver in a particular Sprint, along
with the tasks needed to deliver them.
Sprint Retrospective Meeting:
The Scrum Team meets again after the Sprint Review Meeting
and documents the lessons learned in the earlier sprint, such
as “What went well” and “What could be improved”. It helps
the Scrum Team avoid those mistakes in the next Sprints.
Conclusion:
In the Agile Scrum Methodology, all the members of a Scrum
Team gather, finalize the Product Backlog Items (User
Stories) for a particular Sprint and commit to a timeline to
release the product. Based on the Daily Scrum meetings, the
Scrum Development Team develops and tests the product and
presents it to the Product Owner at the Sprint Review Meeting.
If the Product Owner accepts all the developed User Stories,
the Sprint is completed and the Scrum Team takes up the next
Sprint in the same manner.
8. What is a Sprint?
A Sprint is a fixed time-box (usually two to four weeks) during
which the Scrum Team develops and delivers a potentially
shippable product increment.
White Box Testing is also called Glass Box, Clear Box, and
Structural Testing. It is based on the application's internal
code structure. In white-box testing, an internal perspective of
the system, as well as programming skills, are used to design
test cases. This testing is usually done at the unit level.
A Test Suite is a collection of test cases that are intended to
test an application.
Test data is the data used by the testers to run the test cases.
While running the test cases, testers need to enter some input
data, and to do so they prepare test data. It can be prepared
manually and also by using tools.
1. Test Strategy
2. Test Plan
3. Effort Estimation Report
4. Test Scenarios
5. Test Cases/Scripts
6. Test Data
7. Requirement Traceability Matrix (RTM)
8. Defect Report/Bug Report
9. Test Execution Report
10. Graphs and Metrics
11. Test summary report
12. Test incident report
13. Test closure report
14. Release Note
15. Installation/configuration guide
16. User guide
17. Test status report
18. Weekly status report (Project manager to client)
To ensure that the defects which were found and posted in the
earlier build are fixed in the current build. Say Build 1.0 was
released; the test team found some defects (Defect Id 1.0.1,
1.0.2) and posted them. Build 1.1 was released, and testing
defects 1.0.1 and 1.0.2 in this build is retesting.
Testing all the functionalities using all valid and invalid inputs
and preconditions is known as Exhaustive testing.
Low Priority & Low Severity: FAQ page takes a long time to
load
Standalone application:
Client-Server Application:
Web Application:
Web applications follow a three-tier or n-tier architecture. The
presentation layer is in the client system, the business layer is
in an application server and the database layer is in a database
server. It works both on an intranet and on the Internet.
1. Waterfall
2. Spiral
3. V Model
4. Prototype
5. Agile
97. What is STLC?