
Software Testing Basics and Principles

Software testing is a process that evaluates a system to ensure it meets specified requirements and is defect-free. It is crucial for delivering quality products, identifying bugs early, and ensuring customer satisfaction. The document outlines the principles of software testing, roles of testers, differences between manual and automation testing, and various software development life cycle models.

BASIC ASPECTS OF SOFTWARE TESTING

Software Testing
Testing is the process of evaluating a system, by manual or automated means, to verify that it satisfies specified requirements and to identify differences between expected and actual results.
It can also be stated as the process of verifying and validating that a software program, application, or product meets the requirements that guided its design and development.
Software testing is a method to check whether the actual software product matches the expected requirements and to ensure that the product is defect-free. The purpose of software testing is to identify errors, gaps, or missing requirements in contrast to the actual requirements.

Importance of software testing


Software testing is very important for ensuring the quality of the product. A quality product delivered to customers helps in gaining their confidence. Testing is important because any bugs or errors in the software can be identified early and fixed before delivery of the product. A properly tested software product ensures reliability, security, and high performance, which in turn results in time savings, cost effectiveness, and customer satisfaction.

Seven principles of software testing
1. Testing shows the presence of bugs

Testing an application can only reveal that one or more defects exist in the application; however, testing alone cannot prove that the application is error-free. Therefore, it is important to design test cases that find as many defects as possible.

2. Exhaustive testing is impossible


Testing everything, including all combinations of inputs and preconditions, is not possible.
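A rough back-of-the-envelope count makes the point concrete. The example below assumes a form with just three 32-bit integer fields; the numbers are illustrative:

```python
# Assumed example: a form with three 32-bit integer fields.
field_values = 2 ** 32            # possible values of one 32-bit input
combinations = field_values ** 3  # all input combinations for three fields

# Even at an (optimistic) one million test executions per second:
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations} combinations, roughly {years:.1e} years to run")
```

Running every combination would take on the order of quadrillions of years, which is why testing relies on risk and priorities rather than exhaustion.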

3. Early testing: In the software development life cycle testing activities should
start as early as possible and should be focused on defined objectives.

4. Defect clustering: A small number of modules usually contains most of the defects discovered during pre-release testing, or shows the most operational failures.
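Defect clustering can be seen by tallying reported defects per module. The sketch below uses a hypothetical defect log (module names and counts are made up):

```python
# A minimal sketch of spotting defect clustering: tally defects per module
# from a hypothetical defect log and see which module holds most of them.
from collections import Counter

# Hypothetical defect log: the module each reported defect was found in.
defect_log = ["payments", "payments", "login", "payments", "search",
              "payments", "login", "payments", "payments", "reports"]

per_module = Counter(defect_log)
worst_module, worst_count = per_module.most_common(1)[0]
share = worst_count / len(defect_log)
print(f"{worst_module}: {worst_count} of {len(defect_log)} defects ({share:.0%})")
```

Here a single module accounts for more than half of the defects, which is the pattern this principle predicts.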


5. Pesticide paradox: If the same kinds of tests are repeated again and again,
eventually the same set of test cases will no longer be able to find any new bugs.
To overcome this “Pesticide Paradox”, test cases need to be reviewed regularly, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
6. Testing is context dependent: Testing is basically context dependent; different kinds of systems are tested differently. For example, the way you test an e-commerce site will be different from the way you test a banking application. Not all developed software is identical. You might use different approaches, methodologies, techniques, and types of testing depending on the application type.
7. Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user’s needs and expectations, then finding and fixing defects does not help.

Roles and responsibilities of a tester

Test lead/manager: A test lead is responsible for:

• Defining the testing activities for subordinates (testers or test engineers).
• All test planning responsibilities.
• Checking that the team has all the necessary resources to execute the testing activities.
• Checking that testing is going hand in hand with software development in all phases.
• Preparing the status report of testing activities.
• Handling required interactions with customers.
• Updating the project manager regularly about the progress of testing activities.

Test engineers/QA testers/QC testers are responsible for:

• Reading all the documents and understanding what needs to be tested.
• Deciding, based on the information gathered above, how it is to be tested.
• Informing the test lead about the resources that will be required for testing.
• Developing test cases and prioritizing testing activities.
• Executing all the test cases, reporting defects, and defining severity and priority for each defect.
• Carrying out regression testing every time changes are made to the code to fix defects.
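The severity and priority fields mentioned above can be pictured as a small defect record. The sketch below is illustrative; the field names are not any specific bug-tracking tool's schema:

```python
# A minimal sketch of a defect report carrying severity and priority;
# field names are assumptions for illustration, not a tool's schema.
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str  # impact on the system: e.g. critical, major, minor
    priority: str  # urgency of fixing: e.g. high, medium, low

defect = Defect(
    defect_id="D-101",
    summary="Login fails for a valid registered user",
    severity="critical",
    priority="high",
)
print(defect.defect_id, "-", defect.severity, "/", defect.priority)
```

Severity describes the technical impact, while priority describes how urgently the fix is scheduled; the two need not match (a typo on the home page is low severity but may be high priority).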

Difference between manual testing and automation

1. Manual Testing is done manually; Automation Testing is done with the help of automated tools.
2. In Manual Testing, the phases of the STLC (test planning, test deployment, test execution, result analysis, bug tracking and reporting) are carried out by human effort; in Automation Testing they are supported by open-source and commercial tools such as Selenium, JMeter, QTP, LoadRunner, and WinRunner.
3. Manual Testing is the starting point of testing; Automation Testing is a continuation of Manual Testing and cannot start without it.
4. In Manual Testing, testers can do random (ad-hoc) testing to find bugs; in Automation Testing, tests always run through scripts.
5. Manual Testing finds more bugs through error guessing; Automation Testing targets the repetitive functionalities of the application.
6. Manual Testing takes a lot of time; Automation Testing takes less time.
7. Manual tests run sequentially; automated tests can run on different machines at the same time.
8. Regression testing is tough in Manual Testing; it is easy in Automation Testing with tools.
9. Manual Testing is not expensive; Automation Testing is expensive.
10. More testers are required in Manual Testing because test cases are executed manually; fewer testers are required in Automation Testing because test cases are executed by tools.
11. Manual Testing gives lower accuracy; Automation Testing gives higher accuracy.
12. Manual Testing is considered lower quality; Automation Testing is considered higher quality.
13. Batch testing is not possible in Manual Testing; multiple types of batch testing are possible in Automation Testing.
14. Manual Testing is considered less reliable; Automation Testing is considered more reliable.
15. No programming is needed for Manual Testing; programming is a must for Automation Testing.
16. Manual Testing is done without tools; Automation Testing is always done using tools.
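The contrasts above can be made concrete with a small scripted suite: the same checks run identically every time, which is what makes automated regression testing cheap. The function under test (`add_to_cart`) is a stand-in, not a real API:

```python
# A minimal sketch of scripted, repeatable checks; add_to_cart is an
# assumed example function, not a real shopping-cart API.
def add_to_cart(cart, item, qty):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart

def run_regression_suite():
    results = {}
    # Test 1: adding a new item creates an entry with the given quantity.
    results["add_new_item"] = add_to_cart({}, "book", 2) == {"book": 2}
    # Test 2: adding to an existing item accumulates the quantity.
    results["accumulate"] = add_to_cart({"book": 1}, "book", 2) == {"book": 3}
    # Test 3: an invalid quantity is rejected.
    try:
        add_to_cart({}, "book", 0)
        results["reject_zero"] = False
    except ValueError:
        results["reject_zero"] = True
    return results

print(run_regression_suite())
```

Re-running this suite after every code change is exactly the repetitive, high-accuracy work the table attributes to automation.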

Error, defect, bug, failure

“A mistake in coding is called an Error; an error found by a tester is called a Defect; a defect accepted by the development team is called a Bug; when the build does not meet the requirements, it is a Failure.”

Error
An error is a mistake, misconception, or misunderstanding on the part of a
software developer.

Categories of defect

Wrong
When requirements are implemented not in the right way. This defect is a
variance from the given specification. It is Wrong!
Missing
A requirement of the customer that was not fulfilled.
Extra
A requirement incorporated into the product that was not given by the end
customer.

BUG
A bug is the result of a coding error: an error found in the development environment before the product is shipped to the customer.

FAILURE
- A failure is the inability of a software system or component to perform its required functions within specified performance requirements. When a defect reaches the end customer, it is called a failure.
- A failure is a deviation of the software from its expected delivery or service.

Causes of bugs
• Errors in requirements
• Errors in design
• Programming errors
• Software complexity
• Changing requirements
• Time pressure

Quality

Quality is much more than the absence of defects; it is what allows us to meet customer expectations. Quality requires controlled process improvement, which earns our organizations customer loyalty.

Quality Assurance
• It is a planned and systematic pattern of all actions necessary to provide adequate confidence that the product conforms to established technical requirements, i.e., verifying each and every process. It is process-oriented defect prevention.
• It is defined as an activity that ensures the approaches, techniques, methods, and processes designed for the project are implemented correctly. It recognizes defects in the process.

Quality Control
• The purpose of quality control is to identify defects and have them corrected so that defect-free products are produced. It is a process of defect detection, and it is mainly done by testers.
• QC ensures that the approaches, techniques, methods, and processes designed for the project are followed correctly. QC activities monitor and verify that the project deliverables meet the defined quality standards.

Difference b/w QA and QC

Quality Assurance (QA):
• It is performed before Quality Control.
• QA aims to prevent defects.
• It is a method to manage quality (verification): it does not involve executing the program.
• It is a preventive technique and a proactive measure.
• QA is involved in the full software development life cycle.
• QA defines standards and methodologies in order to meet the customer requirements.

Quality Control (QC):
• It is performed only after QA activity is done.
• QC aims to identify and fix defects.
• It is a method to verify quality (validation): it always involves executing the program.
• It is a corrective technique and a reactive measure.
• QC is involved in the full software testing life cycle.
• QC confirms that the standards are followed while working on the product.

Verification
The process of evaluating software to determine whether the products of a given
development phase satisfy the conditions imposed at the start of that phase.
Verification will help to determine whether the software is of high quality, but it
will not ensure that the system is useful. Verification is concerned with whether
the system is well-engineered and error-free.

Validation
The process of evaluating software during or at the end of the development
process to determine whether it satisfies specified requirements.
Validation is the process of evaluating the final product to check whether the
software meets the customer expectations and requirements.

SOFTWARE DEVELOPMENT LIFE CYCLE(SDLC)

There are following six phases in every Software development life cycle model:

1. Requirement gathering and analysis


2. Design
3. Implementation or coding
4. Testing
5. Deployment
6. Maintenance

1) Requirement gathering and analysis

Business requirements are gathered in this phase. Meetings with managers, stakeholders, and users are held in order to determine requirements such as: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during the requirements gathering phase.
Finally, a Requirement Specification document is created which serves the
purpose of guideline for the next phase of the model. The testing team follows
the Software Testing Life Cycle and starts the Test Planning phase after the
requirements analysis is completed.
For example, a customer wants an application which involves money transactions. In this case, the requirements have to be clear, such as what kinds of transactions will be done, how they will be done, in which currency, etc.
Once the requirement gathering is done, an analysis is done to check the
feasibility of the development of a product. In case of any ambiguity, a call is set
up for further discussion.
Once the requirement is clearly understood, the SRS (Software Requirement
Specification) document is created. This document should be thoroughly
understood by the developers and also should be reviewed by the customer for
future reference.

2) Design
The Design step of the SDLC process can begin when the customer has approved (signed off) the Functional Requirements Document.
In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in
specifying hardware and system requirements and also helps in defining overall
system architecture.
HLD (High-Level Design): it gives the architecture of the software product to be developed and is done by architects and senior developers. It covers the system architecture and database design, and describes the relations between the various modules and functions of the system. Data flow, flow charts, and data structures are covered under HLD.
LLD (Low-Level Design): it is done by senior developers. It describes how each feature of the product should work and how every component should work. It defines the actual logic for each component of the system. Class diagrams with all the methods and the relations between classes come under LLD. Program specs are covered under LLD.
The outcome from this phase is the high-level design document and the low-level design document.
In this phase the testers come up with the test strategy, where they specify what to test and how to test it.
3) Implementation / Coding:
On receiving the system design documents, the work is divided into modules/units and actual coding starts. Developers build working software from the approved design. This is the longest phase of the software development life cycle.
4) Testing:
After the code is developed, it is tested against the requirements to make sure that the product actually solves the needs gathered during the requirements phase. During this phase all types of functional testing (unit testing, integration testing, system testing, acceptance testing) are done, as well as non-functional testing.
5) Deployment:
After successful testing the product is delivered / deployed to the customer for
their use.
6) Maintenance:
Once the customers start using the developed system, actual problems come up and need to be solved from time to time. This process, in which care is taken of the developed product, is known as maintenance.

Software Development Models(SDLC Models)

Waterfall Model
The Waterfall Model is a linear sequential flow, in which progress is seen as flowing steadily downwards through the phases of software implementation. This means that any phase in the development process begins only if the previous phase is complete. The waterfall approach is the earliest and most widely known approach used for software development.

Advantages:
▪ Easy to explain to users.
▪ Structured approach.
▪ Stages and activities are well defined.
▪ Helps to plan and schedule the project.
▪ Each phase has specific deliverables.

Disadvantages:
▪ Long wait for workable products.
▪ No continuous improvement.
▪ Quality may be compromised.
▪ Very difficult to go back to any stage after it has finished.
▪ Little flexibility; adjusting scope is difficult and expensive.
▪ Costly and requires more time, in addition to the detailed plan.

V-Shaped Model
It is an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the implementation and coding phase, to form the typical V shape. The major difference between the V-shaped model and the waterfall model is the early test planning in the V-shaped model.

• The left side of the model is Software Development Life Cycle – SDLC
• The right side of the model is Software Test Life Cycle – STLC
• The entire figure looks like a V, hence the name V – model

Spiral Model
It combines elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. The spiral model is favored for large, expensive, and complicated projects. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations.

Usage

It is used in large applications and systems which are built in small phases or segments.

Advantages

o High amount of risk analysis


o Useful for large and mission-critical projects.

Disadvantages

o Can be a costly model to use.


o Risk analysis requires highly specific expertise.
o Doesn't work well for smaller projects.

Iterative Model
In this model, you start with some of the software specifications and develop the first version of the software. If changes are needed after the first version, a new version of the software is created in a new iteration. Each release of the iterative model finishes in an exact, fixed period called an iteration.

The iterative model allows revisiting earlier phases, in which variations are made accordingly. The final output of the project is renewed at the end of the Software Development Life Cycle (SDLC) process.

Advantage(Pros) of Iterative Model:

1. Testing and debugging during smaller iteration is easy.


2. Parallel development can be planned.
3. It easily adapts to the ever-changing needs of the project.
4. Risks are identified and resolved during iteration.
5. Limited time spent on documentation and extra time on designing.

Disadvantage(Cons) of Iterative Model:

1. It is not suitable for smaller projects.


2. More Resources may be required.

Incremental Model
In the incremental model the whole requirement is divided into various builds. In this model, each module passes through the requirements, design, implementation, and testing phases. A working version of the software is produced during the first build, so you have working software early in the software life cycle.

Advantage of Incremental Model

o Errors are easy to be recognized.


o Easier to test and debug
o More flexible.
o Risk is simple to manage because it is handled during each iteration.
o The Client gets important functionality early.

Disadvantage of Incremental Model

o Need for good planning


o Total Cost is high.

Agile Model
It is based on iterative and incremental development, where requirements and
solutions evolve through collaboration between cross-functional teams.

The usage

It can be used with any type of project, but it needs more engagement from the customer and needs to be interactive. It can also be used when the customer needs to have some functional requirements ready in less than three weeks and the requirements are not clear enough.

Principles of agile methodology


1. Early and continuous delivery of valuable software.
Customer satisfaction is crucial to a product’s early and ongoing success. This
principle emphasizes the importance of a continuous cycle of feedback and
improvement. A minimum viable product (MVP) is released to the market and the
response informs future releases.
2. Welcome changing requirements

Development teams react to issues and change the product to satisfy customer
needs. Strategies and processes may be reconsidered to safeguard the product’s
quality.
3. Deliver working software frequently
Work on achieving goals on smaller scales, ultimately contributing to the
product’s overall completion. Teams have tighter structures and more concrete
goals to work towards.
4. Business people and developers must work together daily throughout the
project.
Agile principles unify different departments, prioritizing regular collaboration and
communication to share information/resources.
5. Motivated individuals
Appointing the right people with the right skills to the right roles is vital to
achieving success with agile principles. They should be trusted to do their job
properly, without disruptive micromanagement.
6. Face-to-face conversation
This emphasizes the importance of ongoing collaboration and idea-sharing, with
daily meetings, sprint planning, demos, and more.
7. Working software is the primary measure of progress.
Development teams work on Minimum Viable Features instead of trying to
perfect complete feature sets. Idea testing should be fast, as useful products
released now are better than those released a year down the line.
8. Sustainable development
It’s vital for product teams to have realistic goals and manageable expectations
during sprints. This aids morale and prevents staff from becoming burned out.
9. Technical excellence
Products should be reviewed after each iteration to ensure real improvement is taking place.
10. Simplicity
Agile is about keeping processes simple and streamlining the entire cycle, and the
Agile principles help keep that on track. Even the most minor distractions or
unnecessary tasks can slow progress. Embrace automation tools whenever
possible.

11. Self-organizing teams


Teams should be autonomous and capable of acting faster, without having to
secure permission on every little task.
12. Reflect to become more effective
Teams should be encouraged to reflect on their progress and make changes to
the product, rather than moving ahead blindly.

Scrum
Scrum is a framework that helps teams work together. Scrum is structured to help
teams naturally adapt to changing conditions and user requirements, with re-
prioritization built into the process and short release cycles so your team can
constantly learn and improve.

Scrum artifacts

Product Backlog is the primary list of work that needs to get done, maintained by the product owner or product manager. It is a dynamic list of features, requirements, enhancements, and fixes that acts as the input for the sprint backlog.

Sprint Backlog is the list of items, user stories, or bug fixes selected by the development team for implementation in the current sprint cycle.

Before each sprint, in the sprint planning meeting the team chooses which items
it will work on for the sprint from the product backlog. A sprint backlog may be
flexible and can evolve during a sprint

Burn up and burn down chart

A burn-down chart shows how much work remains to be done in the project, whereas a burn-up chart shows how much work has been completed.
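The two charts are just two views of the same sprint data. The sketch below uses made-up sprint numbers to show the relationship:

```python
# The data behind the two charts, with made-up sprint numbers:
# burn-up tracks cumulative completed work, burn-down tracks remaining work.
total_points = 40                        # committed story points (assumed)
burn_up = [0, 5, 9, 14, 22, 28, 33, 40]  # completed points, day by day
burn_down = [total_points - done for done in burn_up]

print("burn-up:  ", burn_up)
print("burn-down:", burn_down)
```

Burn-down starts at the full commitment and should reach zero by the end of the sprint; burn-up rises toward the commitment and also makes scope changes visible (the target line would move).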

Scrum ceremonies or events

Organize the backlog: Sometimes known as backlog grooming, this event is


the responsibility of the product owner. The product owner’s main jobs are to
drive the product towards its product vision and have a constant pulse on the
market and the customer. Therefore, he/she maintains this list using feedback
from users and the development team to help prioritize and keep the list clean
and ready to be worked on at any given time.

Sprint planning: The work to be performed (scope) during the current sprint is planned during this meeting by the entire development team. The meeting is led by the scrum master and is where the team decides on the sprint goal and estimates the stories; specific user stories are then added to the sprint from the product backlog.

At the end of the planning meeting, every scrum member needs to be clear on
what can be delivered in the sprint and how the increment can be delivered.

Sprint: A sprint is the actual time period when the scrum team works together to
finish an increment. Two weeks is a pretty typical length for a sprint, though some
teams find a week to be easier to scope or a month to be easier to deliver a
valuable increment.

All the events — from planning to retrospective — happen during the sprint. Once
a certain time interval for a sprint is established, it has to remain consistent
throughout the development period. This helps the team learn from past
experiences and apply that insight to future sprints.

Daily scrum or stand up: This is a daily super-short meeting that happens at
the same time and place to keep it simple.
The stand up is the time to voice any concerns you have with meeting the sprint
goal or any blockers.

A common way to conduct a stand-up is for every team member to answer three questions in the context of achieving the sprint goal:

• What did I do yesterday?


• What do I plan to do today?
• Are there any obstacles?

Sprint review: At the end of the sprint, the team gets together for an informal
session to view a demo of, or inspect, the increment. The development team
showcases the backlog items that are now ‘Done’ to stakeholders and teammates
for feedback. The product owner can decide whether or not to release the
increment, although in most cases the increment is released.

Sprint retrospective: The retrospective is where the team comes together to


document and discuss what worked and what didn’t work in a sprint, a project,
people or relationships, tools, or even for certain ceremonies. The idea is to
create a place where the team can focus on what went well and what needs to be
improved for the next time, and less about what went wrong.

Three essential roles for scrum

A scrum team needs three specific roles: product owner, scrum master, and the
development team. And because scrum teams are cross-functional, the
development team includes testers, designers, UX specialists, and ops engineers
in addition to developers.

The scrum product owner
Product owners are the champions for their product. They are focused on
understanding business, customer, and market requirements, then prioritizing the
work to be done by the engineering team accordingly. Effective product owners:

• Build and manage the product backlog.


• Closely partner with the business and the team to ensure everyone understands
the work items in the product backlog.
• Give the team clear guidance on which features to deliver next.
• Decide when to ship the product with a predisposition towards more frequent
delivery.

The scrum master


Scrum masters are the champions for scrum within their teams. They coach
teams, product owners, and the business on the scrum process, and look for ways
to fine-tune their practice of it.

An effective scrum master deeply understands the work being done by the team
and can help the team optimize their transparency and delivery flow. As the
facilitator-in-chief, he/she schedules the needed resources (both human and
logistical) for sprint planning, stand-up, sprint review, and the sprint
retrospective.

The scrum development team


They are the champions for sustainable development practices. The most
effective scrum teams are tight-knit, co-located, and usually five to seven
members. One way to work out the team size is to use the famous ‘two pizza rule’
coined by Jeff Bezos, the CEO of Amazon (the team should be small enough to
share two pizzas).

Team members have differing skill sets, and cross-train each other so no one
person becomes a bottleneck in the delivery of work. Strong scrum teams are self-organising and approach their projects with a clear ‘we’ attitude. All members
of the team help one another to ensure a successful sprint completion.

Requirements Traceability Matrix (RTM)

It is a document that maps and traces user requirements to test cases. The main purpose of the Requirement Traceability Matrix is to validate that all requirements are covered by test cases, so that no functionality is left unchecked during software testing.

Advantage of Requirement Traceability Matrix

• It confirms 100% test coverage


• It highlights any missing requirements
• It shows the overall defects or execution status with a focus on business
requirements
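In its simplest form an RTM is a mapping from requirement IDs to the test cases covering them. The sketch below uses made-up IDs to show how coverage gaps surface:

```python
# A minimal sketch of an RTM as a mapping from requirement IDs to the
# test cases covering them (all IDs are made up for illustration).
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case yet: a coverage gap the RTM exposes
}

uncovered = [req for req, cases in rtm.items() if not cases]
coverage = 100 * (len(rtm) - len(uncovered)) / len(rtm)
print(f"requirement coverage: {coverage:.0f}%, missing tests for: {uncovered}")
```

The same structure can later carry execution status and defect IDs per test case, which is how the RTM shows overall execution status against business requirements.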

IMPORTANT TESTING TERMS
Test bed/Test environment

• A testing environment is a setup of software and hardware on which the testing team executes test cases.
• It is the complete environment on which you are going to execute your test cases, including the OS, hardware, the testing tools you are using, the bug tracking tools you are using, etc.

Test suite
The most common term for a collection of test cases is a test suite.
The test suite often also contains more detailed instructions or goals for each
collection of test cases.
Use case
A use case defines how to use the system for executing a precise task. With the help of the use case, we get to know how the product should work.

Test scenario

Test scenarios are derived from test artifacts like the BRS and SRS. A test scenario is more focused on what to test: it is defined as any functionality that can be tested. It is a collective set of test cases which helps the testing team to determine the positive and negative characteristics of the application, and it gives a high-level idea of what we need to test.

Test Scenario 1: Check the Search Functionality

Test Scenario 2: Check the Payments Functionality

Test Scenario 3: Check the Login Functionality

Test case

A test case is a set of actions executed to verify a particular feature or functionality of your software application. Test cases are mostly derived from test scenarios and are focused on how to test. A test case contains a test ID, test description, test steps, test data, expected result, actual result, and status.

1. Check system behavior when valid email id and password is entered.


2. Check system behavior when invalid email id and valid password is entered.
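The two checks above can be written as a runnable sketch. Here `validate_login` is a stand-in for the real system under test, and `"s3cret"` is an assumed stored password, not a real credential scheme:

```python
# A minimal sketch of the two login test cases; validate_login is an
# assumed stand-in for the system under test.
import re

def validate_login(email, password):
    # A simple structural email check plus an (assumed) password match.
    valid_email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None
    return valid_email and password == "s3cret"

def test_valid_email_and_password():
    assert validate_login("user@example.com", "s3cret") is True

def test_invalid_email_and_valid_password():
    assert validate_login("not-an-email", "s3cret") is False

test_valid_email_and_password()
test_invalid_email_and_valid_password()
print("2 test cases passed")
```

Each function corresponds to one test case: a set of steps, input data, and an expected result that either passes or fails.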

Test script
It is a short program written in a programming language used to test part of
the functionality of a software system. A written set of steps that should be
performed automatically can also be called a test script.

STLC(Software Testing Life Cycle)
It is a sequence of specific activities conducted during the testing process to
ensure software quality goals are met. STLC involves both verification and
validation activities.
There are following six major phases in every Software Testing Life Cycle Model
(STLC Model):

1. Requirement Analysis
2. Test Planning
3. Test case development
4. Test Environment setup
5. Test Execution
6. Test Cycle closure

1. Requirement Analysis
In this phase, also known as Requirement Analysis, the test team studies the requirements from a testing point of view to identify testable requirements. Requirements could be either functional or non-functional. An automation feasibility study for the testing project is also done in this stage.
Activities in Requirement Phase Testing

• Identify types of tests to be performed.


• Gather details about testing priorities and focus.
• Prepare RTM.

2. Test Planning

It is a phase in which a senior QA manager determines the test plan strategy along with effort and cost estimates for the project. Moreover, the resources, test environment, test limitations, and the testing schedule are also determined. The test plan gets prepared and finalized in this phase.
Test Planning Activities

• Preparation of test plan/strategy document for various types of testing


• Test tool selection
• Test effort estimation
• Resource planning and determining roles and responsibilities.
• Training requirement

Deliverables of Test Planning

• Test plan strategy document.

3. Test Case Development


This involves the creation, verification and rework of test cases and test scripts
once the test plan is ready. Initially the test data is identified, then created and
reviewed, and then reworked based on the preconditions. The QA team then
starts developing the test cases for the individual units.

Test Case Development Activities

• Create test cases, automation scripts (if applicable)


• Review and baseline test cases and scripts

Test case format
A test case is typically documented in a tabular format with columns such as Test
Case ID, Description, Preconditions, Test Steps, Test Data, Expected Result, Actual
Result and Status.

Test Environment Setup
It decides the software and hardware conditions under which a work product is
tested. It is one of the critical aspects of the testing process and can be done in
parallel with the Test Case Development Phase. Test team may not be involved in
this activity if the development team provides the test environment. The test
team is required to do a readiness check (smoke testing) of the given
environment.
Test Environment Setup Activities

• Understand the required architecture and environment set-up, and prepare
the hardware and software requirement list for the test environment.
• Set up the test environment and test data.
• Perform a smoke test on the build.

Test Execution
Test execution is carried out by the testers: the software build is tested based
on the test plans and test cases prepared. The process consists of test script
execution, test script maintenance and bug reporting. Reported bugs are
returned to the development team for correction, after which retesting is
performed.
Test Execution Activities

• Execute tests as per plan


• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure

Deliverables of Test Execution

• Completed RTM with the execution status


• Test cases updated with results
• Defect reports

Test Cycle Closure

This phase marks the completion of test execution and involves activities such as
test completion reporting and collection of test completion metrics and test
results. Testing team members meet, discuss and analyze the testing artifacts to
identify strategies to implement in future, taking lessons from the current test
cycle.
Test Cycle Closure Activities

• Evaluate cycle completion criteria based on time, test coverage, cost,
software quality and critical business objectives.
• Prepare test metrics based on the above parameters.
• Document the learning from the project.
• Prepare the test closure report.
• Analyze test results to find the defect distribution by type and severity.

Deliverables of Test Cycle Closure

• Test Closure report


• Test metrics

When to stop testing?


1. The deadline is reached
2. Test case execution is complete
3. Management decision
4. The bug rate falls below a certain level and no high-priority bugs are
identified

DEFECT REPORTING

Defect severity
Severity is defined as the degree of impact a defect has on the development or
operation of the component or application being tested.
The higher the effect on system functionality, the higher the severity assigned to
the bug. The Quality Assurance engineer usually determines the severity level of
a defect.

6 types
▪ Blocker: blocks development and/or testing work.
▪ Critical: crashes, loss of data, severe memory leak.
▪ Major: major loss of function.
▪ Minor: minor loss of function, or other problem where an easy
workaround is present.
▪ Trivial: cosmetic problem like misspelled words or misaligned text.
▪ Enhancement: a request for enhancement.

DEFECT PRIORITY
Priority is defined as the order in which a defect should be fixed: the higher the
priority, the sooner the defect should be resolved. Defects that leave the
software system unusable are given higher priority than defects that cause only
a small part of the functionality to fail.

Defect priority can be categorized into three classes:

Low: the defect is an irritant, but repair can wait until the more serious defects
have been fixed.
Medium: the defect should be resolved during the normal course of development
activities; it can wait until a new version is created.
High: the defect affects the system severely and must be resolved as soon as
possible, as the system cannot be used until it is fixed.

Let us see examples of low severity with high priority and vice versa.

A very low severity with a high priority: a logo error on a shipment website can
be of low severity, as it does not affect the functionality of the website, but of
high priority, because you do not want any further shipments to proceed with
the wrong logo.
A very high severity with a low priority: likewise, for a flight-operating website,
a defect in the reservation functionality may be of high severity but low priority,
as its fix can be scheduled for release in the next cycle.

Bug life cycle

Defect life cycle management is a process that outlines the steps involved
in identifying, tracking, reporting and resolving defects or issues in software.
It is an integral part of the software development life cycle (SDLC) and involves
the following stages.
1. Defect identification: this stage involves identifying defects or issues in the
software, which can be done through testing, user feedback, or other means.
2. Defect logging: once a defect is identified, it needs to be logged into a tracking
system or defect management tool (e.g. Jira, Bugzilla), along with relevant details
such as its severity, priority, and description.
3. Defect assignment: the logged defect is then assigned to the relevant team
member or developer responsible for fixing it.
4. Defect resolution: the assigned team member or developer works on fixing the
defect and marks it as resolved once the fix is complete.

1. Duplicate: if the bug is reported twice, or two bugs describe the same
issue, then one bug's status is changed to "duplicate".

2. Rejected: if the developer feels that the bug is not genuine, he rejects the
bug, and its state is changed to "rejected".

3. Deferred: a bug changed to the deferred state is expected to be fixed in
future releases. The reasons for deferring a bug include low priority, lack
of time before the release, or the bug not having a major effect on the
software.

4. Not a bug: the state is set to "Not a bug" if there is no change in the
functionality of the application. For example, if the customer asks for some
change in the look and feel of the application, like a change of colour of
some text, then it is not a bug but just a change in the appearance of
the application.

5. Defect verification: the resolved defect is then retested to ensure that the fix
has resolved the issue.
6. Defect closure: finally, the defect is marked as closed once it has been verified
and confirmed as fixed.
By following this process, defects can be managed efficiently throughout the
SDLC, ensuring that they are identified, tracked, and resolved in a timely
manner, which ultimately results in higher quality software.

METRIC
Test metrics are indicators of the efficiency, effectiveness, quality and
performance of software testing techniques
• Process metric
• Product metric
• Software quality metric

Process metric
Metrics used to measure the characteristics of the methods, techniques
and tools employed in developing, implementing and maintaining the software
system.
Product metric
Metrics used to measure the characteristics of the documentation and code.
Software quality metric
Metrics that focus on the quality aspects of the product, process, and project.
Process Metrics:

Software test metrics used in the test preparation and test execution phases of
the STLC.

The following are generated during the test preparation phase of the STLC:

Test Case Preparation Productivity:

It relates the number of test cases prepared to the effort spent on their
preparation.

Test Case Preparation Productivity = (No of Test Case)/ (Effort spent for Test Case
Preparation)

Test Design Coverage:

It helps to measure the percentage of test case coverage against the number of
requirements

Test Design Coverage = ((Total number of requirements mapped to test cases) /


(Total number of requirements)*100

Test Execution Productivity:

It determines the number of Test Cases that can be executed per hour

(No of Test cases executed)/ (Effort spent for execution of test cases)

Test Execution Coverage:

It measures the number of test cases executed against the number of test
cases planned.

Test Execution Coverage = (Total no. of test cases executed / Total no. of test
cases planned to execute)*100

Test Cases Passed:

It measures the percentage of test cases passed.

Test Cases Pass = (Total no. of test cases passed) / (Total no. of test cases
executed) * 100

Test Cases Failed:

It measures the percentage of test cases failed.

Test Cases Failed = (Total no. of test cases failed) / (Total no. of test cases
executed) * 100

Test Cases Blocked:

It measures the percentage of test cases blocked.

Test Cases Blocked = (Total no. of test cases blocked) / (Total no. of test cases
executed) * 100
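The preparation and execution metrics above can be sketched as a small calculation. The counts used here are purely illustrative, not from any real project:

```python
# Hedged sketch: computing the test-preparation and execution metrics above.
def percentage(part, whole):
    """Apply the (part / whole) * 100 pattern used by most test metrics."""
    return round(part / whole * 100, 2)

# Illustrative project counts (assumptions, not real data).
test_cases_prepared = 120
prep_effort_hours = 40
prep_productivity = test_cases_prepared / prep_effort_hours  # test cases per hour

requirements_total = 50
requirements_mapped = 45
design_coverage = percentage(requirements_mapped, requirements_total)

executed, planned = 110, 120
passed, failed, blocked = 90, 15, 5

execution_coverage = percentage(executed, planned)
pass_rate = percentage(passed, executed)
fail_rate = percentage(failed, executed)
blocked_rate = percentage(blocked, executed)

print(prep_productivity)    # 3.0 test cases prepared per hour
print(design_coverage)      # 90.0
print(execution_coverage)   # 91.67
print(pass_rate, fail_rate, blocked_rate)  # 81.82 13.64 4.55
```

Note that pass, fail and blocked rates are all computed against executed test cases, while execution coverage is computed against planned test cases.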

Product metric:

Software Test Metrics used in the process of defect analysis phase of STLC.

Error Discovery Rate:

It is to determine the effectiveness of the test cases.

Error Discovery Rate = (Total number of defects found /Total no. of test cases
executed)*100

Defect Fix Rate:

It helps to know the quality of a build in terms of defect fixing.

Defect Fix Rate = (Total no of Defects reported as fixed - Total no. of defects
reopened) / (Total no of Defects reported as fixed + Total no. of new Bugs due to
fix)*100

Defect Density:

It is defined as the ratio of defects to requirements.

Defect density determines the stability of the application.

Defect Density = Total no. of defects identified / Actual Size


(requirements)

Defect Leakage:

It is used to review the efficiency of the testing process before UAT.

Defect Leakage = ((Total no. of defects found in UAT)/(Total no. of defects found
before UAT)) * 100

Defect Removal Efficiency:

It allows us to compare the overall (defects found pre and post-delivery) defect
removal efficiency

Defect Removal Efficiency = ((Total no. of defects found pre-delivery) /( (Total no.
of defects found pre-delivery )+ (Total no. of defects found post-delivery)))* 100
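The defect-analysis metrics above follow the same pattern; a minimal sketch with illustrative counts (all numbers are assumptions for demonstration):

```python
# Hedged sketch: the product metrics above, computed on made-up counts.
defects_found = 22
test_cases_executed = 110
error_discovery_rate = defects_found / test_cases_executed * 100

fixed, reopened, new_bugs_from_fix = 20, 2, 1
defect_fix_rate = (fixed - reopened) / (fixed + new_bugs_from_fix) * 100

requirements = 50
defect_density = defects_found / requirements  # defects per requirement

defects_in_uat = 3
defects_before_uat = 22
defect_leakage = defects_in_uat / defects_before_uat * 100

pre_delivery, post_delivery = 22, 3
dre = pre_delivery / (pre_delivery + post_delivery) * 100  # removal efficiency

print(round(error_discovery_rate, 2))  # 20.0
print(round(defect_fix_rate, 2))       # 85.71
print(round(defect_density, 2))        # 0.44
print(round(defect_leakage, 2))        # 13.64
print(round(dre, 2))                   # 88.0
```

A high defect leakage or a low defect removal efficiency would indicate that too many defects escape to UAT or to production, pointing at gaps in the earlier test phases.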

Testing levels
• Unit Testing

• Integration testing

• System testing

• User acceptance testing

Unit testing

Unit testing is a type of software testing where individual units or components of
a software are tested. The purpose is to validate that each unit of the software
code performs as expected. Unit testing is done during the development (coding)
phase of an application by the developers. Unit tests isolate a section of code
and verify its correctness. Unit testing is the first level of testing and is a
white box testing technique usually performed by the developer.
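A minimal sketch of a unit test, assuming a hypothetical `apply_discount` helper as the unit under test; Python's built-in `unittest` framework is used here only for illustration:

```python
import unittest

# Unit under test: a hypothetical helper, not from any real codebase.
def apply_discount(price, percent):
    """Return the price after a percentage discount; reject bad input."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or discount")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # The unit must reject a discount above 100%.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the isolated tests for this single unit.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests successful:", result.wasSuccessful())
```

Each test exercises the unit in isolation, with no database, network or other module involved, which is what distinguishes unit testing from the higher testing levels.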

Integration testing

Integration testing is defined as a type of testing where software modules are


integrated logically and tested as a group. The purpose of this level of testing is to
expose defects in the interaction between these software modules when they are
integrated.

• Big Bang Approach


• Incremental Approach
• Top Down Approach
• Bottom Up Approach
• Sandwich Approach

Big Bang Testing (non-incremental)

Big bang testing is an integration testing approach in which all the components
or modules are integrated together at once and then tested as a unit. This
combined set of components is considered as one entity while testing. If all
of the components in the unit are not completed, the integration process
cannot execute.

Bottom-up Integration Testing


Bottom up integration testing is a strategy in which the lower level modules are
tested first. These tested modules are then further used to facilitate the testing of
higher level modules. The process continues until all modules at top level are
tested. Once the lower level modules are tested and integrated, then the next
level of modules are formed.
Drivers are used for testing if some modules are not ready.

Top-down Integration Testing


Top down integration testing is a method in which integration testing takes place
from top to bottom following the control flow of software system. The higher
level modules are tested first and then lower level modules are tested and
integrated in order to check the software functionality. Stubs are used for testing
if some modules are not ready.

Sandwich Testing(Hybrid)
sandwich testing is a strategy in which top level modules are tested with lower
level modules at the same time lower modules are integrated with top modules
and tested as a system. It is a combination of Top-down and Bottom-up
approaches therefore it is called Hybrid Integration Testing. It makes use of both
stubs as well as drivers.

Stubs and Drivers


Stubs and drivers are dummy programs used in integration testing to
facilitate the software testing activity. These programs act as substitutes for the
missing modules in the testing. They do not implement the entire programming
logic of the software module, but they simulate data communication with the
calling module while testing.
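As a hedged sketch, the stub below stands in for a payment module that is not ready yet, so the order module that calls it can still be integration-tested; all class and function names are hypothetical:

```python
# Stub: simulates the missing payment module's interface with canned replies.
class PaymentServiceStub:
    def charge(self, amount):
        # No real logic: just echo a success response like the real module would.
        return {"status": "approved", "amount": amount}

# Module under test: depends on the payment module through its interface.
def place_order(item, amount, payment_service):
    receipt = payment_service.charge(amount)
    if receipt["status"] != "approved":
        return None
    return {"item": item, "paid": receipt["amount"]}

# Driver code: the test itself calls the module under test with the stub.
order = place_order("book", 12.50, PaymentServiceStub())
print(order)  # {'item': 'book', 'paid': 12.5}
```

Once the real payment module is ready, the stub is replaced by it and the same integration tests are re-run against the genuine interaction.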

System testing

System testing is a level of testing that validates the complete and fully
integrated software product. The purpose of a system test is to evaluate the
end-to-end system specifications. As the name implies, all the components of
the software are tested as a whole in order to ensure that the overall
product meets the requirements specified.

System testing enables testers to ensure that the product meets business
requirements, as well as to determine that it runs smoothly within its operating
environment. This type of testing is typically performed by a specialized testing
team.

User acceptance testing
User Acceptance Testing (UAT) is a type of testing performed by the end user or
the client to verify/accept the software system before moving the software
application to the production environment. UAT is done in the final phase of
testing after functional, integration and system testing is done. The aim of this
type of testing is to evaluate whether the system complies with the
end-user requirements and if it is ready for deployment.

Static Testing
Static testing is a software testing technique in which the software is tested
without executing the code: it involves examination of the program's code and
its associated documentation, but does not require the program to be executed.

It starts early in the Life cycle and so it is done during the verification process.

Static testing is performed due to the following reasons

• Early defect detection and correction

• Reduced development timescales
• Reduced testing cost and time
• For improvement of development productivity
• To get fewer defect at a later stage of testing

Types of defects that are easier to find during static testing are: deviation from
standards, missing requirements, design defects, non-maintainable code and
inconsistent interface specifications.

It has two parts as listed below:

o Review

o Static analysis

REVIEWS

During reviews, participants question development decisions, recommend
improvements, and examine work products to determine status and conformance
to requirements. The review is an aid to quality and determines status.

During the review process, four types of participants take part:

• Moderator: leads the review process; determines the type of review,
schedules meetings, distributes documents to the other participants and
coaches team members.
• Author: the writer of the document under review; takes responsibility for
fixing the defects found and improves the quality of the document.
• Scribe: records each defect found and any suggestions given in the meeting
for process improvement; logs the defects during a review and attends the
review meeting.
• Reviewer: checks the material for defects and inspects it.

Types of reviews

• Informal Review

In informal review the creator of the documents put the contents in front
of audience and everyone gives their opinion and thus defects are
identified in the early stage.

• Walkthrough

It is basically performed by an experienced person or expert to check for
defects, so that problems do not arise later in the development or testing
phase.

• Peer Review

Peer review means checking each other's documents to detect and fix
defects. It is basically done within a team of colleagues.

• Inspection

Inspection is the formal verification of a document by a higher authority,
for example the verification of a software requirements specification (SRS).

Dynamic Testing
Dynamic testing is a software testing technique where testing is carried out by
executing the code. This type of testing comes under validation.

The main purpose of dynamic testing is to ensure the consistency of the software.

Consistency is not limited to functionality; it also covers standards such as
performance, usability and compatibility, which makes dynamic testing very
important.

Types of Dynamic Testing

• White Box Testing


• Black Box Testing

White Box Testing

White box testing is a software testing method in which the internal structure/
design is known to the tester. Its main aim is to check how the system performs
based on the code, and it is mainly performed by developers. Testing is done
with knowledge of the internal structure of the program, with the goal of testing
the internal operation of the system. It is also called glass box, open box,
transparent box, clear box or code-based testing.

Black Box Testing
Black box testing is a method of testing in which the internal structure/
code/design is not known to the tester. The main aim of this testing is to verify
the functionality of the system under test; it requires executing the complete
test suite and is mainly performed by testers, with no programming knowledge
needed. The main goal is to test the behaviour of the software. Black box
testing is focused on the external or end-user perspective and can be applied to
virtually every level of software testing: unit, integration, system, and
acceptance.

Gray Box Testing


Gray Box Testing is a software testing method, which is a combination of
both white box and Black Box Testing method.

• In White Box testing internal structure (code) is known


• In Black Box testing internal structure (code) is unknown
• In Grey Box Testing internal structure (code) is partially known

Grey Box Testing or Gray box testing is a software testing technique to test a
software product or application with partial knowledge of internal structure of
the application.

Functional Vs Non-Functional Testing
• Functional testing is performed using the functional specification provided
by the client and verifies the system against the functional requirements.
Non-functional testing checks the performance, reliability, scalability and
other non-functional aspects of the software system.
• Functional testing is executed first; non-functional testing is performed
after functional testing.
• Manual testing or automation tools can be used for functional testing;
tools are more effective for non-functional testing, e.g. LoadRunner,
Apache JMeter, Postman.
• Functional testing describes what the product does; non-functional testing
describes how well the product works.
• Manual testing is easy for functional testing but tough for non-functional
testing.
• Examples of functional testing: unit, smoke, sanity, integration, white box,
black box, user acceptance and regression testing.
• Examples of non-functional testing: performance, load, stress, security,
installation, penetration and compatibility testing.

Black box testing techniques
Equivalence Class Partitioning Testing (ECP)
Equivalence partitioning, or equivalence class partitioning, is a black box testing
technique which can be applied at all levels of software testing: unit,
integration, system, etc. In this technique, the input data are divided into
equivalent partitions from which test cases can be derived, reducing the time
required for testing because of the small number of test cases.

• It divides the input data of software into different equivalence data classes.
• You can apply this technique, where there is a range in the input field.
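A minimal sketch of equivalence partitioning, assuming a hypothetical age field that accepts values 18-60; one representative value per partition stands for the whole class:

```python
# Hedged sketch: three equivalence partitions for an age field accepting 18-60.
# Any value inside a partition is assumed to behave like its representative.
partitions = {
    "invalid_below": 10,   # any value below 18 behaves the same
    "valid": 35,           # any value in 18..60 behaves the same
    "invalid_above": 75,   # any value above 60 behaves the same
}

def age_accepted(age):
    # Hypothetical validation rule under test.
    return 18 <= age <= 60

for name, value in partitions.items():
    verdict = "accepted" if age_accepted(value) else "rejected"
    print(name, value, verdict)
```

Three test cases thus cover the whole input space, instead of one per possible age.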

Boundary value analysis

A widely used black box design technique, boundary value analysis tests the
input values at the boundaries of the valid ranges, since errors tend to cluster
there. For each boundary, consider 6 values: min-1, min, min+1, max-1, max,
max+1.
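These six boundary values can be generated mechanically; a small sketch, assuming a hypothetical field that accepts 18-60:

```python
# Hedged sketch: derive the six boundary-value-analysis test inputs
# for a numeric field with an inclusive valid range [minimum, maximum].
def boundary_values(minimum, maximum):
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```

The values just outside the range (17 and 61) must be rejected, while the four on and just inside it must be accepted, which catches the common off-by-one mistakes in validation code.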

Decision Table
Decision table testing is a software testing technique used to test system behavior
for different input combinations. This is a systematic approach where the
different input combinations and their corresponding system behavior (Output)
are captured in a tabular form. That is why it is also called a cause-effect table,
where Cause and effects are captured for better test coverage. A Decision Table is
a tabular representation of inputs versus rules/cases/test conditions. A decision
table helps to check all possible combinations of conditions for testing and testers
can also identify missed conditions easily.
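A hedged sketch of a decision table for a hypothetical login form, with one rule per input combination, exercised in full:

```python
# Hedged sketch: decision table for a login form, one rule per row:
# (email valid?, password valid?) -> expected outcome.
decision_table = [
    (True,  True,  "show home page"),
    (True,  False, "show error message"),
    (False, True,  "show error message"),
    (False, False, "show error message"),
]

# Hypothetical behaviour under test.
def login_outcome(email_ok, password_ok):
    return "show home page" if email_ok and password_ok else "show error message"

# Exercise every rule in the table.
for email_ok, password_ok, expected in decision_table:
    assert login_outcome(email_ok, password_ok) == expected
print("all", len(decision_table), "rules pass")
```

With two binary conditions there are 2^2 = 4 rules; writing them out as a table makes it easy to spot a combination that was never specified or tested.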

State Transition
State Transition Testing is a black box testing technique in which changes made in
input conditions cause state changes or output changes in the Application under
Test(AUT). State transition testing helps to analyze behaviour of an application for

different input conditions. Testers can provide positive and negative input test
values and record the system behavior.
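A minimal sketch of state transition testing, assuming a hypothetical ATM PIN flow that blocks the card after three wrong attempts; states and events are illustrative:

```python
# Hedged sketch: (state, event) -> next state for a 3-attempt PIN lock.
# Any pair not listed is an invalid transition and leaves the state unchanged.
transitions = {
    ("ready", "correct_pin"): "authenticated",
    ("ready", "wrong_pin"): "attempt_2",
    ("attempt_2", "correct_pin"): "authenticated",
    ("attempt_2", "wrong_pin"): "attempt_3",
    ("attempt_3", "correct_pin"): "authenticated",
    ("attempt_3", "wrong_pin"): "blocked",
}

def next_state(state, event):
    return transitions.get((state, event), state)

# Negative path: three wrong PINs in a row must block the card.
state = "ready"
for _ in range(3):
    state = next_state(state, "wrong_pin")
print(state)  # blocked
```

Test cases are then derived so that every transition in the table, including the blocking path, is exercised at least once.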

Error guessing
It is an experience-based testing technique where the Test Analyst uses his/her
experience to guess the problematic areas of the application. This technique
necessarily requires skilled and experienced testers. It is a type of Black-Box
Testing technique. The Error Guessing technique does not follow any specific
rules.

Eg: 1. Entering blank spaces in the text fields
2. Pressing the submit button without entering values
3. Uploading files exceeding the maximum limit
4. Submitting invalid parameters

Testing Types
Smoke Testing

Whenever a new build is provided by the development team, the software
testing team validates the build and ensures that no major issue exists. The
testing team checks that the build is stable before a detailed level of testing is
carried out.

If testers find that major critical functionality is broken at this initial stage,
the testing team can reject the build and inform the development team
accordingly. Smoke testing is carried out prior to any detailed functional or
regression testing.

Sanity Testing

Sanity testing is a subset of regression testing and is performed when there is
not enough time to do full testing.

Sanity testing is the surface level testing where QA engineer verifies that all the
menus, functions, commands available in the product and project are working
fine.

Regression testing

Regression testing is a type of software testing to confirm that a recent
program or code change has not adversely affected existing features. It is a full
or partial selection of already executed test cases which are re-executed to
ensure that existing functionality still works fine.

For example, in a project there are 5 modules: login page, home page, user's
details page, new user creation and task creation.

Suppose we have a bug in the login page: the username field accepts usernames
shorter than 6 alphanumeric characters, which is against the requirements, as
they specify that the username should be at least 6 alphanumeric characters.
The bug is reported by the testing team to the development team to fix it. After
the development team fixes the bug and passes the app back, the testing team
also checks the other modules of the application to verify that the bug fix does
not affect their functionality.
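The retest and regression steps in this example can be sketched as follows; the `username_valid` function is a hypothetical stand-in for the fixed validation code:

```python
import re

# Hypothetical fixed validator from the example: usernames must be at least
# 6 alphanumeric characters (the original bug accepted shorter ones).
def username_valid(username):
    return re.fullmatch(r"[A-Za-z0-9]{6,}", username) is not None

# Retesting: the previously failed case is re-run to confirm the fix.
assert not username_valid("abc12")      # 5 chars: was wrongly accepted before

# Regression: previously passing cases are re-run to confirm nothing broke.
assert username_valid("abc123")         # exactly 6 chars still accepted
assert username_valid("longusername9")  # longer names still accepted
assert not username_valid("abc 123")    # non-alphanumeric still rejected
print("retest and regression suite passed")
```

The first assertion is the retest of the reported bug; the remaining ones are the regression suite over the unchanged behaviour.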

Difference between Regression and Retesting

1. Retesting is done to make sure that a bug is fixed and the failed functionality
is working fine; it is a verification method for fixed bugs. Regression is the
re-execution of test cases for the unchanged parts, to verify that unchanged
functionality still works fine.

2. Retesting is planned testing, while regression is known as generic testing.

3. Retesting is only done for failed test cases, while regression is done for passed
test cases.

4. Retesting has a higher priority than regression testing, but in bigger projects
retesting and regression are done in parallel. Never forget the importance of
both in the success of the project.

Usability Testing

In usability testing, the testers test the ease with which the user interfaces can
be used: whether the application or product built is user-friendly or not.
Usability testing also reveals whether users feel comfortable with your
application or web site according to different parameters - flow, navigation,
layout, speed and content - especially in comparison to prior or similar
applications.

Negative Testing
Negative testing is performed to ensure that the product or application under
test does not fail when unexpected or invalid input is given: the application
should handle such input gracefully. The purpose of negative testing is to try to
break the system and to verify the application's response to unintended inputs.
It is carried out to spot faults that can result in significant failures, and to
expose software weaknesses and potential for exploitation.
Positive Testing
Positive Testing is a type of testing which is performed on a software application
by providing the valid data sets as an input. It checks whether the software
application behaves as expected with positive inputs or not.

Recovery Testing
Recovery testing is the activity of testing how well an application is able to
recover from crashes, hardware failures and other similar problems. Recovery
testing is the forced failure of the software in a variety of ways to verify that
recovery is properly performed. It is basically done in order to check how fast and
better the application can recover against any type of crash or hardware failure
etc.

Performance testing

Software performance testing involves testing software applications to ensure
they will perform well under their expected workload. Features and functionality
are not the only concern: a software application's performance, such as its
response time, also matters. The goal of performance testing is not to find bugs
but to eliminate performance bottlenecks.

The focus of Performance Testing is checking a software program's

• Speed - Determines whether the application responds quickly

• Scalability - Determines maximum user load the software application can


handle.

• Stability - Determines if the application is stable under varying loads

Load Testing

Testing technique that puts demand on a system or device and measures its
response. It is usually conducted by the performance engineers.

Stress Testing:

A testing technique which evaluates a system or component at or beyond the
limits of its specified requirements. It is usually conducted by performance
engineers.

Endurance Testing/soak testing

A type of testing which checks for memory leaks or other problems that may
occur with prolonged execution. It is usually performed by performance
engineers and is also called soak testing.

Endurance testing involves testing a system with a significant load extended over
a significant period of time, to discover how the system behaves under sustained
use.

For example, in software testing, a system may behave exactly as expected when
tested for 1 hour but when the same system is tested for 3 hours, problems such
as memory leaks cause the system to fail or behave randomly.

Installation Testing

Installation testing checks that a software application is successfully installed
and works as expected after installation. It is the testing phase just before end
users first interact with the actual application. Installation testing is also called
"implementation testing".

Compatibility Testing
Compatibility testing is conducted on the application to evaluate the
application's compatibility with the computing environment.
Compatibility Testing is a type of Software testing to check whether your software
is capable of running on different hardware, operating systems, applications,
network environments or mobile devices.

End-to-end Testing

Similar to system testing, it involves testing a complete application environment
in a situation that mimics real-world use, such as interacting with a database,
using network communications, or interacting with other hardware, applications,
or systems where appropriate. It is performed by QA teams.

Security Testing

Security testing is a type of software testing that uncovers vulnerabilities of the
system and determines that the data and resources of the system are protected
from possible intruders. It ensures that the software system and application are
free from any threats or risks that can cause a loss.

Penetration Testing

A testing method which evaluates the security of a computer system or network
by simulating an attack from a malicious source. Penetration tests are usually
conducted by specialized penetration testing companies.

Gorilla Testing

Gorilla testing is a software testing technique wherein a module of the program
is repeatedly tested to ensure that it is working correctly and there is no bug in
that module.

In gorilla testing, predefined test cases and test data are not required; random
data and test cases are used to test the module. It is also called frustrating
testing.

Exploratory Testing

A black box testing technique performed without prior planning and
documentation; it is usually performed by manual testers.

In exploratory testing, the application is tested while it is being learned: testers
increase their knowledge by testing and exploring the application.

Adhoc Testing

Testing performed without planning and documentation: the tester tries to
'break' the system by randomly trying the system's functionality. It is performed
by the testing team.

Ad-hoc testing means learning the application and then testing it. An ad-hoc
tester should have complete knowledge of the requirements of the system.

Before going for ad-hoc testing, the tester should have done sufficient testing of
the system.

Benefits Realization tests

The benefits realization test is a test or analysis conducted after an application
is moved into production, in order to determine whether the application is likely
to deliver the originally projected benefits. The analysis is usually conducted by
the business user or client group who requested the project, and the results are
reported back to executive management.

Globalization Testing

It is a software testing method used to ensure that a software application can
function in any culture or locale (language, territory or code page) by testing the
software functionality using each type of international input possible. The
purpose of globalization testing is to ensure that the software can be used
internationally or worldwide. It is also called internationalization testing.

Localization Testing
Localization testing is the software testing technique in which the behavior of a
software is tested for a specific region, locale or culture. The purpose of doing
localization testing for a software is to test appropriate linguistic and cultural
aspects for a particular locale. It is the process of customizing the software as
per the targeted language and country. The major area affected by localization
testing includes content and UI.

Spike Testing
Spike Testing is a type of software testing in which a software application is tested with extreme increments and decrements in traffic load. The main purpose of spike testing is to evaluate the behavior of the application under a sudden increase or decrease in user load, and to determine the recovery time after such a spike. Spike testing is performed to expose the weaknesses of software applications.

Penetration Testing
Penetration Testing or Pen Testing is a type of security testing used to uncover vulnerabilities, threats and risks that an attacker could exploit in software applications, networks or web applications. The purpose of penetration testing is to identify and test all possible security vulnerabilities that are present in the software application. Penetration testing is also called a pen test.

Mutation Testing

It is a type of software testing in which certain statements of the source code are changed (mutated) to check whether the test cases are able to find the errors in the source code. The goal of mutation testing is to ensure the quality of the test cases in terms of robustness: a good test suite should fail against the mutated source code.
The changes made in the mutant program are kept extremely small so that they do not affect the overall objective of the program. Mutation testing is also called a fault-based testing strategy, as it involves deliberately creating a fault in the program, and it is a type of white box testing mainly used for unit testing.
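A toy sketch of the idea (all names invented for illustration): an original function, a hand-made mutant with one operator changed, and a test suite strong enough to "kill" the mutant.

```python
# Minimal mutation-testing illustration: original, mutant, and a
# suite that passes on the original but fails (kills) the mutant.

def is_adult(age):
    # Original: adults are 18 or older.
    return age >= 18

def is_adult_mutant(age):
    # Mutant: the relational operator >= was changed to >.
    return age > 18

def run_suite(func):
    """Return True if every test case passes for the given implementation."""
    cases = [(17, False), (18, True), (19, True)]
    return all(func(age) == expected for age, expected in cases)

print(run_suite(is_adult))         # True  - original survives the suite
print(run_suite(is_adult_mutant))  # False - boundary case 18 kills the mutant
```

Note the boundary case `(18, True)` is what kills this mutant; a suite without it would let the mutant survive, revealing a weak test suite.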

Reliability Testing

It is a software testing process that checks whether the software can perform failure-free operation in a particular environment for a specified time period. The purpose of reliability testing is to ensure that the software product is bug-free and reliable enough for its expected purpose.

Reliability means "yielding the same": in other words, "reliable" means something is dependable and will give the same outcome every time. The same is true for reliability testing.
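The "same outcome every time" idea can be sketched as a simple probe: run an operation many times and count failures or inconsistent results. `compute_total` here is a hypothetical stand-in for the feature under test.

```python
# Toy reliability probe: invoke the operation repeatedly and count
# runs that raise an exception or disagree with the first result.

def compute_total(items):
    return sum(items)  # stand-in for the operation under test

def reliability_check(operation, arg, runs=1000):
    """Run `operation` many times; return the number of failures."""
    baseline = operation(arg)
    failures = 0
    for _ in range(runs):
        try:
            if operation(arg) != baseline:
                failures += 1
        except Exception:
            failures += 1
    return failures

print(reliability_check(compute_total, [1, 2, 3]))  # 0 -> failure-free and consistent
```

Real reliability testing runs in the target environment over a specified period; this sketch only shows the repeat-and-compare skeleton.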
API TESTING

API testing is a software testing type that validates Application Programming Interfaces (APIs). The purpose of API testing is to check the functionality, reliability, performance, and security of the programming interfaces.

What is an API?

An API (Application Programming Interface) enables communication and data exchange between two separate software systems. A software system implementing an API contains functions/sub-routines that can be executed by another software system.

API tests are very different from GUI tests and do not concentrate on the look and feel of an application. They mainly concentrate on the business logic layer of the software architecture.
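Since API tests target the business logic layer rather than the UI, a typical check validates the structure and values of a response body. A minimal sketch: the payload below is canned (in a real test it would come from an HTTP client), and the endpoint shape and field names are assumptions for illustration.

```python
# Sketch of an API-layer assertion: validate a JSON response body,
# not any UI. The payload and field names are invented examples.
import json

RAW_RESPONSE = '{"status": "ok", "data": {"id": 42, "name": "widget"}}'

def validate_item_response(raw):
    """Check business-logic expectations on an API response body."""
    body = json.loads(raw)
    assert body["status"] == "ok", "unexpected status"
    item = body["data"]
    assert isinstance(item["id"], int) and item["id"] > 0, "bad id"
    assert item["name"], "name must be non-empty"
    return True

print(validate_item_response(RAW_RESPONSE))  # True
```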

WEB APPLICATION TESTING
Web testing, in simple terms, is checking your web application for potential bugs before it is made live, i.e. before the code is moved into the production environment.

Some or all of the following testing types may be performed depending on your
web testing requirements.

• Functionality testing

• Usability testing

• Interface testing

• Database Testing

• Compatibility Testing

• Performance Testing

• Security Testing

1. Functionality Testing:

This is used to check whether your product meets the specifications you intended for it as well as the functional requirements.

Test that all links in your webpages are working correctly and make sure there are no broken links. Links to be checked include -

• Outgoing links

• Internal links

• Broken Links

• MailTo Links
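A first step toward checking the link categories above is collecting every `href` on a page; the sketch below does that with only the standard library and classifies mailto links. Actually requesting each URL to detect broken (404) links is omitted here.

```python
# Minimal link extractor as groundwork for broken-link checking:
# collect every href from the page, then classify the links.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = ('<a href="/about">About</a> '
        '<a href="https://example.com">External</a> '
        '<a href="mailto:hi@example.com">Mail</a>')
collector = LinkCollector()
collector.feed(page)

mailto = [l for l in collector.links if l.startswith("mailto:")]
print(len(collector.links), len(mailto))  # 3 1
```

Internal vs. outgoing links can be separated the same way by checking whether the href starts with the site's own domain or with `/`.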

Test that forms are working as expected. This will include -

• Scripting checks on the form are working as expected. For example- if a
user does not fill a mandatory field in a form an error message is shown.

• Check default values are being populated

• Once submitted, the data in the forms is submitted to a live database or is linked to a working email address

• Forms are optimally formatted for better readability

Test that cookies are working as expected. Cookies are small files used by websites primarily to remember active user sessions, so you do not need to log in every time you visit a website. Cookie testing will include -

• Testing that cookies (sessions) are deleted either when the cache is cleared or when they reach their expiry.

• Deleting cookies (sessions) and testing that login credentials are asked for when you next visit the site.
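The expiry side of cookie testing can be automated by parsing the `Set-Cookie` header and asserting on the attributes you expect. A sketch using the standard library (the header values here are made up):

```python
# Parse a Set-Cookie header and verify the session cookie's
# expiry and path attributes. Values are illustrative only.
from http.cookies import SimpleCookie

header = 'sessionid=abc123; Max-Age=1800; HttpOnly; Secure; Path=/'
cookie = SimpleCookie()
cookie.load(header)

morsel = cookie["sessionid"]
print(morsel.value)       # abc123
print(morsel["max-age"])  # 1800 - session expires after 30 minutes
print(morsel["path"])     # /
```

A real test would assert, for example, that `max-age` stays within the policy limit and that the `HttpOnly` and `Secure` flags are present for session cookies.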

2. Usability testing:

Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you or by a small focus group similar to the target audience of the web application.

Test the site Navigation:

• Menus, buttons or links to different pages on your site should be easily visible and consistent on all webpages

• Content should be legible with no spelling or grammatical errors.

• Images if present should contain an "alt" text

3. Interface Testing:

Interface testing checks whether the interfaces between the web server and application server, and between the application server and database server, interact properly. It ensures a positive user experience. It includes verifying the communication processes as well as making sure that error messages are displayed correctly. The areas to be tested here are the Application, Web and Database Server.

• Application: Test that requests are sent correctly to the database and that output at the client side is displayed correctly. Errors, if any, must be caught by the application and shown only to the administrator, not to the end user.

• Web Server: Test that the web server is handling all application requests without any denial of service.

• Database Server: Make sure queries sent to the database give expected
results.

4. Database Testing:

The database is a critical component of your web application, and emphasis must be laid on testing it thoroughly. Testing activities will include -

• Test if any errors are shown while executing queries

• Data integrity is maintained while creating, updating or deleting data in the database.

• Check response time of queries and fine tune them if necessary.

• Test that data retrieved from your database is shown accurately in your web application
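The integrity and round-trip checks above can be demonstrated against an in-memory SQLite database. A hedged sketch; the table and column names are invented for the example, not taken from any particular application.

```python
# Database test sketch: verify a UNIQUE constraint is enforced and
# that stored data reads back exactly. Schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))

# Data integrity: a duplicate email must be rejected.
try:
    conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# Round-trip: the data retrieved matches what was stored.
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
print(duplicate_rejected, row[0])  # True a@example.com
```

Query response-time checks would wrap the `execute` calls in timing code, as in the performance testing section below.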

5. Compatibility Testing:

Compatibility tests ensure that your web application displays correctly across different devices. This will include -

Browser Compatibility Test: The same website may display differently in different browsers. You need to test that your web application is displayed correctly across browsers and that JavaScript, AJAX and authentication work fine. You may also check for mobile browser compatibility.

The rendering of web elements like buttons, text fields etc. changes with the operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux and Mac, and browsers such as Firefox, Internet Explorer, Safari etc.

6. Performance Testing:

This will ensure your site works under all loads. Testing activities will include, but are not limited to -

• Website application response times at different connection speeds

• Load test your web application to determine its behavior under normal and
peak loads

• Stress test your web site to determine its break point when pushed to
beyond normal loads at peak time.

• Test how the site recovers if a crash occurs due to peak load

• Make sure optimization techniques like zip compression and browser and server-side caching are enabled to reduce load times
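Measuring response times is the core of the activities above: time an operation over many runs and check the worst case stays within a budget. In a real test the operation would be an HTTP request against the site; here it is a hypothetical stand-in, and the 500 ms budget is an assumption for the example.

```python
# Toy response-time measurement: run an operation repeatedly and
# record worst-case and average durations against a time budget.
import time

def operation():
    sum(range(1000))  # placeholder for a real request round-trip

def timed_runs(func, runs=50):
    """Return (worst, average) duration in seconds over `runs` calls."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        durations.append(time.perf_counter() - start)
    return max(durations), sum(durations) / len(durations)

worst, average = timed_runs(operation)
print(worst < 0.5)  # True: well under a 500 ms per-call budget
```

Load and stress tests apply the same measurement while many such operations run concurrently, watching where the numbers degrade.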

7. Security Testing:

Security testing is vital for e-commerce websites that store sensitive customer information such as credit card details. Testing activities will include -

• Test that unauthorized access to secure pages is not permitted

• Restricted files should not be downloadable without appropriate access

• Check sessions are automatically killed after prolonged user inactivity

• When SSL certificates are in use, the website should redirect to encrypted SSL pages
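The session-inactivity rule above boils down to a simple predicate worth unit testing: a session is dead once idle time exceeds a configured limit. A sketch; the names and the 20-minute limit are assumptions for illustration.

```python
# Sketch of the session-timeout rule: a session survives only while
# its idle time is within the configured inactivity window.
INACTIVITY_LIMIT = 20 * 60  # seconds; illustrative policy value

def session_alive(last_activity, now, limit=INACTIVITY_LIMIT):
    """Return True while the session is within the inactivity window."""
    return (now - last_activity) <= limit

print(session_alive(last_activity=0, now=600))   # True  (10 minutes idle)
print(session_alive(last_activity=0, now=1500))  # False (25 minutes idle)
```

An end-to-end security test would then verify that the server actually invalidates the session cookie once this window elapses.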

MOBILE APPLICATION TESTING

Common Types Of Mobile App Testing

• Usability Testing: Ensures that the app is easy to use and offers the desired user experience.

• Compatibility Testing: Ensures that the app performs well on different devices, browsers, screen sizes, and OS versions.

• Interface Testing: Testing of menu options, navigation, gestures, transitions, buttons, history, and settings.

• Services Testing: Testing the online and offline modes of the app.

• Low-level Resource Testing: Validating local database issues.

• Performance Testing: Testing the app performance by switching between mobile data and WIFI, data sharing, battery usage, etc.

• Operational Testing: Testing of backups in case of data loss during app upgrades.

• Installation Testing: Validating the installation and uninstallation of the app.

• Security Testing: Testing the data protection capability of the mobile app.
