
15/05/2023

Functional Testing

Functional testing is performed to evaluate whether a component or system satisfies certain functional requirements. The ultimate goal of functional tests is to ensure that the application works according to the requirements and fulfils the user's expectations.

What is Smoke Testing?


Smoke Testing is a type of testing that determines whether the deployed build
is stable or not.

Why Perform Smoke Testing?


The purpose of Smoke Testing is to confirm whether the QA team can proceed with further testing. It is also called "Build Verification Testing" or "Confidence Testing".

When to Perform Smoke Testing?


1. Whenever the Dev team provides a fresh build to the QA team. A fresh build is one that contains new changes made by the developers.

2. When a new module/functionality is added.

Pros of Smoke Testing

1. Smoke testing helps to find bugs in the early stages of testing.
2. It improves the quality of the build and reduces risks.
3. Smoke testing can be completed quickly, in a short span of time.
4. It helps to check that the issues fixed in the previous build are NOT affecting the major functionalities of the application.
5. Smoke testing requires only a small number of test cases.
Cons of Smoke Testing

1. Smoke tests do not cover all functionalities of the application.
2. Smoke tests are non-exhaustive, with fewer test cases; they cannot identify critical bugs or performance issues in the application.
3. Smoke tests do not cover negative scenarios or invalid data.

Smoke Testing Example:


Let's take the Gmail application as a simple example.
Here the important functions are:
Login to the Gmail application
Compose an email
Send the email

Why are the above functionalities important when it comes to Smoke Testing?

Assume your email does not get sent… Does it make any sense to test other functionalities such as Drafts, Deleted messages, Archives, etc.? 🤔 No. If even the basic functionality of sending an email is not working, there is no point in testing any further functionalities.

The main focus of Smoke Testing is to test the critical areas & not the whole
application.
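The critical-path idea above can be sketched in code. The GmailClient class below is a hypothetical stand-in for the real application (all names and behaviour here are invented for illustration); a real smoke suite would drive the UI or an API instead:

```python
# Minimal smoke-test sketch for the Gmail example above.
# GmailClient is a hypothetical stand-in, not the real Gmail API.

class GmailClient:
    def __init__(self):
        self.logged_in = False
        self.outbox = []

    def login(self, user, password):
        # Placeholder check; a real client would authenticate remotely.
        self.logged_in = bool(user and password)
        return self.logged_in

    def compose(self, to, subject, body):
        return {"to": to, "subject": subject, "body": body}

    def send(self, email):
        if not self.logged_in:
            raise RuntimeError("not logged in")
        self.outbox.append(email)
        return True


def smoke_test():
    """Exercise only the critical path: login -> compose -> send.
    If any step fails, the build is rejected and deeper testing is skipped."""
    client = GmailClient()
    assert client.login("user@example.com", "secret"), "login failed"
    email = client.compose("friend@example.com", "hi", "hello")
    assert client.send(email), "send failed"
    return "build accepted"
```

If smoke_test() raises, the build is rejected immediately instead of wasting effort on Drafts, Archives, and the rest.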

Sanity Testing

What is Sanity Testing?


Sanity Testing is a subset of Regression Testing which is performed to ensure that the code changes that were made are working properly.

Why Perform Sanity Testing?

The purpose of Sanity Testing is to determine that the changes in functionality and bug fixes are working as expected. It is also called "Tester Acceptance Testing".

When to Perform Sanity Testing?


When a defect/bug is fixed.
When the tester receives a software build with minor changes in the code.

Pros of Sanity Testing

1. Sanity testing helps in identifying issues quickly and reporting them immediately.
2. Little documentation is required, as the test cases are carried out in less time compared to other test types.
3. If any defects are found during sanity testing, the build gets rejected, saving time and effort.
4. Sanity testing helps save unnecessary testing effort and time because it focuses on only one or a few functionality areas.

Cons of Sanity Testing


1. Sanity testing focuses only on the commands and functions of the application.
2. Sanity tests focus only on a limited set of features, so it is difficult to identify major bugs during sanity testing.
3. As sanity tests are unscripted, no artifacts are available for future reference.
4. Not all test cases are covered under sanity tests.

Sanity Testing Example


Let's take the OLA application as an example.

Here the features are:

Signup to the OLA app
Login to the app
Search for a cab
Book a cab

Assume these four features are tested and the "Login" feature is not working. The developer then modifies the code and fixes the defect. Sanity Testing will then be performed only on the modified function.
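One lightweight way to realize this (a sketch, not a prescription) is to tag each test case with the feature it covers and select only the tests for the modified feature. The test names and feature tags below are invented for illustration:

```python
# Sanity-test selection sketch for the OLA example above: after the
# developer fixes the "login" defect, run only the login tests.

TESTS = {
    "test_signup_new_user": "signup",
    "test_login_valid_credentials": "login",
    "test_login_wrong_password": "login",
    "test_search_cab_nearby": "search",
    "test_book_cab_confirmed": "booking",
}

def select_sanity_tests(modified_feature):
    """Return only the tests that exercise the modified feature."""
    return sorted(name for name, feature in TESTS.items()
                  if feature == modified_feature)
```

In a real suite this is typically done with test markers or tags rather than a hand-maintained dictionary.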

Differences between Smoke Testing vs. Sanity Testing

Category | Smoke Testing | Sanity Testing
What     | Smoke testing exercises the entire system from end to end | Sanity testing exercises only the particular component of the entire system
When     | Smoke Testing is performed to make sure that critical functionalities of the application are working fine | Sanity Testing is performed to check that new functionalities work / bugs have been fixed
Why      | The objective is to verify the stability of the system in order to proceed with more rigorous testing | The objective is to verify the rationality of the system in order to proceed with more rigorous testing
Scripted | Smoke testing is usually documented and scripted | Sanity testing is not documented and unscripted
Subset   | Smoke testing is a subset of Acceptance Testing | Sanity testing is a subset of Regression Testing
Analogy  | Smoke testing is like a General Health check-up | Sanity testing is like a Specialized Health check-up

Both smoke testing and sanity testing are performed to avoid wasting time and
effort by quickly determining whether an application is too flawed to merit any
rigorous testing.

When do you do sanity and smoke testing?


Smoke testing is done at the initial stages of the SDLC (Software Development Life Cycle) to verify the core functionalities of an application, whereas a sanity test is performed at the final stage, after completing the smoke testing.

Why is sanity testing performed?

Sanity testing is performed to check whether the code changes are working properly or not. The general focus of sanity testing is to validate the high-level functionalities, not to test every feature of the application.
Which comes first, smoke or sanity?
Smoke testing is performed first, followed by sanity testing. Smoke tests are done at the early stages of the SDLC, testing the basic functionalities of the software, and sanity tests are done at the final stages of the SDLC to test the high-level functionalities of the software.

Are sanity and smoke tests the same?


Sanity and smoke tests employ different concepts.

Smoke testing is a kind of software testing performed to test the critical functionalities of the program. It is executed before functional or regression tests are run on the build. The purpose of a smoke test is to reject a badly broken application so that the QA team does not waste time on a build that is not yet functional.

Sanity testing is performed after receiving a software build with minor changes in the code. If sanity testing fails, the build gets rejected in order to save the QA team's time.

Key Points
Sanity testing is also called tester acceptance testing.
Smoke testing performed on a particular build is also known as a build verification test.
Both smoke and sanity tests can be executed manually or using an automation testing tool.
When automated tools are used, the tests are often initiated by the same process that generates the build itself.
Depending on the needs of testing, you may have to execute both sanity and smoke tests on the same software build. In such cases, you first execute the smoke tests and then go ahead with sanity testing. In industry, test cases for sanity testing are commonly combined with those for smoke testing to speed up test execution.
Conclusion
Smoke and sanity testing are both significant in the development of a project. A smoke test is used to confirm whether the basic functionalities of a particular build are working fine. Sanity testing checks whether the build is good to go to further testing stages.

The common thing among both Sanity and Smoke tests is that they are
employed to avoid wasting time and effort by quickly checking whether or not
an application is fit for more rigorous testing.

Software Test Life Cycle (STLC)

Software Testing is an investigation conducted to provide stakeholders with information about the quality of the software. It is a process which involves the execution of a software system to check whether it is working as expected.
Software Testing mainly focuses on whether the developed system meets the customer's/stakeholder's expectations. Software Testing itself has many phases, and this sequence of phases is called the STLC. The STLC is a part of the Software Development Life Cycle (SDLC).

The flow of the STLC phases is as follows:

Requirements are the input to the STLC; Software Testing starts with them. First, the requirements are studied by the entire team; this is called the System Study. After these first two phases, a Test Plan is created to define the scope of testing. Once the Test Plan is created, detailed Test Cases are written. A Traceability Matrix is created to ensure each requirement is covered by test cases. Test Execution is then carried out, with Defect Tracking for any defects found. Once all test cases are executed, a Test Execution Report is created and a Retrospect Meeting is conducted to discuss what went well and what didn't.

We will see all the phases in detail below:

1. Requirements:
The very first phase is Requirements, and it is the input for testing. It is the phase where Software Testing starts. The requirements are discussed with the entire team.

2. System Study:

The requirements given in the first phase are studied in depth, and the process continues with writing the Test Plan.

3. Write Test Plan:

The Test Plan is a document which drives all the testing activities of the project. All future testing activities are planned and documented in this document, which is known as the Test Plan.

It usually contains the following information:

• Number of Testers needed.

• Who should test which module and which feature.

• Which Defect Tracking tool will be used.

• Start and end dates for writing Test Cases and planned execution dates.

• Scope of Testing.
4. Write Test Cases:

In this phase, Test Cases are created. These Test Cases are reviewed, and once all review comments are addressed and the test cases are approved, they are stored in a Test Case repository.

5. Traceability Matrix:

The Traceability Matrix is a document which ensures that every requirement has a test case. Test cases are written by referring to the requirements, and the matrix maps each requirement to the test cases that cover it.

If a requirement is missed and no test cases are created for it, then that module/feature will not be tested at all. The matrix makes sure that every requirement has at least one test case.
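A minimal traceability check can be sketched as a mapping from requirement IDs to test case IDs. The IDs below are illustrative, not from a real project:

```python
# Traceability-matrix sketch: map each requirement to its test cases
# and flag requirements that have no coverage.

def uncovered_requirements(requirements, matrix):
    """Return requirement IDs that have no test case mapped to them."""
    return sorted(req for req in requirements if not matrix.get(req))

requirements = ["REQ-1", "REQ-2", "REQ-3"]
matrix = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-201"],
    # REQ-3 has no test cases: without this check, its feature
    # would silently go untested.
}
```

Running the check on the sample data above would flag REQ-3 as uncovered.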

6. Defect Tracking:

A bug/defect found by the QA team is sent to the development team. The QA team should follow up until it is fixed; once it is fixed, Retesting (and Regression Testing, based on the need) is done, and the defect is then closed.

7. Test Execution Report:

This is a report generated once testing is completed. Usually it is prepared after every test cycle and sent to the development team, testing team, management team, and sometimes also the customer.
The last Test Execution Report is sent to the customer, and this means that the project is over. It usually contains the following information:

• List of Bugs

• Summary of Test Cases passed and failed.

• Summary of Deferred Test Cases

8. Retrospect Meeting:

It is also called the Post Mortem Meeting or Project Closure Meeting. It is conducted to discuss the achievements and mistakes in the project. "What went well" and "What didn't go well" are discussed and documented in the Quality Management System under the Retrospect folder.

These are the various phases of STLC generally.

Defect Tracking: 5 Common Reasons for Bug Rejection


Why does the Dev team say it is not a defect, or that it is an invalid defect? On seeing the defect report, the developer might say it is not a defect for the following reasons:

Misunderstanding of the requirement. For example, a developer understands a feature as a link, whereas a tester understands it as a button.
The build or software was wrongly installed. For example, there's a mismatch in the build steps.
Referring to an old requirement. For example, when a feature is enhanced, the developer develops it against the new SRD (System Requirements Document), whereas the tester tests it using the old SRD.
Adding extra features. For example, the developer thinks it will be useful to add extra features, but the tester is using a requirement document that lacks the new features.

2. Duplicate defect

On reviewing the defect report, developers might say the defect is a duplicate for the following reasons:

When testing a common feature, testers might report the same defect more than once. For example, a link present on the home page is also present on another page because the navigation is the same.
To reduce the defect count. For example, if the same feature is displayed on two or more pages, fixing it on one page may fix it on all pages.

3. Defect cannot be fixed or won’t fix

Developers might say that a defect cannot be fixed or won't be fixed. Why?

The technology itself does not support a fix. For example, the programming language used to develop the software has no way to solve the issue.
The defect is at the root of the product and is minor, meaning it does not impact the customer's business workflow. (If it is a critical defect, the developer should indeed fix it.) For example, the 'Search' text field does not accept more than 100 characters.
The cost of fixing the defect is more than the cost of the defect itself. For example, a user is not able to add 500 items to a Shopping Cart in an eCommerce application.
The defect is inconsistent. For example, a feature works most of the time but occasionally fails.

4. Issue not reproducible or works for me


Developers might tell you that the issue is not reproducible or that it works fine for them. That might be due to a build mismatch and/or insufficient test data. For example, if the platform/browser/OS is not mentioned in the defect report, the developer may try to reproduce it on another platform, where it might just work fine. We need to make sure we deliver proper defect reports with all the relevant information for reproducing the defect.

5. I will fix the issue in a future release, postpone it, or put it on hold

There are also scenarios in which developers might say they will fix the issue in a future release, postpone it, or put it on hold, for the following reasons:

A minor defect is found at the end of the release, and the developer might not have sufficient time. For example, a spelling mistake in a link.
The customer is planning to make a lot of requirement changes.

These are 5 of the most common reasons I experienced while reporting defects.
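A practical takeaway from the "works for me" and "invalid defect" cases is to check a defect report for completeness before submitting it. The sketch below assumes a hypothetical set of mandatory fields; real trackers define their own:

```python
# Completeness check for a defect report: missing platform/browser/OS
# details are a common reason a developer cannot reproduce an issue.
# The field names here are illustrative, not from a real tracker.

MANDATORY_FIELDS = ["summary", "steps_to_reproduce", "expected", "actual",
                    "build_version", "platform", "browser"]

def missing_fields(report):
    """Return the mandatory fields that are absent or empty."""
    return [f for f in MANDATORY_FIELDS if not report.get(f)]
```

A report that omits the platform and browser would be flagged before it ever reaches the developer.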

B – Beta Testing
Beta testing is a type of acceptance test that is performed at an external site, outside the developer's test environment, by people outside the development organization. This is the final test to be completed before releasing the software to the market, usually to a limited number of end users.
SEVERITY & PRIORITY OF A BUG

Severity and Priority are mandatory fields of a bug report because these two fields help decide how quickly a bug should be fixed.

What is Severity?

Severity is the impact of the bug on the customer's business; it tells how severe the bug is.

What is Priority?

Priority defines how soon the defect should be fixed. It defines the importance of the bug.

Generally, Severity is assigned by the Tester/Test Lead and Priority is assigned by the Developer/Project Lead, though it ultimately requires the whole team to decide. The development team will fix high-Priority bugs first, rather than high-Severity ones.

Examples of Different Combinations:

High Priority & Low Severity: A wrong company logo or brand image; it won't cause a lot of damage but needs to be fixed as soon as possible.

High Priority & High Severity: In an eCommerce website, the 'Submit' button is not working. When the user enters all the information and clicks the 'Submit' button, it throws an error.

Low Priority & Low Severity: Spelling mistakes; a page taking more time to load than usual.

Low Priority & High Severity: Browser compatibility issues.

These are only examples; in real projects the whole team decides these two fields as per the application and business flows.
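The rule that high-Priority bugs are fixed before high-Severity ones can be sketched as a sort key over the bug backlog. The ranking and the sample bugs below are illustrative:

```python
# Triage sketch: order the backlog by Priority first, Severity second,
# matching the rule stated above.

RANK = {"High": 0, "Medium": 1, "Low": 2}

def triage(bugs):
    """Sort bugs by priority, then severity (most urgent first)."""
    return sorted(bugs, key=lambda b: (RANK[b["priority"]], RANK[b["severity"]]))

bugs = [
    {"id": 1, "summary": "Browser compatibility issue", "priority": "Low",  "severity": "High"},
    {"id": 2, "summary": "Submit button throws error",  "priority": "High", "severity": "High"},
    {"id": 3, "summary": "Company logo is wrong",       "priority": "High", "severity": "Low"},
]
```

With this ordering, the Submit-button bug (High/High) is fixed first, the logo (High Priority, Low Severity) second, and the compatibility issue (Low Priority, High Severity) last.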

Stubs and Drivers

Stubs and drivers are both dummy modules, created only for test purposes.

Stubs are used in the top-down testing approach, when the major module is ready to test but the submodules it calls are not ready yet. In simple language, stubs are the "called" programs: dummy pieces of code that stand in for the submodules so that the major module's functionality can be tested.

For example, suppose you have three different modules: Login, Home and User. Suppose the Login module is ready for test, but the two submodules Home and User, which are called by the Login module, are not ready yet. At this point, we write dummy code which simulates the called methods of Home and User. These dummy pieces of code are the stubs.

Drivers are the "calling" programs. Drivers are used in the bottom-up testing approach: dummy code used when the submodules are ready but the main module is not.

Taking the same example: suppose this time the User and Home modules are ready, but the Login module is not ready to test. Since Home and User are called by the Login module, we write a dummy piece of code which simulates the Login module calling them. This dummy code is called a driver.
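The Login/Home/User example can be sketched in code. Everything below is dummy placeholder logic, which is exactly what stubs and drivers are; the function names and return values are invented:

```python
# Stub: top-down testing. Login is ready; Home is not, so a stub
# stands in for Home and returns a canned value when Login calls it.

def home_stub(user):
    return f"home page for {user}"          # canned response, no real logic

def login(user, password, home_module=home_stub):
    """The real module under test; calls the (stubbed) Home submodule."""
    if password != "secret":
        return "login failed"
    return home_module(user)

# Driver: bottom-up testing. Home is ready; Login is not, so a driver
# simulates Login calling Home with the inputs Login would supply.

def home(user):
    return f"home page for {user}"          # the real submodule under test

def login_driver():
    """Dummy 'calling' program standing in for the unfinished Login module."""
    return home("alice")
```

The stub lets us test Login's logic (wrong password vs. success) before Home exists; the driver lets us exercise Home before Login exists.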
24/05/2023

Difference Between Test Strategy and Test Plan


A Test Plan can be defined as a document that defines the
1. scope,
2. objective, and
3. approach

for testing the software application. The Test Plan is both a term and a deliverable.

The Test Plan is a document that lists all the activities in a QA project, schedules them, and defines the scope of the project, roles & responsibilities, risks, entry & exit criteria, test objectives, and anything else you can think of.

The Test Plan is designed based on the requirements, and it is kept up to date: for example, if one of the testers assigned in the plan is replaced by another for some reason, the Test Plan gets updated.

The Test Plan provides complete information about the testing tasks related to a software project: scope of the testing, types of testing, objectives, test methodology, testing effort, risks & contingencies, release criteria, test deliverables, etc. It keeps track of the tests that will be run on the system after coding.

Test Strategy

A test strategy is a set of guidelines that explains test design and determines how testing needs to be done.
Components of a test strategy include objectives and scope, documentation formats, test processes, team reporting structure, client communication strategy, etc.
A test strategy is carried out by the project manager. It says what type of technique to follow and which module to test.
A test strategy describes the general approach.
A test strategy cannot be changed: it is a long-term plan of action. You can abstract information that is not project-specific and put it into the test approach.
In smaller projects, the test strategy is often found as a section of a test plan.
It is set at the organization level and can be used by multiple projects.

Agile Model

 On day 1 the customer comes up with an epic, i.e. a rough requirement for the complete product, and from it a detailed product backlog is derived.
 Product Backlog
  o It contains story cards for all the features / the complete product.
  o The stories are roughly prioritized to decide which stories are developed in which sprint.
 A scrum team is formed, containing:
  o Scrum Master
  o Developers
  o Test Engineers
  o Shared resources (Architects, Business Analysts, Database/Network admins, Product Owner)
 We create a sprint backlog: a few story cards are pulled from the product backlog to be developed in the 1st sprint, and that set is called the sprint backlog.
 We do sprint planning, wherein we plan how all the stories in the sprint backlog must be developed and tested. Here we:
  o prioritize the features to be developed,
  o allocate them to engineers,
  o get each story estimated; the estimate is called a story point.
 The development team starts its low-level design and coding; the testing team starts writing test cases.
 Within 5-8 days of the sprint, the development team gives a build and the testing team starts executing the test cases.
 Every 2-4 days we keep getting new builds; typically we might get 4-8 cycles or builds for testing.
 Throughout the sprint, every day we do some basic tasks as a team:
  o Stand-up meeting at the beginning of the day.
  o The Scrum Master updates the burndown chart.
  o The story board is used to understand how many stories are completed and how many are left.
 Once we feel the product is meeting the acceptance criteria, we release the product to the customer for acceptance testing. If the product goes through acceptance testing, we move the product to production.
 In the retrospective (anybody and everybody can be there), we list all the mistakes and all the good practices followed.
 We prepare the sprint backlog for the second sprint and conduct sprint planning. While planning we refer to the old retrospect document, and we prepare the plan in such a way that old mistakes are not repeated and all the good activities are adopted once again.
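The burndown chart the Scrum Master updates can be sketched as a simple running total of remaining story points. The numbers used in the example are invented for illustration:

```python
# Burndown-chart sketch: remaining story points at the end of each day
# = committed points minus points completed so far.

def burndown(committed_points, completed_per_day):
    """Return the remaining story points at the end of each day."""
    remaining, chart = committed_points, []
    for done in completed_per_day:
        remaining -= done
        chart.append(remaining)
    return chart
```

For a sprint committed at 20 points where the team completes 3, 5, 0 and 4 points on successive days, the chart reads 17, 12, 12, 8; a flat segment (day 3 here) is exactly what the daily stand-up is meant to surface.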

08/06/2023
6 Practical Ways to Improve Software Testing.
Determine the acceptable level of product quality.
State how to achieve this objective through QA testing.
Address the customer's needs and expectations of the product.

Plan the Testing Process


A test policy is a management-level document that defines the test principles and objectives adopted by the organisation.

A test strategy is a product-specific document prepared from the business requirements specification. Mostly project managers or business analysts prepare this document.

A test plan provides the details of what, when, how and why to perform the tests.

Test cases provide the set of inputs, pre-conditions, expected results and post-conditions used to achieve a specific objective.
Adopt a Shift left approach

Creating a test strategy early during product development to detect and resolve bugs.
Reviewing and analysing customer requirements at the beginning of the software cycle.
Performing smaller tests across the entire SDLC for immediate and continued product validation.
Focusing on preventing product issues rather than reacting to them.

Implement user acceptance Testing


User acceptance (or end-user) testing was traditionally performed in the final stages of product development. This is no longer sufficient. User acceptance testing (UAT) enables product companies to determine the market readiness of their product and measure its performance when used by customers.
For implementing effective UAT, QA professionals create "user personas" to identify when and where to look for defects. Alpha and beta testing are among the common types of UAT. Alpha tests are executed in the internal development environment using internal users. Beta testing is performed later, in the customer's environment, to check that the product is ready for use.

Optimise the Automation Testing

Automation testing is one of the best ways to improve software testing and deliver high-quality products.

Automation testing can reduce human effort, save time and minimise human error. It applies to a variety of testing techniques, including cross-browser, regression, load and stress, and performance testing. Moreover, automation testing is easy to implement in any Agile and DevOps environment.

Ensure clarity in QA Reporting


Provide stepwise instructions on how to reproduce the bug.
Provide a plausible solution to the identified problem, or describe the expected behaviour of the product feature.
Define the problem clearly for the developers to understand and address the failure.
Provide sufficient screenshots of the software to highlight the problem.

Test Maturity Model (TMM)

TMM is a framework that assesses the maturity of an organization’s testing processes and
practices. It provides a roadmap for improving testing capabilities and achieving higher levels of
maturity.

Different stages within the Test Maturity Model:

🌀 Initial Level: At the initial level, the testing processes and practices are ad-hoc,
undocumented, and unstructured. Testers work in isolation without much collaboration or
coordination.

🌀 Repeatable Level: At the repeatable level, basic testing processes and practices are
established and documented. There is some level of consistency in test planning, execution, and
reporting. Testing efforts are more organized, and basic test management practices are in place.

🌀 Defined Level: At the defined level, the organization has well-defined and documented
testing processes. Testers follow established standards and guidelines for test planning,
execution, and reporting.

🌀 Managed Level: At the managed level, testing processes are actively managed and
monitored. Test metrics and key performance indicators (KPIs) are tracked and analysed to
measure and control the testing activities.

🌀 Optimizing Level: At the optimizing level, the testing processes are continuously improved
and optimized based on lessons learned and industry best practices. The organization fosters a
culture of innovation, research, and knowledge sharing.
What are the different types of testing environments?

1. The Dev environment – This is where apps are deployed and unit tested by the Dev team.

2. Test/QA environment – In this environment, versioned QA builds are deployed, followed by testers executing the tests and reporting results to the Dev team.

3. Staging environment – This validates the application as it approaches the production stage, to ensure the app will perform well post-deployment.

4. Production environment – This is the live environment where real users use the app.

What comes first, UAT or staging?


UAT comes first in a software development life cycle, followed by staging.

What is a Test Environment?


Every application developed needs to be validated to ensure it performs as per the end user's expectations. Test environments are platforms that help design and run multiple test cases on the application, along with the associated hardware and network configurations. The environment consists of all the resources required to execute the tests, such as the OS, servers, drivers, and so on.

Basically, a test environment is a setup that brings together hardware, software, data, and a combination of configurations to perform testing. It is configured according to the needs of the software being tested, to ensure it performs well in all conditions.

Test environments are not one-size-fits-all: it is the component under test that dictates the environment setup. The main aim of setting up a test environment is to enable QA teams to validate the application and find underlying bugs, so they can be fixed to prevent any negative user experience.
Why Do You Need a Testing Environment?
A well-designed test environment is essential to ensure the investment in creating robust test
cases pays off. A test environment enables testers to have comprehensive feedback about the
application quality. In other words, a test environment provides teams with the necessary
setup to run the test cases.

A test environment further helps in providing a dedicated environment to isolate the code and
verify the application’s behaviour. This ensures that no other activities that can influence the
output of the tests are running on the server.

Moreover, a test environment can replicate the production environment, which is crucial for
being confident about the testing outcomes. The testing engineer needs to ensure that the
application behaves the same way in the test environment as in the production environment.

Best Practices for an Effective Test Environment

Creating a software testing environment is essential for any organization that wants to ensure
the quality of its software applications. This helps testers run various tests to identify and fix
defects early in development.

Setting up a proper test environment needs experience, resources, and effort. Here are a few
best practices for setting up an effective software testing environment:

Define the testing goals. It should be clear what the testers want to achieve with testing.
Identify what types of testing would be required. Some of the critical kinds of testing are Unit
testing, Integration testing, Acceptance testing, Regression testing.
Determine the scope of software testing. It should be clear which parts of the software will be
tested.
Create test cases while keeping in mind the requirements and features.
The test environment should feature a comprehensive software, hardware, and network
configuration setup.
Ensure the testing environment is secure.
The environment should be scalable to incorporate the increasing volumes of user traffic and
app data.
Ensure the code changes are validated periodically before the updates are pushed to
production.
Ensure the test environment is updated and includes the latest software and hardware
configurations.

What is a Staging Environment?


Staging environments consist of software, hardware, and configuration similar to the production environment. It is through these similarities that testers can mimic the real-world production environment.

Staging environments are replicas of the production environment, imitating it as closely as possible to ensure application quality. The purpose of setting up a staging environment is to validate the application as it approaches the production stage, to ensure the app will perform well post-deployment.

Simply put, it is a stage where the Dev and QA teams can perform various tests on the software and identify its best version. It ensures that users are always provided with the best software experience.

Environment changes: is it QA to UAT, or to staging?

If it is only a promotion between environments with no changes, regression will not be done.
If there is any configuration change, such as the DB, then yes, regression is to be done.

Regression Testing
Regression means the return of a bug.
Regression testing is performed to find regressions in the system after making any changes to the product.
If a piece of code of a software product is modified, testing needs to be performed to ensure that it works as specified and that it has not negatively impacted any functionality it offered previously.
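Selecting which tests to re-run after a change can be sketched with a dependency map from tests to the modules they exercise. The map below is illustrative; many teams simply re-run the full regression suite instead:

```python
# Regression-selection sketch: given which modules changed, re-run the
# tests that depend on them.

DEPENDS_ON = {
    "test_checkout": {"cart", "payment"},
    "test_add_to_cart": {"cart"},
    "test_profile_page": {"account"},
}

def regression_tests(changed_modules):
    """Return the tests whose dependencies intersect the changed modules."""
    changed = set(changed_modules)
    return sorted(t for t, deps in DEPENDS_ON.items() if deps & changed)
```

A change to the cart module would select both the add-to-cart and checkout tests, while a change to an unmapped module selects nothing (which is why a core smoke set is usually run regardless).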

When to Perform Regression Testing?


Change in requirements and code
New feature is added to the software
Defect fixing

Why is Regression Testing important?


A software change can cause existing functionality to break.
A change to any software component could impact dependent components.
It is commonly observed that a software fix can cause other bugs.
All this affects the quality and reliability of the system.

Manual testing: why it still holds the upper hand over automation

Let's take a scenario where you want to test a complex functionality within a few days. You have two choices: either you build code which tests it automatically, or you do it manually. The more complex the scenario, the more time it takes to build the automation code. What would you prefer: to spend that time building code, or to start testing manually? The answer is straightforward: manual testing.

Imagine a software product for which all tests are automated; you have run the suite and everything has passed. You release it, and on the first day it is broken because a user entered something for fun which was never identified as a scenario. Here comes a manual tester, who can test not only what is asked but also put their creative mind to work, often providing out-of-the-box coverage and ensuring better quality.

When your test suite is unique each and every time, it is better to choose manual testing. Automation testing is only advantageous if your testing involves a lot of regression testing; otherwise it would be uneconomical rather than cost-effective.

If your software or product is constantly changing, a manual tester is your ally rather than automation. Manual testing is adaptable and can be utilized in a range of testing scenarios. Testers are able to modify test cases, as opposed to automated tests, which cannot easily be modified to meet changing software needs.
Finally, automation testing certainly exhibits higher accuracy for algorithm-based test cases, but it does not meet expectations when testing usability, functionality, aesthetics, UX, or behaviour. The reason is that automation still lacks cognitive abilities and cannot exhibit human-like intelligence in decision-making. This is where manual testing stands strong and beats automation.

1. Humans can handle complex test cases more efficiently.

2. Dedicated manual testers have a much better perspective of how a piece of software feels to the user.

3. A machine tests only what it is asked to, while a human tests not only what they are asked but also what they are not asked.

4. If the testing is not regression testing, automation will be more time-consuming than manual testing.

5. Manual testing is adaptable and can be utilised in a range of testing scenarios; testers are able to modify test cases, as opposed to automated testing.
