Functional Testing
Assume your email does not get sent. Does it make any sense to test other
functionalities such as Drafts, Deleted messages, or Archives? 🤔 No. If even
the basic functionality of sending an email is not working, there is no point
in testing any further functionality.
The main focus of smoke testing is to test the critical areas, not the whole
application.
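The idea can be sketched in code. This is a minimal illustration, not a real framework; `EmailClient` and its `send` method are hypothetical names invented for the example, standing in for the application under test.

```python
# Minimal smoke-test sketch. EmailClient is a hypothetical stand-in for the
# application under test; a real smoke suite would drive the actual product.
class EmailClient:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        # Critical functionality: deliver a message.
        if not to:
            return False
        self.sent.append((to, body))
        return True

def smoke_test(client):
    """Exercise only the critical path: can an email be sent at all?
    If this fails, testing drafts, archives, etc. is pointless."""
    return client.send("user@example.com", "hello")

result = smoke_test(EmailClient())
print("smoke passed" if result else "build rejected: skip deeper tests")
```

If the smoke test fails, the build is rejected without running the rest of the suite, which is exactly the time-saving described above.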
Sanity Testing
1. Sanity testing helps identify issues quickly so they can be reported
immediately.
2. If any defects are found during sanity testing, the build gets rejected,
saving time and effort.
3. Executing sanity tests saves unnecessary testing effort and time because
they focus on only one or a few functional areas.
Scripted: Smoke testing is usually documented and scripted; sanity testing is
not documented and is unscripted.
Analogy: Smoke testing is like a general health check-up; sanity testing is
like a specialized health check-up.
Both smoke testing and sanity testing are performed to avoid wasting time and
effort by quickly determining whether an application is too flawed to merit any
rigorous testing.
Sanity testing is performed to check whether the code changes are working
properly. Its general focus is to validate the high-level functionalities, not
to exercise every feature of the application.
Which comes first, smoke or sanity?
Smoke testing is performed first, followed by sanity testing. Smoke tests are
done at the early stages of the SDLC to check the basic functionalities of the
software, and sanity tests are done at later stages to test the high-level
functionalities.
Key Points
Sanity testing is also called tester acceptance testing.
Smoke testing performed on a particular build is also known as a build
verification test.
Both smoke and sanity tests can be executed manually or using an automation
testing tool.
When automated tools are used, the tests are often initiated by the same
process that generates the build itself.
Depending on the needs of testing, you may have to execute both sanity and
smoke tests on a software build. In such cases, first execute the smoke tests
and then proceed with sanity testing. In industry, test cases for sanity
testing are commonly combined with those for smoke testing to speed up test
execution.
Conclusion
Smoke and sanity testing are both significant in the development of a project.
A smoke test confirms whether the basic functionalities of a particular build
are working; sanity testing checks whether the build is good to go for further
testing stages.
The common thing among both Sanity and Smoke tests is that they are
employed to avoid wasting time and effort by quickly checking whether or not
an application is fit for more rigorous testing.
1. Requirements:
The very first phase is Requirements, and it is the input for testing. This is
where software testing starts. The requirements are discussed with the entire
team.
2. System Study:
The requirements given in the first phase are studied in depth, and the
process starts with writing the Test Plan.
The Test Plan is a document that drives all the testing activities of the
project. All future testing activities are planned and documented in it; this
document is known as the Test Plan.
• Start and End Dates of writing Test Cases and Execution planned dates.
• Scope of Testing
4. Write Test Cases:
In this phase, test cases are created. They are reviewed, all review comments
are addressed, and once the test cases are approved, they are stored in a test
case repository.
5. Traceability Matrix:
A traceability matrix is maintained to make sure every requirement has at
least one test case. If a requirement is missed and no test cases are created
for it, that particular module/feature will not be tested.
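In its simplest form, a traceability matrix is a requirement-to-test-case mapping. The sketch below (with made-up requirement and test-case IDs) shows how a coverage gap is spotted:

```python
# Hypothetical traceability matrix: each requirement ID maps to the test
# cases that cover it. An empty list means a coverage gap.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # missed requirement: no test case written yet
}

def uncovered(matrix):
    """Return requirements that do not have at least one test case."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered(traceability))  # the modules/features that would go untested
```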
6. Defect Tracking:
A bug/defect found by the QA team is sent to the development team. The QA team
should follow up until it is fixed; once it is really fixed, retesting (and
regression testing, based on need) is performed, and the defect is closed.
• List of Bugs
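The follow-up flow (found, fixed, retested, closed) can be sketched as a small state machine. The state names below are typical ones, not tied to any particular defect tracker:

```python
# Typical defect life-cycle transitions (illustrative, tracker-agnostic).
TRANSITIONS = {
    "new": {"assigned"},
    "assigned": {"fixed", "rejected"},
    "fixed": {"retest"},                 # QA retests (and regression-tests) the fix
    "retest": {"closed", "reopened"},
    "reopened": {"assigned"},
}

def can_move(current, target):
    """True if a defect may move directly from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

print(can_move("retest", "closed"))   # a verified fix may be closed
print(can_move("new", "closed"))      # a defect cannot skip verification
```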
8. Retrospect Meeting:
Why does the Dev team say it is not a defect, or that it is an invalid defect?
Looking at the defect report, the developer might say it is not a defect for
the following reasons:
Misunderstanding of the requirement. For example, a developer
understands a feature as a link, whereas a tester understands it as a
button.
When the build or software is wrongly installed. For example, when
there’s a mismatch in the build steps.
Referring to the old requirement. For example, when a feature is
enhanced, the developer develops it with the new SRD (System
Requirements Document), whereas the tester tests it using the old SRD.
Adding extra features. For example, the developer thinks it will be useful
to add extra features, but the tester is using the requirement document
that lacks the new features.
2. Duplicate defect
While testing a common feature, testers might find the same defect twice, so
it gets duplicated. For example, a link present on the home page is also
present on another page because the navigation is shared.
Marking duplicates reduces the defect count. For example, if the same feature
is displayed on two or more pages, fixing it on one page may fix it on all
pages.
Why might developers say that a defect cannot be fixed, or won't be fixed?
A minor defect is found at the end of the release and the developer does not
have sufficient time; for example, a spelling mistake on a link.
The customer is planning to make a lot of requirement changes.
B – Beta Testing
Beta testing is a type of acceptance test performed at an external site, other
than the developer's test environment, by people outside the development
organization. It is the final test completed before releasing the software to
the market, usually to a limited number of end users.
SEVERITY & PRIORITY OF A BUG
Severity and priority are mandatory fields of a bug report because these two
fields help decide how quickly a bug should be fixed.
What is Severity?
Severity is the impact of the bug on the customer's business; it tells how
severe the bug is.
What is Priority?
Priority defines how soon the defect should be fixed. It defines the importance
of the bug.
High Priority & Low Severity: a misspelled company or brand logo. It won't
cause a lot of damage, but it needs to be fixed as soon as possible.
High Priority & High Severity: In an Ecommerce website, 'Submit' button is not
working. When user enters all the information and clicks on 'Submit' button, it
throws error.
Low Priority & Low Severity: Spelling mistakes, a page is taking more time to
load than usual.
These are only examples; in real projects, the whole team decides these two
fields as per the application and business flows.
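As a rough illustration of how a tracker might record and triage these two fields, here is a sketch. The field names and the ranking scheme are assumptions, since every team and tool defines its own:

```python
# Illustrative bug records pairing the examples above with the two fields.
bugs = [
    {"summary": "Company logo misspelled on home page",   "severity": "low",  "priority": "high"},
    {"summary": "'Submit' button throws error at checkout", "severity": "high", "priority": "high"},
    {"summary": "Spelling mistake on an inner page",       "severity": "low",  "priority": "low"},
]

RANK = {"high": 0, "medium": 1, "low": 2}  # assumed three-level scale

def triage(bug_list):
    """Order bugs for fixing: priority first (how soon), then severity (impact)."""
    return sorted(bug_list, key=lambda b: (RANK[b["priority"]], RANK[b["severity"]]))

for b in triage(bugs):
    print(b["priority"], "/", b["severity"], "-", b["summary"])
```

The high-priority, high-severity 'Submit' failure sorts first; the low/low spelling mistake sorts last.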
Stubs and drivers both are dummy modules and are only created for test
purposes.
Stubs are used in the top-down testing approach, when the major module is
ready to test but the sub-modules are not ready yet. In simple language, stubs
are the "called" programs, which are called in to test the major module's
functionality.
For example, suppose you have three different modules: Login, Home, and User.
Suppose the Login module is ready for testing, but the two minor modules Home
and User, which are called by the Login module, are not ready yet.
At this point, we write dummy code that simulates the called methods of Home
and User. These dummy pieces of code are the stubs.
Drivers are the "calling" programs and are used in the bottom-up testing
approach. Drivers are dummy code used when the sub-modules are ready but the
main module is not.
Taking the same example: suppose this time the User and Home modules are
ready, but the Login module is not ready to test. Since Home and User receive
values from the Login module, we write a dummy piece of code that simulates
the Login module. This dummy code is called a driver.
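The Login/Home/User example can be sketched as follows. All function names and return values are invented for illustration:

```python
# --- Stubs (top-down): Login is ready; Home and User are not, so we replace
# the "called" sub-modules with dummies that return canned responses. ---
def home_stub(user):
    return f"home page for {user}"        # dummy, no real logic

def user_stub(user):
    return f"profile for {user}"          # dummy, no real logic

def login(username, password):
    """The ready major module under test; it calls the stubs."""
    if password != "secret":
        return None
    return home_stub(username), user_stub(username)

# --- Driver (bottom-up): Home is ready; Login is not, so a dummy "calling"
# program (the driver) simulates a successful login and exercises Home. ---
def home(user):                           # pretend this is the finished module
    return f"home page for {user}"

def login_driver():
    username = "alice"                    # value the real Login would supply
    return home(username)
```

In both cases the dummy code exists only so the ready module can be tested in isolation, and it is thrown away once the real counterpart is available.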
24/05/2023
to test the software application. The Test Plan is a term and a deliverable.
The Test Plan is a document that lists all the activities in a QA project, schedules them, defines the
scope of the project, roles & responsibilities, risks, entry & exit criteria, test objective, and anything
else that you can think of.
The Test Plan will be designed based on the requirements. While assigning work to the test engineers,
due to some reasons one of the testers gets replaced by another one. Here, the Test Plan gets updated.
Test Plan is a document that provides complete information about testing tasks related to a Software
Project. It provides details like Scope of the testing, Types of testing, Objectives, Test Methodology,
Testing Effort, Risks & Contingencies, Release Criteria, Test Deliverables, etc. It keeps track of
possible tests that will be run on the system after coding.
Test Strategy
• A test strategy is a set of guidelines that explains test design and
determines how testing needs to be done.
• Components of a test strategy include objectives and scope, documentation
formats, test processes, team reporting structure, client communication
strategy, etc.
• A test strategy is carried out by the project manager. It says what type of
technique to follow and which modules to test.
• A test strategy narrates the general approaches.
• A test strategy cannot be changed once set.
• It is a long-term plan of action. You can abstract information that is not
project-specific and put it into the test approach.
• In smaller projects, the test strategy is often found as a section of the
test plan.
• It is set at the organization level and can be used by multiple projects.
Agile Model
On Day 1, the customer comes up with an epic, a rough requirement of the
complete product, and from it a detailed product backlog is derived.
Product Backlog
It contains story cards for all the features / the complete product.
Roughly, they prioritize which stories will be developed in which sprint.
They form a scrum team. The scrum team contains:
Scrum master
Developers
Test engineers
Shared resources (architects, business analysts, database/network admins,
product owner)
We create the sprint backlog: they pull a few of the story cards from the
product backlog that they want to develop in the 1st sprint, and that is
called the sprint backlog.
We do sprint planning, wherein we plan how all the stories in the sprint
backlog must be developed and tested.
Here we:
o prioritize features to be developed,
o allocate them to engineers,
o get each story estimated; that estimate is called a story point.
The development team starts its low-level design and coding, and the testing
team starts writing test cases.
Within 5-8 days of the sprint, the development team gives a build and the
testing team starts executing the test cases.
Every 2-4 days we keep getting new builds; typically we might get 4-8 cycles
or builds for testing.
Throughout the sprint, every day we do some basic tasks as a team:
o Stand-up meeting at the beginning of the day
o The Scrum Master updates the burn-down chart
o We use the story board to see how many stories are completed and how many
are left
Once we feel the product meets the acceptance criteria, we release it to the
customer for acceptance testing. If the product goes through acceptance
testing, we move it to production.
We then hold a retrospect meeting (anybody and everybody can be there), where
we list all the mistakes made and all the good practices followed.
We prepare the sprint backlog for the second sprint and conduct sprint
planning. While planning, we refer to the old retrospect document and prepare
the plan in such a way that old mistakes are not repeated and all the good
activities are adopted once again.
08/06/2023
6 Practical Ways to Improve Software Testing.
Determine the acceptable level of product quality.
State how to achieve this objective through QA testing.
Address the customer's needs and expectations of the product.
The Test Strategy is a product-specific document prepared from the business
requirements specification. Mostly, project managers or business analysts
prepare this document.
The Test Plan provides the details of what, when, how, and why to perform the tests.
Test cases provide the set of inputs, pre-conditions, expected results, and
post-conditions used to achieve a specific objective.
Adopt a Shift left approach
Creating a test strategy early during product development to detect and resolve bugs.
Reviewing and analysing customer requirements at the beginning of the software cycle.
Performing smaller tests across the entire SDLC for immediate and continued
product validation.
Focussing on preventing any product issues rather than reacting to them.
Automation Testing is the best way to improve software testing and deliver high-quality
products.
Automation testing can reduce human effort, save time, and minimise human
errors. It applies to a variety of testing techniques, including cross-browser,
regression, load and stress, and performance testing. Moreover, automation
testing is easy to implement in any Agile and DevOps environment.
TMM is a framework that assesses the maturity of an organization’s testing processes and
practices. It provides a roadmap for improving testing capabilities and achieving higher levels of
maturity.
🌀 Initial Level: At the initial level, the testing processes and practices are ad-hoc,
undocumented, and unstructured. Testers work in isolation without much collaboration or
coordination.
🌀 Repeatable Level: At the repeatable level, basic testing processes and practices are
established and documented. There is some level of consistency in test planning, execution, and
reporting. Testing efforts are more organized, and basic test management practices are in place.
🌀 Defined Level: At the defined level, the organization has well-defined and documented
testing processes. Testers follow established standards and guidelines for test planning,
execution, and reporting.
🌀 Managed Level: At the managed level, testing processes are actively managed and
monitored. Test metrics and key performance indicators (KPIs) are tracked and analysed to
measure and control the testing activities.
🌀 Optimizing Level: At the optimizing level, the testing processes are continuously improved
and optimized based on lessons learned and industry best practices. The organization fosters a
culture of innovation, research, and knowledge sharing.
What are the different types of testing environments?
1. The Dev environment – This is where apps are deployed and unit tested by the Dev
team.
Basically, a test environment is a setup that brings together hardware, software, data, and a
combination of configurations to perform testing. These are configured according to the
needs of the software being tested to ensure it performs well in all conditions.
Test environments are not one-size-fits-all; the software under test dictates
the environment setup. The main aim of setting up a test environment is to
enable QA teams to validate the application and find underlying bugs, which
are then fixed to prevent any negative user experience.
Why Do You Need a Testing Environment?
A well-designed test environment is essential to ensure the investment in creating robust test
cases pays off. A test environment enables testers to have comprehensive feedback about the
application quality. In other words, a test environment provides teams with the necessary
setup to run the test cases.
A test environment further helps in providing a dedicated environment to isolate the code and
verify the application’s behaviour. This ensures that no other activities that can influence the
output of the tests are running on the server.
Moreover, a test environment can replicate the production environment, which is crucial for
being confident about the testing outcomes. The testing engineer needs to ensure that the
application behaves the same way in the test environment as in the production environment.
Creating a software testing environment is essential for any organization that wants to ensure
the quality of its software applications. This helps testers run various tests to identify and fix
defects early in development.
Setting up a proper test environment needs experience, resources, and effort. Here are a few
best practices for setting up an effective software testing environment:
• Define the testing goals. It should be clear what the testers want to
achieve with testing.
• Identify what types of testing will be required. Some of the critical kinds
are unit testing, integration testing, acceptance testing, and regression
testing.
• Determine the scope of software testing. It should be clear which parts of
the software will be tested.
• Create test cases while keeping the requirements and features in mind.
• The test environment should feature a comprehensive software, hardware, and
network configuration setup.
• Ensure the testing environment is secure.
• The environment should be scalable to accommodate increasing volumes of
user traffic and app data.
• Ensure code changes are validated periodically before updates are pushed to
production.
• Ensure the test environment is up to date and includes the latest software
and hardware configurations.
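To make the checklist concrete, a test-environment definition might be captured as structured configuration like the sketch below. Every key and value here is an invented example, not a standard schema:

```python
# Hypothetical test-environment configuration covering the software,
# network, and data dimensions named in the checklist above.
test_env = {
    "name": "qa-regression",
    "software": {"os": "Ubuntu 22.04", "browser": "Chrome 114", "db": "PostgreSQL 15"},
    "network": {"isolated": True, "https_only": True},   # secure, dedicated setup
    "test_data": "anonymized-production-snapshot",
}

REQUIRED_KEYS = {"software", "network", "test_data"}

def validate(env):
    """Reject configurations missing the basic dimensions from the checklist."""
    return REQUIRED_KEYS.issubset(env)

print(validate(test_env))
```

Keeping such a definition in version control makes it easy to audit the environment against the checklist before each test cycle.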
A staging environment is a replica of the production environment. It imitates
production as closely as possible to ensure application quality. The purpose
of setting up a staging environment is to validate the application as it
approaches the production stage, to ensure the app will perform well
post-deployment.
Simply put, it is a stage where Dev and QA teams can perform various tests on the software
and identify its best version. It ensures that the users are always provided with the best
software experience.
Regression Testing
Regression means the return of a bug.
Regression testing is performed to find regressions in the system after any
change is made to the product.
If a piece of code in the software is modified, testing needs to be performed
to ensure that it works as specified and that it has not negatively impacted
any functionality the software offered previously.
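A minimal regression check just re-runs expectations that were captured before the change. The `apply_discount` function and its baseline values below are invented for illustration:

```python
# Sketch of a regression check: after modifying apply_discount, re-run the
# baseline expectations recorded when the function last worked correctly.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# (inputs, expected output) pairs captured before the code change:
baseline = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
]

def run_regression():
    """True if previously working behaviour is unchanged."""
    return all(apply_discount(*args) == expected for args, expected in baseline)

print(run_regression())
```

If any baseline pair now fails, the change has introduced a regression even though the new behaviour may pass its own tests.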
Let's take a scenario where you want to test a complex functionality within a
few days. You have two choices: either you build code that tests
automatically, or you do it manually. The more complex the scenario, the more
time it takes to build the automation code. What would you prefer: spending
time building code, or starting to test manually? The answer is
straightforward: manual testing.
Imagine a piece of software for which all tests are automated; you have run
the suite and everything passed. You release it, and on the first day it
breaks because a user entered something for fun that was never identified as a
scenario. Here comes a manual tester, who can test not only what they are
asked to but also puts their creative mind to work, often providing
out-of-the-box coverage and ensuring better quality is delivered.
When your test suite is unique each and every time, it is better to choose
manual testing. Automation testing is only advantageous if your testing
involves a lot of regression testing; otherwise it is uneconomical rather than
cost-effective.
If your software or product is constantly changing, a manual tester is your
ally rather than automation. Manual testing is adaptable and can be utilised
in a range of testing scenarios. Testers are able to modify test cases, as
opposed to automated tests, which cannot easily be modified to meet changing
software needs.
Finally, automation testing certainly exhibits higher accuracy for
algorithm-based test cases, but it falls short when testing usability,
functionality, aesthetics, UX, or behavior. The reason is that automation
still lacks cognitive abilities and can't replicate human-like intelligence in
decision-making. This is where manual testing stands strong and beats
automation.
2. Dedicated manual testers have a much better perspective of how a piece of
software feels to the user.
3. A machine tests only what it is asked to, while a human tests not only
what they are asked to but also what they are not asked to.
4. If the testing is not regression testing, then automation will be more
time-consuming than manual testing.
5. Manual testing is adaptable and can be utilised in a range of testing
scenarios. Testers are able to modify test cases, as opposed to automated
tests.