
Chapter 2: Software Testing Fundamentals

1. Definition & Objectives of Testing


2. Role of testing and its effect on quality
3. Causes of software failure: Definitions of Error, Bug, Fault, Defect and Failure
4. Seven Testing Principles
5. Software Testing Life cycle (STLC)
6. Validation & Verification Concepts
7. V Model and W Model
8. Agile Testing- Test Driven Software Development
9. Levels of Testing:
10. Unit (Component) Testing
11. Integration Testing
12. System Testing
13. User Acceptance Testing (UAT)
14. Test Types
15. Functional testing (Black-box)
16. Structural testing (White-box)
17. Testing related to changes - Confirmation (Re-testing) and Regression Testing
18. Non-Functional Testing Types:
19. Performance (Load & Stress)
20. Usability
21. Maintainability
22. Portability
23. Localization & Internationalization
24. Concept of Smoke testing and Sanity Testing

Role of testing and its effect on quality

Software testing is the process of checking whether the software is defect-free. It is the process of verification and validation of a software service or application: checking whether it meets the user requirements and whether everything has been implemented as per the specified characteristics. Software testing plays a vital role in the process of developing high-quality software. Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive and dangerous. Therefore, there is a need to check everything that we produce.

Testing plays a vital role in software development. In every company, testing is an important and valuable stage in the Software Development Life Cycle, although the techniques used for software testing differ from one company to another. It is the stage where bugs in the software are found so that the software can be made as bug-free as possible.
QA & Testing adds value by having a great impact on your brand's reputation. Simply put,
reliable Quality Assurance processes help communicate to the customer that everything possible
is being done to ensure that the product is of high quality and will meet the defined project
requirements.
Testing forms an important part of a quality assurance (QA) journey, but an overall quality approach must encompass all project activities across the Software Development Lifecycle (SDLC), covering QA from inception to delivery and governance.
Quality control involves testing units and determining if they are within the specifications for the
final product. The purpose of the testing is to determine any needs for corrective actions in the
manufacturing process. Good quality control helps companies meet consumer demands for
better products.
Testing is a subset of QC. It is the process of executing a system in order to detect bugs in the product so that they get fixed. Testing is an integral part of QC, as it helps demonstrate that the product runs the way it is expected to and was designed for.

OBJECTIVES OF SOFTWARE TESTING


A. To Find and Prevent Defects: The foremost task of a tester is to find defects in the software and report them to the developers so that they can be rectified. A tester must design the best possible set of test cases so that the maximum number of defects can be uncovered. After all, testing shows the presence of defects.
B. Satisfies the SRS & BRS: Another objective of testing is to check whether the developed software satisfies the Software Requirement Specification and the Business Requirement Specification, because unless the software satisfies the user requirements, it is of no use to the customer, in spite of the best programming skills, design, and tools.
C. Writing High-Quality Test Cases: A test case is a set of conditions under which a tester will determine whether an application under test satisfies requirements or works correctly. The process of developing test cases can also help find problems in the requirements or design of an application. The more accurate the test cases, the better the testing process.
D. Software Reliability Estimation: Testing also helps to estimate the reliability of software. Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Reliability estimation helps to find the number of failures occurring in a specified amount of time, the mean life of the software, and the main causes of failure.
E. Minimum Cost and Effort: "Testing is too expensive" is a myth. There is a saying that we should pay less for testing and more for maintenance, but in reality, without proper testing the result may be an improperly designed piece of software that is expensive to handle, with a great loss of time and money.
F. Gain Customer Confidence: Software testing helps to gain the confidence of customers by providing them with a quality product.
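Objective D above can be illustrated with a short sketch. It assumes the common exponential failure model (constant failure rate); the observed failure counts and function names are illustrative, not from the text.

```python
import math

def mean_time_between_failures(failure_rate):
    """Mean life of the software: the reciprocal of a constant failure rate."""
    return 1.0 / failure_rate

def reliability(failure_rate, t):
    """R(t) = e^(-lambda*t): probability of failure-free operation
    for a period t under an exponential failure model."""
    return math.exp(-failure_rate * t)

# Suppose testing observed 4 failures over 1,000 hours of operation:
lam = 4 / 1000                                  # failures per hour
print(mean_time_between_failures(lam))          # 250.0-hour mean life
print(round(reliability(lam, 100), 2))          # 0.67 chance of 100 failure-free hours
```

A real reliability model would be fitted from much more failure data, but the idea is the same: testing supplies the failure observations from which these estimates are computed.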

Causes of software failure:


Definition of -Error, Bug, Fault , Defect and Failure

Quality Testing:
It is the assessment of the extent to which a test object meets given requirements.

Error: Errors are a part of our daily life. Humans make errors in their thoughts, in their actions, and in the products that result from those actions. Errors occur wherever humans are involved in taking actions and making decisions.

An error produces an incorrect state of the system, which can lead to failure.

Bug: An informal name for a fault, due to which a program fails to perform its intended function correctly.

Defect: A flaw in a component or system; "defect", "bug", and "fault" are used roughly interchangeably.

Fault: The adjudged cause of an error.

Failure: A failure is said to occur whenever the external behavior of a system does not conform to that prescribed in the system specification.
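The chain from error to failure can be made concrete with a small hypothetical example; the buggy `average` function below is invented for illustration.

```python
def average(values):
    # The programmer made a mistake (an ERROR) while writing this code;
    # the resulting wrong line is the FAULT (also called bug or defect):
    return sum(values) / 10        # should be: sum(values) / len(values)

# The fault only turns into a FAILURE when the code is executed and its
# external behavior deviates from the specification:
print(average([2, 4, 6]))          # specification says 4.0; prints 1.2
```

Note that a fault can sit in the code without ever causing a failure: if this function is only ever called with lists of exactly ten elements, the external behavior never deviates.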

Bug Life Cycle:

[Bug life cycle state diagram: New → Open → Assign → Test → Verified → Closed, with Rejected, Deferred, and Reopen as alternative states]
New: When the bug is posted for the first time, its state will be "New". This means that the bug is not yet approved.
Open: After a tester has posted a bug, the lead of the testers approves that the bug is genuine and changes the state to "Open".
Assign: Once the lead changes the state to "Open", he assigns the bug to the corresponding developer or developers' team. The state of the bug is now changed to "Assign".
Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "Test". It specifies that the bug has been fixed and is released to the testing team.
Deferred: A bug changed to the deferred state is expected to be fixed in a subsequent release. There are many possible reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
Rejected: If the developer feels that the bug is not genuine, he rejects the bug. The state of the bug is then changed to "Rejected".
Verified: Once the bug is fixed and the status is changed to "Test", the tester re-tests the bug. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "Verified".
Reopen: If the bug still exists even after it has been fixed by the developer, the tester changes the status to "Reopen". The bug traverses the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "Closed". This state means that the bug is fixed, tested, and approved.
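The states above can be sketched as a small state machine. The transition map below is an illustrative simplification of the cycle described in the text, not the workflow of any real bug tracker.

```python
# Allowed transitions between bug states, following the text above.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Assign"},
    "Assign":   {"Test", "Deferred"},
    "Test":     {"Verified", "Reopen"},
    "Reopen":   {"Assign"},
    "Verified": {"Closed"},
    "Deferred": {"Open"},
    "Rejected": set(),
    "Closed":   set(),
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.state = "New"          # every bug starts in the "New" state

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

# The happy path: the bug is approved, fixed, re-tested, and closed.
bug = Bug("Login button unresponsive")
for state in ("Open", "Assign", "Test", "Verified", "Closed"):
    bug.move_to(state)
print(bug.state)                    # Closed
```

Encoding the transitions as data makes the life cycle easy to enforce: an attempt to jump, say, from "Rejected" straight to "Open" raises an error instead of silently corrupting the bug's history.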

There are seven principles of Software Testing.


 Testing shows presence of defects.
 Exhaustive testing is impossible.
 Early testing.
 Defect clustering.
 Pesticide paradox.
 Testing is context dependent.
 Absence of error – fallacy.
1. Testing Shows Presence of Defects:
Testing shows the presence of defects in the software. The goal of testing is to make the software as bug-free as possible. Sufficient testing reduces the number of defects, but if testers are unable to find defects even after repeated regression testing, it does not mean that the software is bug-free.
Testing can show the presence of defects, but it cannot prove their absence.

2. Exhaustive Testing is Impossible:


What is Exhaustive Testing?
Testing all the functionalities using all valid and invalid inputs and preconditions is known as
Exhaustive testing.
Why it’s impossible to achieve Exhaustive Testing?
Assume we have to test an input field which accepts an age between 18 and 20, so we test the field using 18, 19, and 20. If the same input field accepts the range 18 to 100, then we have to test using inputs such as 18, 19, 20, 21, ..., 99, 100. This is a basic example; you may think that you could achieve it using an automation tool. But imagine the same field accepting some billion values. It is impossible to test all possible values due to release time constraints.

If we keep on testing all possible test conditions then the software execution time and costs will
rise. So instead of doing exhaustive testing, risks and priorities will be taken into consideration
while doing testing and estimating testing efforts.
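One common way of picking high-risk representatives instead of every value is boundary value analysis: testing at and around the edges of the valid range, where defects tend to hide. The sketch below reuses the 18-100 age field from the example; the validator itself is a hypothetical system under test.

```python
def boundary_values(low, high):
    """Classic boundary test inputs for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts_age(age):               # hypothetical system under test
    return 18 <= age <= 100

for value in boundary_values(18, 100):
    print(value, accepts_age(value))
# Six targeted inputs (17, 18, 19, 99, 100, 101) stand in for the
# 83 valid values plus the unbounded set of invalid ones.
```

This is exactly the risk-and-priority trade-off the principle describes: a handful of carefully chosen inputs in place of exhaustive enumeration.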

3. Early Testing:
Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing
reduces the cost of fixing defects.
Assume two scenarios, first one is you have identified an incorrect requirement in the
requirement gathering phase and the second one is you have identified a bug in the fully
developed functionality. It is cheaper to change the incorrect requirement compared to fixing the
fully developed functionality which is not working as intended.

4. Defect Clustering:
Defect Clustering in software testing means that a small module or functionality contains most of
the bugs or it has the most operational failures.

As per the Pareto Principle (80-20 Rule), 80% of issues come from 20% of the modules, and the remaining 20% of issues come from the remaining 80% of the modules. So we emphasize testing on the 20% of modules where we face 80% of the bugs.
5. Pesticide Paradox:
The pesticide paradox in software testing arises from repeating the same test cases again and again: eventually, the same test cases will no longer find new bugs. To overcome the pesticide paradox, it is necessary to review the test cases regularly and to add or update them to find more defects.
6. Testing is Context Dependent:
The testing approach depends on the context of the software we develop. We test software differently in different contexts. For example, an online banking application requires a different approach to testing compared to an e-commerce site.
7. Absence of Error – Fallacy:
Software that is 99% bug-free may still be unusable if the wrong requirements were incorporated into it and it does not address the business needs.

The software we build must not only be (nearly) bug-free but must also fulfill the business needs; otherwise, it becomes unusable software.

These are the seven principles of Software Testing that every professional tester should know.

What Is the Software Testing Life Cycle?


Let’s first understand the term life cycle before getting into all the details. A life cycle is the sequence of changes an entity goes through from one form to another. Many concrete and abstract entities go through a series of changes from start to finish.
When we talk about the software testing life cycle, the software is an entity. The software testing
life cycle is the process of executing different activities during testing.
These activities include checking the developed software to see if it meets specific requirements.
If there are any defects in the product, testers work with the development team. In some cases,
they have to contact the stakeholder to gain insight into different product specs. Validation and
verification of a product are also important processes of the STLC.

SDLC vs. STLC


The complete journey of a product from its start to becoming the final product is taken care of by
SDLC. Among the various phases of SDLC, testing is one of the most important. Software
testing is a part of SDLC. And this part has got its own life cycle—STLC. 
So how is SDLC different from STLC?
SDLC
 Focus on building a product
 A parent process
 Understanding user requirements and building a product that is helpful to users
 SDLC phases are completed before testing
 End goal is to deploy a high-quality product that users can use
STLC
 Focus on testing a product
 A child of SDLC process
 Understanding development requirements and ensuring the product is working as
expected
 STLC phases start after phases of SDLC are completed
 End goal is to find bugs in product and report to development team for bug fix
These are the basic differences between SDLC and STLC. Now, let’s understand STLC in depth. 
What Is the Role of STLC? 
Now that we have the idea of what the software testing life cycle is, let’s take a look at why it’s
essential. Even if a firm has the best programmers and developers, they are bound to make
mistakes. The main role of STLC is to find those mistakes and get them fixed. The main goal of
conducting an STLC is to maintain product quality.

Gone are the days when average testing was the trend. In today’s world, businesses need to
conduct detailed testing.

From planning and research to execution and maintenance, every phase plays a crucial role in
testing a product.

STLC is all about assuring the product's quality. Every application has different attributes such as reliability, functionality, and performance, and STLC aids in enhancing these attributes and facilitates the delivery of an ideal end product.

A high-quality product results in lower maintenance costs in the long run. The stability of an
application or software is a must to entice new users. Apart from that, consistently reliable
products also help keep existing clientele. For a product to stay in the realm of business, it’s
important to focus on each phase of the STLC.

Phases of Software Testing Life Cycle


Validating every module of software or application is a must to ensure product precision and
accuracy. Since software testing itself is an elaborate process, testers carry it out in phases.
Complexities can pop up if testing lacks organization. The complexities may include unresolved
bugs, undetected regression bugs, or in the worst case, a module that skipped testing because the
deadline got closer.

Each phase of the STLC has a specific goal and deliverables. It involves the initiation, execution,
and termination of the testing process.

Let’s take a look at different phases of the software testing life cycle in detail.

1. Requirement Analysis

Your valuable software testers have to view, study, and analyze the available specifications and
requirements. Certain requirements produce outcomes by feeding them with input data. These
requirements are testable requirements. Testers study both functional and non-functional
requirements. After that, they have to pick out testable requirements.

Activities in this phase include brainstorming for requirement analysis and identifying and
prioritizing test requirements. They also include picking out requirements for both automated and
manual testing. There are a few things you have to test even if they are not explicitly mentioned: a click on an active button should do something, and a text field for phone numbers shouldn't accept alphabetic input. These things are universal and should always be tested. But the requirement analysis phase is about learning the more specific details of the product: you need to learn how the product should behave in its ideal state. This phase generates as deliverables a detailed requirements report, along with an analysis of test automation feasibility.

Another important deliverable generated in this phase is a requirements traceability matrix.


What’s this?

“Traceability” here means the ability to trace back artifacts from their requirements. For instance,
having traceability in the software development process means that the organization should be
able to trace each commit in its codebase back to its original requirements.

The RTM—requirements traceability matrix—is a document that allows the organization to connect various artifacts back to their requirements. When it comes to software testing, you want to be able to trace testing activities back to their original requirements. That way, you reduce waste by ensuring that every testing activity is connected to a requirement that generates value for the customer.
To sum it up:
 Understand the expected output from the product.
 Identify any loopholes in the specifications.
 Collect priorities.
 Perform automation feasibility checks.
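In its simplest form, an RTM just links each requirement to the test cases that cover it, making uncovered requirements visible. The IDs below are invented for illustration.

```python
# A minimal requirements traceability matrix: requirement -> test cases.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                  # no testing activity traces back to it
}

# Requirements with no covering test case are coverage gaps to fix.
uncovered = [req for req, tests in rtm.items() if not tests]
print(uncovered)                    # ['REQ-003']
```

Real RTMs are usually spreadsheets or tracker views with more columns (design artifacts, commits, defects), but the lookup they enable is the same as this dictionary.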
2. Test Planning

The second step is test planning, and the QA team creates this plan after analyzing all the
necessary testing requirements. They outline the scope and objectives after understanding the
product domain. The team then analyzes the risks involved and defines time schedules and
testing environments to create a strategy. After that, management finalizes the tools and assigns
roles and responsibilities to individuals. An approximate timeline is also defined by which the
testing of each module should be completed. The most important deliverable generated in this step is the test plan, which is a document describing the motivation and details of the testing activities for a given project.

To sum it up:
 Prepare test plan documentation.
 Estimate time and efforts.
 Finalize tools and platforms.
 Assign tasks to teams and individuals.
 Identify training requirements.
3. Test Case Designing and Development

After development and planning, it’s time to let the creative juices flow! Based on the test plan,
testers design and develop test cases. Test cases should be extensive and should cover almost all
the possible cases. All the applicable permutations and combinations should be gathered. You
can prioritize these test cases by researching which of them are most common or which of them
would affect the product the most. Next comes the verification and validation of specified
requirements in the documentation stage. Also, the reviewing, updating, and approval of
automation scripts and test cases are essential processes of this stage. This phase also includes
defining different test conditions with input data and expected outcomes. So, the main
deliverables produced in this phase are the actual test cases organized in their test suites.
To sum it up:
 Research and gather possible actions on the product.
 Create test cases.
 Prioritize test cases.
 Prepare automated scripts for test cases.
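A test case, as described above, ties a test condition and input data to an expected outcome. A minimal table-driven sketch follows; the phone-number validator is a hypothetical system under test.

```python
def is_valid_phone(text):           # hypothetical system under test
    return text.isdigit() and len(text) == 10

# Each test case: (condition being checked, input data, expected outcome).
test_cases = [
    ("accepts a valid 10-digit number", "9876543210", True),
    ("rejects alphabetic characters",   "98765abcde", False),
    ("rejects a too-short number",      "12345",      False),
]

for condition, data, expected in test_cases:
    status = "PASS" if is_valid_phone(data) == expected else "FAIL"
    print(f"{status}: {condition}")
```

Keeping the cases as data makes prioritization and review easy, and the same table can later feed an automation framework instead of a simple loop.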
4. Test Environment Setup

Testing activities need certain environmental factors—such as servers, frameworks, hardware, and software—for executing developed test cases. Software and hardware configuration, along
with test data setup, are the main components of this phase. And it’s mandatory to smoke test and
to equip your testers with bug reporting tools. In the developer community, it’s common to hear
“it ran on my system, but it’s not running on yours”. Hence it is important that your test
environment covers all the environments that the user would use. For example, some feature that
works in Google Chrome may not work in Internet Explorer. The working of features also differs based on software and hardware requirements: a feature might work smoothly with 4 GB of RAM but might create issues with 1 GB of RAM. Research on the environments used by end users will help you prioritize your test environments.

The main deliverable in this stage is a complete strategy for test environment management.
It’s the job of the QA manager supervising the team to take care of setting up the test
environment.

To sum it up:
 Understand minimum requirements.
 List the software and hardware required for different levels of performance.
 Prioritize test environments.
 Set up test environments.
 Smoke test the built environments.
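Prioritizing environments, as suggested above, can be sketched by ranking browser/hardware combinations by the share of end users on each. The usage figures below are invented for illustration.

```python
import itertools

browsers = {"Chrome": 0.65, "Firefox": 0.20, "Internet Explorer": 0.15}
ram_gb   = {4: 0.70, 1: 0.30}       # hypothetical hardware split

# Estimated share of users on each (browser, RAM) combination.
combos = [
    (browser, ram, b_share * r_share)
    for (browser, b_share), (ram, r_share)
    in itertools.product(browsers.items(), ram_gb.items())
]

for browser, ram, share in sorted(combos, key=lambda c: -c[2]):
    print(f"{browser} / {ram} GB RAM: ~{share:.1%} of users")
# Set up and smoke test the top combinations first.
```

With only three browsers and two memory sizes there are already six environments; real matrices (operating systems, screen sizes, network conditions) grow quickly, which is why prioritization matters.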
5. Test Execution

An application is ready for testing once the team is done with all the previous phases. According
to the test plan, the testers execute test cases. They also identify, detect, and log the defects, thus
reporting the bugs. The team is also responsible for comparing expected results with the real
outcome. If any bugs are found, they need to be documented and passed on to the development team for a fix.

Once the development team removes a bug, regression testing begins. Regression testing ensures that the software or application still works after a change has been deployed. When testing after a bug fix, test the complete product again, because a fix for one bug could create a bug in some other part of the product. And because the same tests need to be run again and again after every fix and deployment, it's recommended to use scripts or automated testing tools. The main deliverables in this phase are the test results, which, ideally, should be validated and communicated in an entirely automated manner.

To sum it up:
 Run test cases.
 Identify deviation from expected behavior of the product.
 Log failed cases with details
 Test again after bug fixes.
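Because the same tests are re-run after every fix, even a small script pays off. Below is a minimal regression-runner sketch; the discount function and its checks are invented for illustration.

```python
def run_suite(checks):
    """Run every (name, check) pair; return the names of failed checks."""
    failures = []
    for name, check in checks:
        try:
            check()
        except AssertionError:
            failures.append(name)   # log failed cases for the dev team
    return failures

def apply_discount(price, pct):     # hypothetical system under test
    return round(price * (1 - pct / 100), 2)

def check_basic(): assert apply_discount(200, 10) == 180.0
def check_zero():  assert apply_discount(200, 0) == 200.0
def check_full():  assert apply_discount(200, 100) == 0.0

failed = run_suite([("basic", check_basic),
                    ("zero", check_zero),
                    ("full", check_full)])
print(failed or "all regression checks passed")
```

Real projects would use an established framework (pytest, JUnit, etc.) instead of a hand-rolled runner, but the loop above captures the essence: rerun every check after every fix and report exactly which ones deviated from expected behavior.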
6. Test Closure
And that brings us to the last stage of the STLC: test closure.

The end of test execution and delivery of the end product marks the onset of the test closure
phase. The QA team checks the test results and discusses them with the other team members. Some other factors they consider are product quality, test coverage, and project cost. If there is a deviation from estimated values, further analysis can be done to identify what didn't go as expected.

It’s an essential practice for testers to come together and discuss the conclusions after testing. Any issues faced during testing and any flaws in strategy can be discussed here. You can also work on coming up with a better approach for testing based on what was learned during testing. If you follow DevOps or a canary release practice, testing is frequent. You can decide how often to send reports and what details to mention when sending reports to different stakeholders.

Apart from that, the team also considers test metrics, the fulfillment of goals, and their adherence
to deadlines. Once they have a total grasp on what happened, they can evaluate the entire testing
strategy and process.

To sum it up:
 Verify that all tests are completed. 
 Evaluate factors such as quality, test coverage, timeline, and cost.
 Document the conclusion.
 Discuss the learning and find out if the testing process can be improved.
 Prepare test closure report. 
What Are the Entry and Exit Criteria for Testing?
All six phases of a software testing life cycle have entry or exit criteria associated with them.
Testers need to finish executing the test cases within a fixed time. Also, they need to maintain the
quality, functionality, and efficiency of the end product. Therefore, defining entry and exit
criteria is a must. That’s what we’ll do now.

Entry Criteria: Entry criteria state which requirements the team has to take care of before
starting the testing procedure. Before testing begins, it’s mandatory to cross off all requirements.
There are some ongoing activities and conditions that have to be present before testing begins.
First, you need input from the development team. You’ll also want to examine the test plan, test
cases and data, the testing environment, and your code.

Exit Criteria: Exit criteria state the requirements and actions to complete before the testing ends.
In other words, they include items to cross off the task list and processes to complete before
testing comes to a halt.
Exit criteria will include the identification of high-priority defects. You’ll need to get those fixed
right away. Testers have to pass different test cases and ensure full functional coverage.

Conclusion
Simply identifying errors in the last stage of an SDLC is not an efficient practice anymore. There
are various other daily activities a firm has to focus on. Devoting too much of your precious time
to testing and fixing bugs can hamper efficiency. After all, you’ll take more time to generate less
output.
To ease the testing process, it’s important to make efficient use of time and resources. Following
a systematic STLC not only results in quick bug fixing but it also enhances the product quality.
By increasing customer satisfaction, you’ll enjoy an increased ROI and improved brand presence.


What Is Verification And Validation In Software Testing?


In the context of testing, "verification" and "validation" are two widely and commonly used terms. Most of the time, we consider the two terms to mean the same thing, but actually they are quite different.
There are two aspects of V&V (Verification & Validation) tasks:
 Conforms to requirements (producer's view of quality)
 Fit for use (consumer's view of quality)
The producer's view of quality, in simpler terms, means the developer's perception of the final product.
The consumer's view of quality means the user's perception of the final product.
When we carry out the V&V tasks, we must concentrate on both of these views of quality.

Let us first start with the definitions of verification and validation and then we will go about
understanding these terms with examples.

Note: These definitions are, as mentioned in the QAI’s CSTE CBOK (check out this link to
know more about CSTE).
What Is Verification?
Verification is the process of evaluating the intermediary work products of a software development lifecycle to check whether we are on the right track to creating the final product.

In other words, we can also state that verification is a process to evaluate the intermediate products of software development to check whether those products satisfy the conditions imposed at the beginning of the phase.

Now the question here is: What are these intermediary products?
Well, they can include the documents produced during the development phases, such as requirements specifications, design documents, database table designs, ER diagrams, test cases, traceability matrices, etc.
We sometimes tend to neglect the importance of reviewing these documents, but we should understand that reviewing itself can uncover many hidden anomalies which, if found and fixed only in a later phase of the development cycle, can be very costly.

Verification ensures that the system (software, hardware, documentation, and personnel) complies with an organization's standards and processes, relying on reviews and other non-executable methods.
Where is Verification Performed?
Specific to IT projects, the following are some of the areas (I must emphasize that this is not all of them) in which verification is performed.

Verification Situation: Business/Functional Requirement Review
Actors: Dev team/client for business requirements.
Definition: This is a necessary step to make sure not only that the requirements have been gathered correctly but also that they are feasible.
Output: Finalized requirements that are ready to be consumed by the next step – design.

Verification Situation: Design Review
Actors: Dev team.
Definition: Following the design creation, the Dev team reviews it thoroughly to make sure that the functional requirements can be met via the design proposed.
Output: Design ready to be implemented into an IT system.

Verification Situation: Code Walkthrough
Actors: Individual developer.
Definition: The code, once written, is reviewed to identify any syntactic errors. This is more casual in nature and is performed by the individual developer on the code developed by oneself.
Output: Code ready for unit testing.

Verification Situation: Code Inspection
Actors: Dev team.
Definition: This is a more formal setup. Subject matter experts and developers check the code to make sure it is in accordance with the business and functional goals targeted by the software.
Output: Code ready for testing.

Verification Situation: Test Plan Review (internal to QA team)
Actors: QA team.
Definition: A test plan is internally reviewed by the QA team to make sure that it is accurate and complete.
Output: A test plan document ready to be shared with the external teams (Project Management, Business Analysis, Development, Environment, client, etc.).

Verification Situation: Test Plan Review (external)
Actors: Project Manager, Business Analyst, and Developer.
Definition: A formal analysis of the test plan document to make sure that the timeline and other considerations of the QA team are in line with the other teams and the entire project itself.
Output: A signed-off or approved test plan document on which the testing activity is going to be based.

Verification Situation: Test Documentation Review (peer review)
Actors: QA team members.
Definition: A peer review is where the team members review one another's work to make sure that there are no mistakes in the documentation itself.
Output: Test documentation ready to be shared with the external teams.

Verification Situation: Test Documentation Final Review
Actors: Business Analyst and development team.
Definition: A test documentation review to make sure that the test cases cover all the business conditions and functional elements of the system.
Output: Test documentation ready to be executed.
See the test documentation review article which posts a detailed process on how testers can
perform the review.
What Is Validation?
Validation is the process of evaluating the final product to check whether the software meets the business needs. In simple words, the test execution which we do in our day-to-day work is actually the validation activity, which includes smoke testing, functional testing, regression testing, system testing, etc.
Validation covers all forms of testing that involve working with the product and putting it to the test.

Given below are the validation techniques:


 Unit Testing
 Integration testing
 System Testing
 User Acceptance Testing
Validation physically ensures that the system operates according to a plan by executing the
system functions through a series of tests that can be observed and evaluated.
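The idea that validation means executing the product and observing the result can be sketched in code. This is a minimal, hypothetical illustration (the `calculate_discount` function is invented for the example, not taken from the text): verification would be reviewing this function's code or spec, while validation runs it and checks the observed output.

```python
# Hypothetical example: verification inspects an artifact without running it;
# validation executes the product and observes the result.

def calculate_discount(price, percent):
    """Apply a percentage discount to a price, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Validation: execute the system function and evaluate the observed output
# against the business need ("a 10% discount on $200 should charge $180").
assert calculate_discount(200.0, 10) == 180.0
assert calculate_discount(99.99, 0) == 99.99
print("validation checks passed")
```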
Fair enough, right? Here are my two cents:
When I deal with this V&V concept in my class, there is a lot of confusion around it. A simple, petty example seems to resolve all the confusion. It is somewhat silly but really works.

Validation And Verification Examples


Real-life Example: Imagine yourself going to a restaurant/diner and ordering maybe blueberry
pancakes. When the waiter/waitress brings your order out, how can you tell that the food that
came out is as per your order?
The first thing we do is look at it and notice the following:
 Does the food look like pancakes typically do?
 Are the blueberries visible?
 Does it smell right?
Maybe more, but you get the gist, right?

On the other hand, when you need to be absolutely sure about whether the food is as you
expected: You will have to eat it.

Verification covers everything you check before eating, by reviewing the dish. Validation is when you actually eat the product to see if it is right.
In this context, I cannot help myself but go back to the CSTE CBOK reference. There is a
wonderful statement out there that helps us bring this concept home.
Verification answers the question, “Did we build the system right?” while validation
addresses, “Did we build the right system?”

Difference Between Verification And Validation

Verification: Evaluates the intermediary products to check whether they meet the specific requirements of the particular phase.
Validation: Evaluates the final product to check whether it meets the business needs.

Verification: Checks whether the product is built as per the specified requirement and design specifications.
Validation: Determines whether the software is fit for use and satisfies the business needs.

Verification: Checks “Are we building the product right?”
Validation: Checks “Are we building the right product?”

Verification: Is done without executing the software.
Validation: Is done by executing the software.

Verification: Involves all the static testing techniques.
Validation: Includes all the dynamic testing techniques.

Verification: Examples include reviews, inspection, and walkthrough.
Validation: Examples include all types of testing like smoke, regression, functional, system, and UAT.
When To Use Validate And Verify?
These are independent procedures that should be employed together to check if the system or
application is in conformity with the requirements and specifications and that it achieves its
intended purpose. Both are important components of the quality management system.

It is quite possible that a product passes verification but fails validation: it met the documented requirements and specifications, but those specifications were themselves incapable of addressing the user’s needs. Thus, it is important to carry out both types of testing to ensure overall quality.
Verification can be used as an internal process in development, scale-up, or production. On the
other hand, validation should be used as an external process to get the acceptance of fitness with
stakeholders.

Is UAT Validation or Verification?


UAT (User Acceptance Testing) should be considered as validation. It is the real-world
validation of the system or application, which is done by the actual users who validate if the
system is “fit for use”.

Conclusion
V&V processes determine whether the products of a given activity conform to the requirements
and are fit for their use.

Finally, the following are a few things to note:


1. In very simple terms (to avoid any kind of confusion), just remember that verification means the review activities or the static testing techniques, and validation means the actual test execution activities or the dynamic testing techniques.
2. Verification may or may not involve the product itself. Validation definitely needs the
product. Verification can sometimes be performed on the documents that represent the
final system.
3. Verification and validation do not necessarily have to be performed by the testers. As you
see above in this article some of these are performed by the developers and other teams.

V Model and W Model

What is STLC V-Model?


One of the major handicaps of the waterfall STLC model was that defects were found at a very late stage of the development process, since testing was done at the end of the development cycle. It became very challenging and costly to fix defects found that late. To overcome this problem, a new development model was introduced, called the “V Model”.
The V model is now one of the most widely used software development processes. Its introduction brought testing in right from the requirement phase. The V model is also called the verification and validation model.

Verification and Validation


To understand the V model, let’s first understand what is verification and validation in
software.
Verification: Verification is a static analysis technique. In this technique, testing is done without
executing the code. Examples include – Reviews, Inspection, and walkthrough.
Validation: Validation is a dynamic analysis technique where testing is done by executing the
code. Examples include functional and non-functional testing techniques.
V-Model
In the V model, the development and QA activities are done simultaneously. There is no discrete
phase called Testing, rather testing starts right from the requirement phase.  The verification and
validation activities go hand in hand.

To understand the V model, let’s look at the figure below:


In a typical development process, the left-hand side shows the development activities and the right-hand side shows the testing activities. I would not be wrong to say that in the development phase, both verification and validation are performed along with the actual development activities.

Now let’s understand the figure:

Left-Hand Side
As said earlier, the left-hand side activities are development activities. Normally we wonder what testing we can do during the development phase, but the beauty of this model is that it demonstrates testing can be done in all phases of the development activities as well.

Requirement analysis: In this phase, the requirements are collected, analyzed, and studied. Here, how the system is implemented is not important; what the system is supposed to do is important. Brainstorming sessions, walkthroughs, and interviews are conducted to make the objectives clear.

 Verification activities: Requirements reviews.


 Validation activities: Creation of UAT (User acceptance test) test cases
 Artifacts produced: Requirements understanding document, UAT test cases.
System requirements / High-level design: In this phase, the high-level design of the software is built. The team studies and investigates how the requirements could be implemented. The technical feasibility of the requirements is also studied. The team also identifies the modules to be created, their dependencies, and the hardware/software needs.

 Verification activities: Design reviews


 Validation activities: Creation of the system test plan and test cases, creation of the traceability matrix
 Artifacts produced: System test cases, feasibility reports, system test plan, hardware/software requirements, modules to be created, etc.
Architectural design: In this phase, based on the high-level design, software architecture is
created. The modules, their relationships, and dependencies, architectural diagrams, database
tables, technology details are all finalized in this phase.
 Verification activities: Design reviews
 Validation activities: Integration test plan and test cases.
 Artifacts produced: Design documents, Integration test plan and test cases, Database table
designs etc.
Module design/Low-level Design: In this phase, each and every module of the software
components are designed individually. Methods, classes, interfaces, data types etc are all
finalized in this phase.

 Verification activities: Design reviews


 Validation activities: Creation and review of unit test cases.
 Artifacts produced: Unit test cases,
Implementation / Code: In this phase, the actual coding is done.

 Verification activities: Code review, test cases review


 Validation activities: Creation of functional test cases.
 Artifacts produced: test cases, review checklist.
Right Hand Side
Right-hand side demonstrates the testing activities or the Validation Phase. We will start from the
bottom.

Unit Testing: In this phase, all the unit test case, created in the Low-level design phase are
executed.

*Unit testing is a white box testing technique, where a piece of code is written which invokes a
method (or any other piece of code) to test whether the code snippet is giving the expected output
or not. This testing is basically performed by the development team. In case of any anomaly,
defects are logged and tracked.

Artifacts produced:  Unit test execution results

Integration Testing:  In this phase, the integration test cases are executed which were created in
the Architectural design phase. In case of any anomalies, defects are logged and tracked.

*Integration Testing:  Integration testing is a technique where the unit tested modules are
integrated and tested whether the integrated modules are rendering the expected results. In
simpler words, It validates whether the components of the application work together as expected.

Artifacts produced: Integration test results.

Systems testing: In this phase, all the system test cases, functional test cases, and non-functional test cases are executed. In other words, the actual, full-fledged testing of the application takes place here. Defects are logged and tracked to closure. Progress reporting is also a major part of this phase. The traceability matrices are updated to check the coverage and the risks mitigated.

Artifacts produced: Test results, Test logs, defect report, test summary report, and updated
traceability matrices.
User Acceptance Testing: Acceptance testing is basically business requirements testing. Here, testing is done to validate that the business requirements are met in the user environment. Compatibility testing, and sometimes non-functional testing (load, stress, and volume), is also done in this phase.
Artifacts produced: UAT results, Updated Business coverage matrices.

When To Use The V Model?


V model is applicable when:

 The requirement is well defined and not ambiguous


 Acceptance criteria are well defined.
 Project is short to medium in size.
 Technology and tools used are not dynamic.
Pros and Cons of using V model
PROS:
- Development and progress are very organized and systematic.
- Works well for small to medium-sized projects.
- Testing starts from the beginning, so ambiguities are identified early.
- Easy to manage, as each phase has well-defined objectives and goals.

CONS:
- Not suitable for bigger and complex projects.
- Not suitable if the requirements are not consistent.
- No working software is produced at an intermediate stage.
- No provision for risk analysis, so uncertainty and risks remain.
1. Definition: The V-Model is a development model in which the entire process is divided into sub-development phases, with a corresponding testing phase practiced for each development phase. In other words, for every stage in the development cycle there is an associated testing phase, and that testing phase is planned in parallel with the development phase. In the Waterfall model, on the other hand, the application is developed first and the different kinds of testing take place afterwards: the complete process is divided into several phases, each of which must be completed in order to reach the next, and testing sits almost at the end of development.

2. Type/Nature: In the V-Model, the development and testing phases execute sequentially but are planned in parallel, so the V-Model is sequential/parallel in nature. The Waterfall model is a relatively linear sequential design approach, as each phase must be completed in order to reach the next, so it is continuous in nature.

3. Testing and Validation: In the V-Model, each development phase is tested at its own level, so no testing is left pending; if any validation needs to be implemented, it can be implemented in that phase. In the Waterfall model, testing occurs after development is completed, so if any missing validation is identified, the corresponding development phase must first be identified, and only then can that validation be implemented.

4. Cost and Complexity: In the V-Model, sequential phases need to be functional in parallel, so the cost and complexity are higher than in the Waterfall model. In the Waterfall model, due to linear development, only one phase is operational at a time, so the cost and complexity are lower.

5. Defects: In the V-Model, the probability of defects in the developed application is lower, as testing is done in parallel with development. In the Waterfall model, the probability is higher, as testing is done post-development.
Key Difference – Waterfall Model vs V Model
 
The key difference between waterfall model and V model is that in waterfall model the
software testing is done after the completion of development phase while in V model, each
phase in the development cycle has a directly associated testing phase.

Software Development Life Cycle (SDLC) is a process followed by a software organization to


develop a working, high quality software. There are various software development process
models that can be followed during the software development process. Two of them are
Waterfall and V model.

What is Waterfall Model?

Waterfall model is an easy to understand and simple model. The complete process is divided into
several phases. One phase should be completed in order to reach the next phase.

The first phase is requirement gathering and analysis. The requirements are then documented in what is called the Software Requirement Specification (SRS). Next is the system design phase, in which the entire software architecture is designed. The following phase is implementation, in which the small units are coded. These units are combined to form the complete system and tested in the integration and testing phase. After testing is completed, the software is released to the market. Activities such as maintaining the software and adding new features come under deployment and maintenance.
Figure 01: Waterfall Model
This model is appropriate for small projects and when the requirements are very clear. It is not
suitable for large and complex projects. Generally, the customer interaction is the minimum in
the waterfall model.

What is V Model?

V model is an extension of the waterfall model. It has a corresponding testing phase for each
development phase. Therefore, for every stage in the development cycle, there is an associated
testing phase. The corresponding testing phase of the development phase is planned in parallel.
This model is also known as the verification and validation model.

The first phase is to gather requirements; the SRS is prepared at this stage. The acceptance test plan is also prepared in this phase; it is the input for acceptance testing. The design phase involves two steps: the architecture design covers the architecture required for the system and is known as the high-level design, while the module design is known as the low-level design. The actual coding starts in the coding phase.

In unit testing, the small modules or units are tested.  The integration testing is to test the flow of
the two different modules. The system testing is to check the functionality of the entire system.
The acceptance testing is to test the software in user environment. It also checks whether the
system is in line with the software requirement specification.

Overall, the V model is suitable when the project is short and the requirements are very clear. It is not suitable for large, complex, and object-oriented projects.

What are the Similarities Between Waterfall Model and V Model?

 Both Waterfall Model and V Model are software process models.


 Both Waterfall model and V models are not suitable for large and complex projects.
What is the Difference Between Waterfall Model and V Model?

Waterfall Model vs V Model


Methodology: The waterfall model is a relatively linear sequential design approach to develop software projects; it is a continuous process. The V model is a model in which the execution of the phases happens in a sequential manner in a V shape; it is a simultaneous process.

Total Defects: In the waterfall model, the total defects in the developed software are higher. In the V model, the total defects in the developed software are lower.

Defect Identification: In the waterfall model, the defects are identified in the testing phase. In the V model, the defects are identified from the initial phase.
Summary – Waterfall Model vs V Model

This article discussed two software process models that are waterfall and v model. The difference
between waterfall and V model is that in waterfall model the software testing is done after the
completion of development phase while in V model, each phase in the development cycle has a
directly associated testing phase.

Agile Testing- Test Driven Software Development

What is Test Driven Development(TDD)?


Test Driven Development (TDD) is a software development approach in which test cases are developed to specify and validate what the code will do. In simple terms, test cases for each functionality are created and run first; if a test fails, new code is written to make it pass, keeping the code simple and bug-free.
Test-Driven Development starts with designing and developing tests for every small functionality
of an application. TDD framework instructs developers to write new code only if an automated
test has failed. This avoids duplication of code. The TDD full form is Test-driven development.
The simple concept of TDD is to write and correct the failed tests before writing new code
(before development). This helps to avoid duplication of code as we write a small amount of
code at a time in order to pass tests. (Tests are nothing but requirement conditions that we need to
test to fulfill them).

Test-Driven Development is a process of developing and running automated tests before the actual development of the application. Hence, TDD is sometimes also called Test-First Development.

How to perform TDD Test


Following steps define how to perform TDD test,

1. Add a test.
2. Run all tests and see if any new test fails.
3. Write some code.
4. Run tests and Refactor code.
5. Repeat.
Fig: Five Steps of Test-Driven Development

TDD cycle defines

1. Write a test
2. Make it run.
3. Change the code to make it right i.e. Refactor.
4. Repeat process.
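The write-a-test, make-it-run, refactor cycle above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `fizzbuzz` function and its tests are invented for the example): the test function is written first and fails (red), just enough code is then written to make it pass (green), and the code can afterwards be cleaned up while the tests stay green (refactor).

```python
# A minimal red-green-refactor sketch (all names invented for illustration):
# 1. RED: test_fizzbuzz is written before fizzbuzz exists, so it fails.
# 2. GREEN: just enough code is written to make every assertion pass.
# 3. REFACTOR: the code is tidied up while the tests remain green.

def fizzbuzz(n):
    """Minimal implementation written only to satisfy the tests below."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def test_fizzbuzz():
    # The tests encode the requirement conditions before any code exists.
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

test_fizzbuzz()
print("all TDD tests pass")
```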

Some clarifications about TDD:

 TDD approach is neither about “Testing” nor about “Design”.


 TDD does not mean “write some of the tests, then build a system that passes the tests.”
 TDD does not mean “do lots of Testing.”

TDD Vs. Traditional Testing


Below is the main difference between Test driven development and traditional testing:

 The TDD approach is primarily a specification technique. It ensures that your source code is thoroughly tested at a confirmatory level.
 With traditional testing, a successful test finds one or more defects. The same holds in TDD: when a test fails, you have made progress because you know you need to resolve the problem.
 TDD ensures that your system actually meets the requirements defined for it. It helps to build your confidence in your system.
 In TDD, the focus is on the production code that must make the tests pass. In traditional testing, the focus is on test case design: whether the test will show the proper or improper execution of the application in order to fulfill the requirements.
 In TDD, you achieve 100% coverage: every single line of code is tested, unlike in traditional testing.
 The combination of traditional testing and TDD emphasizes testing the system rather than perfecting it.
 In Agile Modeling (AM), you should “test with a purpose”. You should know why you are testing something and to what level it needs to be tested.

Benefits of TDD:
 Much less debug time
 Code proven to meet requirements
 Tests become Safety Net
 Near zero defects
 Shorter development cycles

LEVELS OF TESTING
A level of testing is the stage at which the software must be tested. There are four recognized levels of testing:
A. Unit Testing: It is a type of software testing where individual units or components of a
software are tested. The purpose is to validate that each unit of the software code performs as
expected. Unit Testing is done during the development (coding phase) of an application by the
developers. Unit Tests isolate a section of code and verify its correctness. A unit may be an
individual function, method, procedure, module, or object.
In the SDLC, STLC, and V Model, unit testing is the first level of testing, done before integration testing.

1) Example Consider the example of Calculator. To conduct unit testing on the calculator
system, individual modules like Addition, subtraction, multiplication, division must be tested
independently.
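The calculator example above can be sketched as a small unit test suite. This is a hypothetical illustration (the `Calculator` class and its methods are invented to match the example): each operation is tested in isolation, including an error case for division by zero.

```python
import unittest

# Hypothetical Calculator module matching the text's example; each
# operation (unit) is tested independently of the others.

class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_add(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_subtract(self):
        self.assertEqual(self.calc.subtract(5, 3), 2)

    def test_divide_by_zero_raises(self):
        with self.assertRaises(ZeroDivisionError):
            self.calc.divide(1, 0)

# Run the suite programmatically (avoids unittest.main()'s sys.exit call).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCalculator)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```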
B. Integration Testing: It is defined as a type of testing where software modules are integrated
logically and tested as a group. A typical software project consists of multiple software modules,
coded by different programmers. The purpose of this level of testing is to expose defects in the
interaction between these software modules when they are integrated
Integration Testing focuses on checking data communication amongst these modules. Hence it is
also termed as ‘I & T’ (Integration and Testing), ‘String Testing’ and sometimes ‘Thread
Testing’.

After integrating two different components, we do integration testing. For example, when two different modules ‘Module A’ and ‘Module B’ are integrated, the integration testing is done.
 Integration testing is done by a specific integration tester or test team.

Integration Testing Example

Let us understand Integration Testing with example. Let us assume that you work for an IT
organization which has been asked to develop an online shopping website for Camp World, a
company that sells camping gear.

After requirements gathering, analysis and design was complete, one developer was assigned to
develop each of the modules below.

1. User registration and Authentication/Login


2. Product Catalogue
3. Shopping Cart
4. Billing
5. Payment gateway integration
6. Shipping and Package Tracking

After each module was assigned to a developer, the developers began coding the functionality on
their individual machines. They deployed their respective modules on their own machines to see
what worked and what didn’t, as they went about developing the module.

After they completed the development, the developers tested their individual functionalities as
part of their unit testing and found some defects. They fixed these defects. At this point they felt
their modules were complete.

The QA Manager suggested that integration testing should be performed to confirm that all the
modules work together.

When they deployed all of their code in a common machine, they found that the application did
not work as expected since the individual modules did not work well together. There were bugs
like – after logging in, the user’s shopping cart did not show items they had added earlier, the bill
amount did not include shipping cost etc.

In this way, Integration Testing helps us identify, fix issues and ensure that the application as a
whole, works as expected.
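The Camp World scenario above can be sketched as a tiny integration test. This is a hypothetical illustration (the `add_item` and `total_bill` functions and the `SHIPPING_COST` constant are invented): two independently unit-tested modules, the cart and the billing, are exercised together so that interaction defects, such as a bill that omits shipping, surface.

```python
# Hypothetical sketch of two modules being tested together.
# All names (add_item, total_bill, SHIPPING_COST) are invented.

SHIPPING_COST = 5.00

def add_item(cart, name, price):
    """Cart module: append an item to the cart and return it."""
    cart.append({"name": name, "price": price})
    return cart

def total_bill(cart):
    """Billing module: sum the item prices and add shipping."""
    return sum(item["price"] for item in cart) + SHIPPING_COST

# Integration test: data produced by the cart module must be consumed
# correctly by the billing module, shipping cost included.
cart = add_item([], "tent", 120.00)
cart = add_item(cart, "sleeping bag", 45.50)
assert total_bill(cart) == 170.50  # 120.00 + 45.50 + 5.00 shipping
print("integration test passed")
```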

It can be done in three ways- Big-bang approach, top down approach and bottom up approach.
1. Big Bang Integration Testing

In Big Bang integration testing, all components or modules are integrated simultaneously, after which everything is tested as a whole. For example, all the modules from ‘Module 1’ to ‘Module 6’ are integrated simultaneously, and then the testing is carried out.

2. Top-down Integration Testing

Testing takes place from the top of the control flow downwards: the highest-level modules are tested first, and progressively lower-level modules are tested thereafter, with stubs standing in for modules that are not yet integrated.
3. Bottom up Integration Testing

Testing takes place from the bottom of the control flow upwards: it begins with unit testing of the lowest-level modules, followed by tests of progressively higher-level combinations of units called modules or builds, with drivers used to invoke the modules under test.
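The role of a stub in top-down integration can be sketched briefly. This is a hypothetical illustration (the `checkout` and `payment_gateway_stub` names are invented): the high-level checkout module is tested first, with a stub returning a canned response in place of the not-yet-integrated payment module; in a bottom-up approach, a driver would instead call the real lower-level module directly.

```python
# Hypothetical top-down sketch; all names are invented for illustration.

def payment_gateway_stub(amount):
    """Stub: returns a canned response instead of calling a real gateway."""
    return {"status": "approved", "amount": amount}

def checkout(amount, pay):
    """High-level module under test; `pay` is the lower-level dependency."""
    response = pay(amount)
    return response["status"] == "approved"

# Top-down integration test: exercise the high-level module with the stub
# standing in for the lower-level payment module.
assert checkout(99.99, payment_gateway_stub) is True
print("top-down integration test passed")
```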

C. System Testing: System Testing includes testing of a fully integrated software system. Generally, a computer system is built by integrating software (any piece of software is only a single element of a computer system). The software is developed in units and then interfaced with other software and hardware to create a complete computer system. In other words, a computer system consists of a group of software components that perform various tasks, but software alone cannot perform the task; for that, it must be interfaced with compatible hardware. System testing is a series of different types of tests whose purpose is to exercise and examine the full working of the integrated software computer system against the requirements.

To check the end-to-end flow of an application or the software as a user is known as System
testing. In this, we navigate (go through) all the necessary modules of an application and check if
the end features or the end business works fine, and test the product as a whole system.

It is end-to-end testing where the testing environment is similar to the production environment.

Different types of system testing are followed like usability testing, stress testing, regression
testing etc. First and important step in system testing is to prepare System test plan.
System Testing Example
A car manufacturer does not produce the car as a whole. Each component of the car is manufactured separately: seats, steering, mirrors, brakes, cables, engine, car frame, wheels, etc.
After manufacturing, each item is tested independently to check whether it works the way it is supposed to, and that is called Unit testing.
Now, when each part is assembled with another part, that assembled combination is checked to verify that the assembly has not produced any side effects on the functionality of each component and that both components work together as expected, and that is called Integration testing.
Once all the parts are assembled and the car appears ready, it is not actually ready yet.
The whole car needs to be checked against the defined requirements for different aspects: whether the car can be driven smoothly; whether the brakes, gears, and other functionality work properly; whether the car shows any sign of fatigue after being driven for 2500 miles continuously; whether the color of the car is generally accepted and liked; and whether the car can be driven on any kind of road, smooth or rough, sloping or straight. This whole effort of testing is called System Testing, and it has nothing to do with integration testing.
D. Acceptance Testing User Acceptance Testing (UAT) is a type of testing performed by the
end user or the client to verify/accept the software system before moving the software application
to the production environment. UAT is done in the final phase of testing after functional,
integration and system testing is done.

Purpose of UAT
The main purpose of UAT is to validate the end-to-end business flow. It does not focus on cosmetic errors or spelling mistakes, nor does it repeat system testing. User Acceptance Testing is carried out in a separate testing environment with a production-like data setup. It is a kind of black box testing in which two or more end users are involved.

Who Performs UAT?

 Client
 End users

The need for User Acceptance Testing arises once the software has undergone Unit, Integration, and System testing, because developers might have built the software based on their own understanding of the requirements document, and further changes required during development may not have been effectively communicated to them. So, to test whether the final product is accepted by the client/end user, user acceptance testing is needed.

It is a level of testing where a system is tested for acceptability. It has various types, like alpha testing, beta testing, user acceptance testing, and business acceptance testing. It is done by end users. Its outcome provides an important quality indication for the customer to decide whether to accept or reject the product.
Test Types

Software testing is categorized to support diverse testing activities, such as the test strategy, test deliverables, a defined test objective, etc. Software testing itself is the execution of the software to find defects.
The purpose of having testing types is to confirm the behavior of the AUT (Application Under Test).
To start testing, we should have the requirements, the application ready, and the necessary resources available. To maintain accountability, we should assign each module to a different test engineer.

Software testing is mainly divided into two parts, which are as follows:
o Manual Testing
o Automation Testing

What is Manual Testing?


Testing any software or an application according to the client's needs without using any
automation tool is known as manual testing.
In other words, we can say that it is a procedure of verification and validation. Manual testing is used to verify the behavior of an application or software against the requirements specification.

Classification of Manual Testing


In software testing, manual testing can be further classified into three different types of testing,
which are as follows:
o White Box Testing
o Black Box Testing
o Grey Box Testing
White Box Testing
White Box Testing is a software testing technique in which the internal structure, design, and coding of the software are tested to verify the flow of input to output and to improve design, usability, and security. In white box testing, the code is visible to testers, so it is also called Clear box testing, Open box testing, Transparent box testing, Code-based testing, and Glass box testing.

What do you verify in White Box Testing?


White box testing involves the testing of the software code for the following:
 Internal security holes
 Broken or poorly structured paths in the coding processes
 The flow of specific inputs through the code
 Expected output
 The functionality of conditional loops
 Testing of each statement, object, and function on an individual basis

The testing can be done at the system, integration, and unit levels of software development. One of the basic goals of white box testing is to verify the working flow of an application.

How do you perform White Box Testing?


To give you a simplified explanation of white box testing, we have divided it into two basic
steps. This is what testers do when testing an application using the white box testing technique:

STEP 1) UNDERSTAND THE SOURCE CODE


The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves testing the inner workings of an application, the tester
must be very knowledgeable in the programming languages used in the application under
test.
STEP 2) CREATE TEST CASES AND EXECUTE
The second basic step of white box testing involves testing the application's source code for
proper flow and structure. One way is by writing more code to test the application's source code.
The tester develops small tests for each process or series of processes in the application.
This method requires that the tester have intimate knowledge of the code and is often done
by the developer.
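As an illustration of this step, the sketch below shows the kind of small tests a white box tester might write; the `classify_triangle` function is a hypothetical example, not code from any particular application:

```python
# Hypothetical function under test, invented for illustration
def classify_triangle(a, b, c):
    """Return the triangle type for three side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Small tests written against the code's internal branches,
# not just its external specification
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
try:
    classify_triangle(0, 1, 2)
except ValueError:
    pass  # the guard branch is exercised as well
```

Because the tester can see the code, each `if` branch gets its own input, which is exactly what distinguishes these tests from black box tests against the same function.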

Advantages of White Box Testing

 Code optimization by finding hidden errors.
 White box test cases can be easily automated.
 Testing is more thorough, as all code paths are usually covered.
 Testing can start early in the SDLC, even if the GUI is not available.

Coverage
White box testing covers the specification in the code through the following types of coverage:
1. Code coverage
2. Segment coverage: Ensures that each code statement is executed at least once.
3. Branch Coverage or Node Testing: Covers each code branch from every possible direction.
4. Compound Condition Coverage: For multiple conditions, tests each condition with multiple
paths and combinations of the different paths to reach that condition.
5. Basis Path Testing: Each independent path in the code is taken for testing.
6. Data Flow Testing (DFT): In this approach you track specific variables through each
possible calculation, thus defining the set of intermediate paths through the code. DFT tends to
reflect dependencies, mainly through sequences of data manipulation. In short, each data
variable is tracked and its use is verified. This approach tends to uncover bugs like variables used
but not initialized, or declared but not used, and so on.
7. Path Testing: Path testing is where all possible paths through the code are defined and
covered. It’s a time-consuming task.
8. Loop Testing: These strategies relate to testing single loops, concatenated loops, and nested
loops. Independent and dependent code loops and values are tested by this approach.
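To make these coverage levels concrete, the sketch below uses a hypothetical `grant_discount` function and shows which inputs each level demands:

```python
# Hypothetical function with a compound condition, invented for illustration
def grant_discount(is_member, total):
    discount = 0
    if is_member and total > 100:   # compound condition: two sub-conditions
        discount = 10
    return discount

# Segment coverage: one input that executes every statement
assert grant_discount(True, 150) == 10

# Branch coverage: additionally take the false branch of the if
assert grant_discount(False, 150) == 0

# Compound condition coverage: drive each sub-condition true and false
assert grant_discount(True, 50) == 0    # is_member true, total check false
assert grant_discount(False, 50) == 0   # both sub-conditions false
```

Note how segment coverage needs only one test, branch coverage two, and compound condition coverage still more, which is why the stronger criteria are more expensive to satisfy.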

Regression Testing

Regression testing is a software testing practice that ensures an application still functions as
expected after any code changes, updates, or improvements.
Regression testing is responsible for the overall stability and functionality of the existing
features. Whenever a new modification is added to the code, regression testing is applied to
ensure that after each update, the system remains stable under continuous improvement.
Changes in the code may introduce dependencies, defects, or malfunctions. Regression testing
aims to mitigate these risks, so that previously developed and tested code remains
operational after new changes.
Generally, an application goes through multiple tests before the changes are integrated into the
main development branch. Regression testing is the final step, as it verifies the product behaviors
as a whole.

When to apply regression testing


Typically, regression testing is applied under these circumstances:
 A new requirement is added to an existing feature
 A new feature or functionality is added
 The codebase is fixed to solve defects
 The source code is optimized to improve performance
 Patch fixes are added
 Changes in configuration

How to perform regression testing

Regression testing practices vary among organizations. However, there are a few basic steps: 

 Detect Changes in the Source Code
Detect the modification and optimization in the source code; then identify the components or
modules that were changed, as well as their impacts on the existing features.
 Prioritize Those Changes and Product Requirements
Next, prioritize these modifications and product requirements to streamline the testing process
with the corresponding test cases and testing tools. 
 Determine Entry Point and Entry Criteria
Ensure that your application meets the preset eligibility criteria before regression test execution.
 Determine Exit Point
Determine an exit or final point for the required eligibility or minimum conditions set in step
three.
 Schedule Tests
Finally, identify all test components and schedule the appropriate time to execute.
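The steps above can be sketched as a minimal baseline-style regression check in Python; the `apply_discount` function and its recorded baseline outputs are hypothetical, invented for illustration:

```python
# Hypothetical function under regression test
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# Baseline: expected outputs captured before the latest code change
BASELINE = {(100, 10): 90.0, (59.99, 25): 44.99, (10, 0): 10.0}

def run_regression():
    """Return the cases whose behavior changed since the baseline."""
    return [(args, apply_discount(*args), want)
            for args, want in BASELINE.items()
            if apply_discount(*args) != want]

# An empty list means the existing features still behave as recorded
assert run_regression() == []
```

If a later change alters `apply_discount`, the affected cases appear in the returned list, which is the essence of verifying that previously tested code remains operational.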

Black Box Testing


Black Box Testing is a software testing method in which the functionalities of software
applications are tested without having knowledge of internal code structure, implementation
details and internal paths. Black Box Testing mainly focuses on input and output of software
applications and it is entirely based on software requirements and specifications. It is also known
as Behavioral Testing.

Steps of Black Box Testing


o The black box test is based on the requirements specification, so the specification is
examined in the beginning.
o In the second step, the tester creates a positive test scenario and an adverse test scenario
by selecting valid and invalid input values to check whether the software processes them
correctly.
o In the third step, the tester develops various test cases using techniques such as decision
tables, all-pairs testing, equivalence partitioning, error guessing, cause-effect graphing, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual output.
o In the sixth and final step, if there is any flaw in the software, it is fixed and tested
again.

Techniques Used in Black Box Testing


Decision Table Technique: A systematic approach in which various input combinations and their
respective system behavior are captured in tabular form. It is appropriate for functions that
have a logical relationship between two or more inputs.

Boundary Value Technique: Used to test boundary values, i.e., the values at the upper and lower
limits of a variable. It checks whether the software produces correct output when a boundary
value is entered.

State Transition Technique: Used to capture the behavior of a software application when different
input values are given to the same function. It applies to applications that provide a specific
number of attempts to access the application.

All-pair Testing Technique: Used to test all possible discrete combinations of values. This
combinational method is used for testing applications that use checkbox, radio button, list box,
text box, and similar inputs.

Cause-Effect Technique: Underlines the relationship between a given result and all the factors
affecting that result. It is based on a collection of requirements.

Equivalence Partitioning Technique: A technique in which input data is divided into partitions of
valid and invalid values, and all values within a partition are expected to exhibit the same
behavior.

Error Guessing Technique: A technique with no specific method for identifying errors. It is based
on the experience of the test analyst, who uses that experience to guess the problematic areas of
the software.

Use Case Technique: Used to identify test cases from the beginning to the end of the system, as
per the usage of the system. Using this technique, the test team creates test scenarios that can
exercise the entire software based on the functionality of each feature from start to end.
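The boundary value and equivalence partitioning techniques can be illustrated with a short sketch; the validation rule (an age field accepting 18 to 60 inclusive) is an assumed example:

```python
# Hypothetical age field that accepts values from 18 to 60 inclusive
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitions: one representative value per partition
assert is_valid_age(10) is False   # invalid partition below the range
assert is_valid_age(35) is True    # valid partition
assert is_valid_age(75) is False   # invalid partition above the range

# Boundary values: on, just below, and just above each limit
for value, expected in [(17, False), (18, True), (19, True),
                        (59, True), (60, True), (61, False)]:
    assert is_valid_age(value) is expected
```

Three partition representatives plus six boundary values give nine black box test cases without any knowledge of how the check is implemented.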

Types of Black Box Testing

Black box testing is further categorized into two parts, which are discussed below:

o Functional Testing
o Non-functional Testing

Types of Functional Testing

Just like other types of testing, functional testing is also classified
into various categories.

The types of Functional Testing include the following:


o Unit Testing
o Integration Testing
o System Testing

Types of Non-functional Testing

Non-functional testing is categorized into different types of testing, which we are going to discuss
further:

o Performance Testing
o Usability Testing
o Compatibility Testing

1. Performance Testing

In performance testing, the test engineer will test the working of an application by applying some
load.

In this type of non-functional testing, the test engineer focuses on aspects such
as the response time, load, scalability, and stability of the software or application.

Classification of Performance Testing

Performance testing includes the various types of testing, which are as follows:

o Load Testing
o Stress Testing
o Scalability Testing
o Stability Testing

Load Testing:
Load Testing is a non-functional software testing process in which the performance of software
application is tested under a specific expected load. It determines how the software application
behaves while being accessed by multiple users simultaneously. The goal of load testing is to
identify performance bottlenecks and to ensure the stability and smooth functioning of the
software application before deployment.

This testing usually identifies –

 The maximum operating capacity of an application
 Whether the current infrastructure is sufficient to run the application
 Sustainability of the application with respect to peak user load
 The number of concurrent users that an application can support, and the scalability to allow
more users to access it

It is a type of non-functional testing. In Software Engineering, Load testing is commonly used for
the Client/Server, Web-based applications – both Intranet and Internet.
Examples of load testing include

 Downloading a series of large files from the internet
 Running multiple applications on a computer or server simultaneously
 Assigning many jobs to a printer in a queue
 Subjecting a server to a large amount of traffic
 Writing and reading data to and from a hard disk continuously
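As a rough sketch of a load test, assuming `handle_request` is a stand-in for a real server call, one can simulate concurrent users with a thread pool and measure response times (a real load test would normally use a dedicated tool such as JMeter):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for one user request; returns its response time."""
    start = time.perf_counter()
    sum(range(10_000))                      # simulated server work
    return time.perf_counter() - start

# Simulate 50 concurrent users (the expected load) in one burst
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(handle_request, range(50)))

print(f"avg: {sum(timings) / len(timings):.6f}s, max: {max(timings):.6f}s")
```

Comparing the average and worst response times against the required thresholds is how load testing verifies behavior under the expected load rather than at the breaking point.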

Stress Testing: Stress testing is defined as the process of testing hardware or software for its
stability under heavy load. This testing is done to find the numerical point at which the
system will break (in terms of the number of users, server requests, etc.) and to check the related
error handling at that point.
During Stress testing, the application under test (AUT) is bombarded with a heavy load for a
given period of time to verify the breaking point and to see how well error handling is done.

Example: MS Word may give a ‘Not Responding’ error message when you try to copy a 7-8 GB
file.
You have bombarded Word with a huge file; it could not process such a big file and, as a
result, it hangs. We normally kill apps from the Task Manager when they stop responding;
the reason is that the apps get stressed and stop responding.

Following are some technical reasons behind performing Stress testing:


 To verify the system behavior under abnormal or extreme load conditions.
 To find the numerical value of users, requests, etc., beyond which the system may break.
 To handle errors gracefully by showing appropriate messages.
 To be well prepared for such conditions and take precautionary measures like code
cleaning, DB cleaning, etc.
 To verify data handling before the system breaks, i.e., to check whether data was deleted,
saved, etc.
 To verify security threats under such breaking conditions.
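The "numerical breaking point" above can be sketched as a load ramp that doubles the load until the system fails; `toy_system` and its 10,000-item limit are invented purely for illustration:

```python
# Ramp up the load until the operation fails, recording the breaking point
def find_breaking_point(operation, start=1, factor=2, limit=1_000_000):
    load = start
    while load <= limit:
        try:
            operation(load)
        except Exception:
            return load          # first load level at which the system breaks
        load *= factor
    return None                  # no break within the tested range

# Toy system that "breaks" above 10,000 simultaneous items
def toy_system(n):
    if n > 10_000:
        raise MemoryError("overloaded")

print(find_breaking_point(toy_system))  # → 16384
```

The returned value is the numerical point the definition refers to; a real stress test would also record how gracefully errors were handled at that level.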

Enlisted below are some Examples of real cases where it is of high importance to stress test an
app or website:

#1) Commercial shopping apps or websites need to perform stress testing as the load becomes
very high during festivals, sale or special offer period.
#2) Financial apps or websites need stress testing, as the load increases at times such as when
a company's share price rises and many people log in to their accounts to buy or sell, or when
online shopping websites redirect 'net-bankers' for payment.
#3) Web or emailing apps need to be stress tested.
#4) Social networking websites or apps, blogs etc., need to be stress tested etc.
Difference Between Load Testing and Stress Testing

1. Stress testing is done to find the breaking point of the system; load testing is done to verify
the performance of the system under an expected load.
2. Stress testing checks whether the system behaves as expected when the load goes beyond the
normal limit; load testing checks the response time of the server for the expected specific load.
3. Error handling is also verified in stress testing; in load testing, error handling is not intensely
tested.
4. Stress testing also checks for security threats, memory leaks, etc.; no such testing is
mandatory in load testing.
5. Stress testing checks the stability of the system; load testing checks the reliability of the
system.
6. Stress testing is done with more than the maximum possible number of users, requests, etc.;
load testing is done with the maximum number of users, requests, etc.

2. Usability Testing

The primary purpose of executing usability testing is to check that the application is
easy to use for the end user who is meant to use it, while satisfying the client's specified
functional and business requirements.

Usability testing ensures that the developed software is straightforward to use
without problems and makes the end user's life easier.

In other words, usability testing is one of the distinct testing techniques that
identifies defects in the end-user interaction with a software product. That is why it is also
known as User Experience (UX) Testing.

It helps us fix several usability problems in a specific website or application, ensuring
its quality and functionality.

Why do we need to perform Usability Testing?

We need usability testing because its purpose is to build a system with a great user
experience. Usability is not only relevant to software or website development; it is
also used in product design.

Usability testing is a method of testing the functionality of a website, app, or other digital
product by observing real users as they attempt to complete tasks on it. The users are usually
observed by researchers working for the business.

The goal of usability testing is to reveal areas of confusion and uncover opportunities to improve
the overall user experience.

Why is usability testing important?

Usability testing is done by real-life users, who are likely to reveal issues that people familiar
with a website can no longer identify—very often, in-depth knowledge can blind designers,
marketers, and product owners to a website's usability issues.

Bringing in new users to test your site and/or observing how real people are already using it are
effective ways to determine whether your visitors:
 Understand how your site works and don't get 'lost' or confused
 Can complete the main actions they need to
 Don't encounter usability issues or bugs 
 Have a functional and efficient experience
 Notice any other usability problems

Customers must be comfortable with your application in terms of the following parameters.

o The flow of the application should be good
o Navigation steps should be clear
o Content should be simple
o The layout should be clear
o Response time should be acceptable

We can also test different aspects in usability testing, such as:

o How easy it is to use the application
o How easy it is to learn the application

Maintainability testing

It basically defines how easy it is to maintain the system, i.e., how easy it is to
analyze, change, and test the application or product.

The term maintainability corresponds to the ability to update or modify the system under test.
This is a very important parameter as the system is subjected to changes throughout the software
life cycle.

The maintainability testing shall be specified in terms of the effort required to effect a change
under each of the following four categories:

 Corrective maintenance – Correcting problems. The maintainability of a system can be
measured in terms of the time taken to diagnose and fix problems identified within that
system.
 Perfective maintenance –  Enhancements. The maintainability of a system can also be
measured in terms of the effort taken to make required enhancements to that system. This
can be tested  by recording the time taken to achieve a new piece of identifiable
functionality such as a change to the database, etc. A number of similar tests should be
run and an average time calculated. The outcome will be that it is possible to give an
average effort required to implement specified functionality. This can be compared
against a target effort and an assessment made as to whether requirements are met.
 Adaptive maintenance – Adapting to changes in the environment. The maintainability of a
system can also be measured in terms of the effort required to make required adaptations
to that system. This can be measured in the way described above for perfective
maintainability testing.
 Preventive maintenance – Actions taken to reduce future maintenance costs.
Portability Testing
The test results obtained from Portability Testing help in finding out how easily a software
component from one environment can be used in another environment.
The term ‘environment’ refers to moving from one operating system to another operating
system, one browser to another browser or from one database version to another database
version.

A major rule of thumb of Portability Testing is that it is to be used only if the software
component is to be moved from one environment to another.

Difference between Portability and Compatibility Testing

The points given below will briefly distinguish the differences between Portability and
Compatibility.
=> Compatibility deals with whether two or more components can be run in the same
environment at the same time without adversely affecting the behavior of each other.
Example: A word processor and a calculator running on the same OS such as Windows 10 can
be said to be compatible with each other as running one application will not affect the behavior
of the other application.
=> Portability deals with moving the component from one environment to another.
Example: A game running on Windows XP is said to be portable if the same game can be run on
Windows 7 without any change in the behavior of the game.
=> In short, portability testing deals with software components across multiple environments,
while compatibility testing deals with testing two different applications in the same environment.

The following are the objectives of this testing:


 Determine if a system can be ported to each of the environmental characteristics, such as
Processor speed, Disk space & RAM, monitor resolution, OS and browser versions.
 Determine if the look and feel of the application with respect to UI and functional
features are similar to multiple OS and multiple browsers.
 This testing helps to determine if the system can be ready for release, especially when
there is an awareness that the customers of the product will use multiple operating
systems with multiple browser versions.
 This testing is usually performed against a pre-defined set of portability requirements,
which help to find the defects that are missed as part of the unit and integration testing of
the application.
 Defects found in this testing need to be fixed and delivered as a part of the product release
by the Developers.
 This testing is generally performed in an incremental manner throughout the software
development lifecycle.
 Help determine the extent to which the system is ready for launch.

Smoke Testing
Smoke Testing is a software testing technique performed after a software build to verify that the
critical functionalities of the software are working fine. It is executed before any detailed functional
or regression tests. The main purpose of smoke testing is to reject a defective software
build so that the QA team does not waste time testing a broken application.
In smoke testing, the test cases are chosen to cover the most important functionality or components
of the system. The objective is not to perform exhaustive testing, but to verify that the critical
functionalities of the system are working fine.
For Example, a typical smoke test would be – Verify that the application launches successfully,
Check that the GUI is responsive … etc.
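Such a smoke suite might be sketched as follows; the `App` class and its critical paths (launch and login) are hypothetical stand-ins for a real build:

```python
# Hypothetical application build, used only to illustrate the idea
class App:
    def launch(self):
        return True

    def login(self, user, password):
        return bool(user and password)

def run_smoke_tests(app):
    """Run only the critical-path checks; return the names that failed."""
    checks = {
        "launch": app.launch(),
        "login": app.login("qa", "secret"),
    }
    return [name for name, ok in checks.items() if not ok]

# An empty result means the build is stable enough for deeper testing
assert run_smoke_tests(App()) == []
```

If any critical check fails, the build is rejected immediately, before any detailed functional or regression testing is attempted.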

What is Sanity Testing?


Sanity testing is a kind of Software Testing performed after receiving a software build, with
minor changes in code, or functionality, to ascertain that the bugs have been fixed and no further
issues are introduced due to these changes. The goal is to determine that the proposed
functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time
and costs involved in more rigorous testing.

The objective is “not” to verify thoroughly the new functionality but to determine that the
developer has applied some rationality (sanity) while producing the software. For instance, if
your scientific calculator gives the result of 2 + 2 =5! Then, there is no point testing the advanced
functionalities like sin 30 + cos 50.

Smoke Testing Vs Sanity Testing – Key Differences


Following is the difference between Sanity and Smoke testing:

 Smoke testing is performed to ascertain that the critical functionalities of the program are
working fine; sanity testing is done to check that new functionality works and bugs have been
fixed.
 The objective of smoke testing is to verify the "stability" of the system in order to proceed with
more rigorous testing; the objective of sanity testing is to verify the "rationality" of the system
for the same purpose.
 Smoke testing is performed by developers or testers; sanity testing is usually performed by
testers.
 Smoke testing is usually documented or scripted; sanity testing is usually undocumented and
unscripted.
 Smoke testing is a subset of acceptance testing; sanity testing is a subset of regression testing.
 Smoke testing exercises the entire system from end to end; sanity testing exercises only a
particular component of the system.
 Smoke testing is like a general health check-up; sanity testing is like a specialized health
check-up.

Points to note.

 Both Sanity and Smoke testing are ways to avoid wasting time and effort by quickly
determining whether an application is too flawed to merit any rigorous testing. 
 Smoke Testing is also called tester acceptance testing.
 Smoke testing performed on a particular build is also known as a build verification test.
 One of the best industry practices is to conduct a daily build and smoke test in software
projects.
 Both smoke and sanity tests can be executed manually or using an automation tool.
When automated tools are used, the tests are often initiated by the same process that
generates the build itself.
 As per the needs of testing, you may have to execute both Sanity and Smoke Tests in the
software build. In such cases, you will first execute Smoke tests and then go ahead with
Sanity Testing. In industry, test cases for Sanity Testing are commonly combined with
those for smoke tests, to speed up test execution. Hence, it is common that the terms are
often confused and used interchangeably.

17. Localization & Internationalization

Localization testing

Localization testing is a part of the software testing process focused on the internationalization and
localization aspects of software. Localization is the process of adapting a globalized application
to a particular culture/locale. Localizing an application requires a basic understanding of the
character sets typically used in modern software development and an understanding of the issues
associated with them. Localization includes the translation of the application user interface and
adapting graphics for a specific culture/locale. The localization process can also include
translating any help content associated with the application.

Localization of business solutions requires that you implement the correct business processes and
practices for a culture/locale. Differences in how cultures/locales conduct business are heavily
shaped by governmental and regulatory requirements. Therefore, localization of business logic
can be a massive task.

Localization testing checks how well the build has been translated into a particular target
language. This test is based on the results of globalized testing where the functional support for
that particular locale has already been verified. If the product is not globalized enough to support
a given language, you probably will not try to localize it into that language in the first place!

You still have to check that the application you're shipping to a particular market really works
and the following section shows you some of the general areas on which to focus when
performing a localization test.

The following needs to be considered in localization testing:

 Things that are often altered during localization, such as the user interface and content
files.
 Operating System
 Keyboards
 Text Filters
 Hot keys
 Spelling Rules
 Sorting Rules
 Upper and Lower case conversions
 Printers
 Size of Papers
 Mouse
 Date formats
 Rulers and Measurements
 Memory Availability
 Voice User Interface language/accent
 Video Content
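As one small, concrete case of the date-format item above, the same date must render in each target locale's expected format; the locale-to-format mapping here is illustrative, not exhaustive:

```python
from datetime import date

d = date(2023, 3, 14)

# Expected renderings per target locale (illustrative mapping, not a
# complete locale database)
formats = {
    "en_US": d.strftime("%m/%d/%Y"),   # 03/14/2023
    "en_GB": d.strftime("%d/%m/%Y"),   # 14/03/2023
    "ja_JP": d.strftime("%Y/%m/%d"),   # 2023/03/14
}

# A localization test compares each rendering with the locale's convention
assert formats["en_US"] == "03/14/2023"
assert formats["en_GB"] == "14/03/2023"
```

The same idea extends to the other items in the list: sorting rules, paper sizes, measurements, and so on each need locale-specific expected values in the test suite.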
