CCS 366 STA Unit 1

This document provides an overview of the foundations of software testing, including its importance, processes, and methodologies. It discusses various testing types such as manual and automated testing, as well as verification and validation processes. Additionally, it outlines the Software Testing Life Cycle (STLC) and key terminologies related to software testing.

UNIT I

FOUNDATIONS OF SOFTWARE TESTING

Introduction

In this chapter we discuss the fundamentals of testing: why testing is
required, its limitations, aims and purposes, as well as the guiding principles,
step-by-step methods and psychological concerns that testers must keep in
mind. After completing this chapter, we will be able to explain the
fundamentals of testing.

Software testing is a method for determining whether the actual software meets
its requirements and is free of defects. It involves executing software or system
components, manually or automatically, in order to evaluate one or more of
their characteristics. The aim of software testing is to find faults, gaps or
unfulfilled requirements in comparison to the documented specifications.

TESTING PROCESS

Testing is an important aspect of the software development life cycle. It is
essentially the process of exercising newly developed software prior to its actual
use. The program is executed with the desired input and the output is observed.
The observed output is compared with the expected output. If both are the
same, the program is said to be correct as per the specifications; otherwise,
there is something wrong somewhere in the program.
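The comparison of observed and expected output described above can be sketched as a minimal check; the program under test and the chosen values are hypothetical:

```python
def add(a, b):
    # Program under test: intended to return the sum of its inputs.
    return a + b

def run_test(func, test_input, expected_output):
    # Execute the program with the desired input and observe the output.
    observed = func(*test_input)
    # Compare the observed output with the expected output.
    if observed == expected_output:
        return "PASS"  # program behaves as per specification for this input
    return f"FAIL: expected {expected_output}, observed {observed}"

print(run_test(add, (2, 3), 5))  # prints PASS
print(run_test(add, (2, 3), 6))  # prints FAIL: expected 6, observed 5
```

The key point is that a test needs both an input and a predetermined expected output; without the latter, the observed behaviour cannot be judged correct or incorrect.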
What is Software Testing?
Software testing can be stated as the process of verifying and validating that
a software product or application is bug-free, meets the technical requirements
set by its design and development, and meets the user requirements effectively
and efficiently by handling all exceptional and boundary cases.

The process of software testing aims not only at finding faults in the existing
software but also at finding measures to improve the software in terms of
efficiency, accuracy and usability. It mainly aims at measuring the specification,
functionality and performance of a software program or application.

Why Should We Test?


Software testing is a very expensive and critical activity, but releasing software
without testing is definitely more expensive and dangerous; no one would want to
do it. It is like driving a car without brakes. Hence testing is essential; but how
much testing is required? Do we have methods to measure it? Do we have
techniques to quantify it? The answer is not easy. All projects differ in
nature and functionality, and a single yardstick may not be helpful in every
situation. It is a unique area with altogether different problems.

When to release the software is a very important decision, and economics generally
plays an important role. We should try to find more errors in the early phases of
software development. The cost of removing such errors is very reasonable
compared to that of errors found in the later phases of software
development. The cost of fixing errors increases drastically from the specification
phase to the test phase, and finally to the maintenance phase, as shown in the
figure below.

Who Should Do the Testing?


Testing a software system may not be the responsibility of a single person.
It is actually team work, and the size of the team depends on the complexity,
criticality and functionality of the software under test. The roles of the persons
involved during development and testing are given in Table 1.6.

SOME TERMINOLOGIES

Program and Software


The software is the superset of the program(s). It consists of one or many
program(s), documentation manuals and operating procedure manuals. These
components are shown in Figure 1.6.

The program is a combination of source code and object code. Every phase of the
software development life cycle requires the preparation of a few documentation
manuals, which are shown in Figure 1.7. These are very helpful for development
and maintenance activities.
Operating procedure manuals consist of instructions to set up, install, use and
maintain the software. The list of operating procedure manuals / documents is
given in Figure 1.8.

Verification and Validation


Verification:
“It is the process of evaluating the system or component to determine whether the
products of a given development phase satisfy the conditions imposed at the start
of that phase.”

Validation:
“It is the process of evaluating a system or component during or at the end of the
development process to determine whether it satisfies the specified requirements.”
Testing includes both verification and validation.
Thus Testing = Verification + Validation

Test, Test Case and Test Suite


The terms test and test case are synonyms and may be used interchangeably. A test
case consists of the inputs given to the program and its expected outputs. Inputs may
also contain precondition(s) (circumstances that hold prior to test case execution),
if any, along with the actual inputs identified by some testing method. The expected
output may contain post-condition(s) (circumstances that hold after the execution of
a test case), if any, along with the outputs that should result when the selected
inputs are given to the software. The template for a typical test case is given in
Table 1.7.
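A test case with preconditions, inputs, expected output and post-conditions can be represented as a simple record. The field names and values below are illustrative assumptions, not the exact template of Table 1.7:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Illustrative structure; the field names are assumptions,
    # not the exact columns of Table 1.7.
    test_id: str
    preconditions: list       # circumstances that must hold before execution
    inputs: dict              # actual inputs identified by a testing method
    expected_output: object   # output expected for the selected inputs
    postconditions: list = field(default_factory=list)  # circumstances after execution

tc = TestCase(
    test_id="TC-01",
    preconditions=["user account exists"],
    inputs={"username": "alice", "password": "secret"},
    expected_output="login successful",
)
print(tc.test_id, tc.expected_output)
```

A test suite is then simply a collection of such records, executed together.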

Static and Dynamic Testing


Static testing refers to testing activities without executing the source code. All
verification activities like inspections, walkthroughs, reviews, etc. come under this
category of testing. Dynamic testing refers to executing the source code and seeing
how it performs with specific inputs.
Testing and Debugging
The purpose of testing is to find faults and find them as early as possible. When we
find any such fault, the process used to determine the cause of this fault and to
remove it is known as debugging. These are related activities and are carried out
sequentially.
What are the Benefits of Software Testing?
The following are advantages of employing software testing :
• Cost-effectiveness : Timely testing of any IT project enables long-term
financial savings. If flaws are found earlier in the software testing process,
fixing them is less expensive.
• Security : This is a critical and delicate advantage of software testing.
People look for reliable products, and testing helps remove risks and issues
early.
• Product quality : Any software product must meet this criterion. Testing
guarantees that buyers get a high-quality product.
• Customer satisfaction : Providing consumers with satisfaction is the primary
goal of every product. The optimum user experience is ensured through
UI/UX testing.
Types of Software Testing
• Manual testing :

The process of checking the functionality of an application as per the customer's
needs without the help of automation tools is known as manual testing.
While performing manual testing on an application, we do not need specific
knowledge of any testing tool; rather, we need a proper understanding of the
product so that we can easily prepare the test documents.

Manual testing can be further divided into three types of testing, which
are as follows :
White box testing
Black box testing
Gray box testing.

• Automation testing :

Automation testing is the process of converting manual test cases into
test scripts with the help of automation tools or a programming language.
With the help of automation testing, we can increase the speed of test
execution, because no human effort is required during execution; we only
need to write the test scripts and execute them.
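A manual test case turned into an automated test script might look like the following sketch, using Python's built-in unittest framework; the function under test and its rule are hypothetical:

```python
import unittest

def is_valid_username(name):
    # Hypothetical function under test: usernames must be
    # 3 to 15 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 15

class TestUsernameValidation(unittest.TestCase):
    # Each test method is an automated version of a manual test case.
    def test_valid_username(self):
        self.assertTrue(is_valid_username("alice99"))

    def test_too_short(self):
        self.assertFalse(is_valid_username("ab"))

    def test_non_alphanumeric(self):
        self.assertFalse(is_valid_username("alice!"))

if __name__ == "__main__":
    # Load and run the suite explicitly.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestUsernameValidation)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Once written, such a script can be re-executed on every build with no human effort, which is the speed advantage described above.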
Black-Box Testing and White-Box Testing
Black box testing (also called functional testing) is testing that ignores the
internal mechanism of a system or component and focuses solely on the outputs
generated in response to selected inputs and execution conditions. White box
testing (also called structural testing and glass box testing) is testing that takes
into account the internal mechanism of a system or component.
What is white-box testing?
White-box testing is the technique in which the tester is aware of the internal
workings of the product, has access to its source code, and tests by making sure
that all internal operations are performed according to the specifications.
The phrase "white box" is employed because of the internal viewpoint of the
system. The terms "clear box", "white box" and "transparent box" refer to the
capability of seeing the software's inner workings through its exterior layer.
Developers carry it out before sending the program to the testing team, which
then conducts black box testing. Testing the infrastructure of the application is
the primary goal of white-box testing, as it covers unit testing and integration
testing, which are performed at the lower levels. Given that it primarily focuses
on the code structure, paths, conditions and branches of a program or piece of
software, it necessitates programming skills. Focusing on the inputs and outputs
through the program and enhancing its security are the main objectives of
white-box testing.
It is also referred to as transparent testing, code-based testing, structural
testing and clear box testing. It is a good fit and is recommended for testing
algorithms.
Types of White Box Testing in Software Testing
White box testing is a type of software testing that examines the internal
structure and design of a program or application. The following are some
common types of white box testing :
• Unit testing : Tests individual units or components of the software to
ensure they function as intended.
• Integration testing : Tests the interactions between different units or
components of the software to ensure they work together correctly.
• Performance testing : Tests the performance of the software under
various loads and conditions to ensure it meets performance requirements.
• Security testing : Tests the software for vulnerabilities and weaknesses to
ensure it is secure.
• Code coverage testing : Measures the percentage of code that is executed
during testing to ensure that all parts of the code are tested.
• Regression testing : Tests the software after changes have been made to
ensure that the changes did not introduce new bugs or issues.
Techniques of White Box Testing
The following techniques are used for white box testing :
• Statement coverage : This testing approach involves going over every
statement in the code to make sure that each one has been run at least once. As a
result, the code is checked line by line.
• Branch coverage : A testing approach in which test cases are created to
ensure that each branch is tested at least once. This method examines all
potential configurations of the system.
• Path coverage : A software testing approach that defines and
covers all potential paths. Paths are sequences of statements that may be
executed from system entry to exit points. It takes a lot of time.
• Loop testing : With the help of this technique, loops and values in both
independent and dependent code are examined. Errors often happen at the start
and end of loops. This method includes the following :
• Concatenated loops
• Simple loops
• Nested loops

Basis path testing : Using this methodology, control flow diagrams are created
from code, and cyclomatic complexity (counting the number of decision points
in the source code) is then calculated.
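As a sketch of the coverage techniques above, consider this hypothetical function. It has two decision points, so its cyclomatic complexity is 2 + 1 = 3, and basis path testing needs three test cases; the comments trace how the same tests relate to statement and branch coverage:

```python
def grade(score):
    # Two decision points (the two if conditions): cyclomatic
    # complexity = 2 + 1 = 3, so there are three independent paths:
    #   path 1: score >= 75          -> "distinction"
    #   path 2: 50 <= score < 75     -> "pass"
    #   path 3: score < 50           -> "fail"
    if score >= 75:
        return "distinction"
    if score >= 50:
        return "pass"
    return "fail"

# Statement coverage requires every statement to run at least once; since
# each return is a distinct statement, all three outcomes must be exercised.
assert grade(80) == "distinction"  # path 1
assert grade(60) == "pass"         # path 2
assert grade(40) == "fail"         # path 3
# For this simple structure, the same three tests also achieve full branch
# coverage (both outcomes of each condition) and cover every basis path.
```

For code with loops or compound conditions, path coverage grows much faster than branch coverage, which is why path coverage "takes a lot of time".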

Advantages of White Box Testing


• Complete coverage.
• Better understanding of the system.
• Improved code quality.
• Increased efficiency.
• Early detection of errors.
Disadvantages of White Box Testing
• This testing is very expensive and time-consuming.
• A redesign of the code requires the test cases to be written again.
• Missing functionalities cannot be detected.
• This technique can be very complex and at times not realistic.
• White-box testing requires a programmer with a high level of knowledge due to
the complexity of the level of testing that needs to be done.
Black-Box Testing
What is black-box testing?
Black Box Testing is a software testing technique where the internal workings
(code, algorithms, architecture) of the software are hidden from the tester. Instead,
the focus is on verifying the software's functionality, performance, and user
experience based on the input and expected output.

Every dot in the input domain represents a set of inputs and every dot in the output
domain represents a set of outputs. Every set of input(s) will have a corresponding
set of output(s). The test cases are designed on the basis of user requirements
without considering the internal structure of the program. This black box
knowledge is sufficient to design a good number of test cases.

Types of Black Box Testing


Black box testing can be applied to three main types of tests : functional,
non-functional and regression testing.
1. Functional testing (verifies software functions) :
Black box testing may verify specific aspects or operations of the program being
tested; for example, making sure that correct user credentials can be used to
log in and that incorrect ones cannot.
Functional testing concentrates on the most important features of the
program (smoke testing / sanity testing), on how well the system works as a whole
(system testing), or on the integration of its essential components.

2. Non-functional testing (checks performance, usability and security) :

Beyond features and functionality, black box testing allows for the inspection of
additional software characteristics. A non-functional test examines "how", rather
than "if", the program can carry out a certain task.
Black box testing may determine whether the software is :
• Usable and simple for its users to comprehend;
• Performant under expected or peak loads;
• Compatible with relevant devices, screen sizes, browsers or operating systems;
• Exposed to security flaws or common security threats.
3. Regression testing (ensures new updates don't break existing features) :
Black box testing may be employed to determine whether a new software version
displays a regression, i.e. a decrease in capabilities, from one version to the
next. Regression testing may be used to evaluate both functional and
non-functional features of the program, for example when a particular feature no
longer functions as expected in the new version, or when a formerly fast action
becomes much slower in the new version.
Black Box Testing Techniques
1. Equivalence Partitioning

• Purpose: Reduces the number of test cases by dividing input data into logical
partitions/groups.
• Approach: Test one value from each partition, assuming all values in the
partition will behave similarly.
• Example:
• Input age: valid range (18–60)
• Partitions:
• Valid: 18–60
• Invalid: below 18, above 60
• Test values: 17 (invalid), 30 (valid), 61 (invalid)

2. Boundary Value Analysis (BVA)

• Purpose: Tests the boundaries of input ranges, where errors are most likely
to occur.
• Approach: Test cases are designed at the edges of valid and invalid ranges.
• Example:
• Input Age: Valid range (18–60)
• Test Values: 17 (just below), 18 (lower boundary), 60 (upper
boundary), 61 (just above)
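The two techniques above can be combined on the same hypothetical age check; the partition representatives and the boundary values together give a compact test set:

```python
def is_valid_age(age):
    # Hypothetical validator: accepts ages in the valid range 18-60 inclusive.
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert is_valid_age(30) is True    # valid partition 18-60
assert is_valid_age(10) is False   # invalid partition below 18
assert is_valid_age(70) is False   # invalid partition above 60

# Boundary value analysis: values at and just outside the edges.
assert is_valid_age(17) is False   # just below the lower boundary
assert is_valid_age(18) is True    # lower boundary
assert is_valid_age(60) is True    # upper boundary
assert is_valid_age(61) is False   # just above the upper boundary
```

Seven test values here replace the forty-three values in the 18–60 range plus all invalid values, which is exactly the economy these techniques aim for.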

3. Decision Table Testing

• Purpose: Tests combinations of different inputs and their corresponding
outcomes.
• Approach: Represent rules and conditions in a tabular format.
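The decision table itself is not reproduced in the text. As an illustrative sketch, a hypothetical login check with two conditions (valid username, valid password) yields four rules, and each column of the table becomes one test case:

```python
# Hypothetical login rules written out as a decision table:
#   Rule 1: username valid,   password valid   -> "login"
#   Rule 2: username valid,   password invalid -> "error"
#   Rule 3: username invalid, password valid   -> "error"
#   Rule 4: username invalid, password invalid -> "error"
def login_outcome(username_ok, password_ok):
    return "login" if username_ok and password_ok else "error"

# One test case per rule (column) of the table:
assert login_outcome(True, True) == "login"    # rule 1
assert login_outcome(True, False) == "error"   # rule 2
assert login_outcome(False, True) == "error"   # rule 3
assert login_outcome(False, False) == "error"  # rule 4
```

With n independent boolean conditions the full table has 2^n rules, so in practice rules with identical outcomes are often merged.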

Black box testing vs. white box testing :

• Black box testing is a method of software testing in which the code, program
and internal structure are concealed and unknown to the tester; white box
testing is a method in which the tester is familiar with the software's internal
structure, code and program.
• Black box testing does not need knowledge of the code implementation; white
box testing requires it.
• Black box testing is mostly done by software testers; white box testing is
mostly done by software developers.
• Black box testing needs no knowledge of the implementation; in white box
testing, knowledge of the implementation is required.
• Black box testing can be referred to as outer or external software testing;
white box testing is inner or internal software testing.
• Black box testing is a functional test of the software; white box testing is a
structural test of the software.
• Black box testing can be initiated based on the requirement specification
document; white box testing is started after a detailed design document.
• Black box testing requires no knowledge of programming; for white box testing,
knowledge of programming is mandatory.
• Black box testing is behavior testing of the software; white box testing is
logic testing of the software.
• Black box testing is applicable to the higher levels of software testing;
white box testing is generally applicable to the lower levels.
• Black box testing is also called closed testing; white box testing is also
called clear box testing.
• Black box testing is the least time consuming; white box testing is the most
time consuming.
• Black box testing is not suitable or preferred for algorithm testing; white
box testing is suitable for algorithm testing.
• Black box testing can be done by trial-and-error methods; white box testing
can better test data domains along with inner or internal boundaries.
• Example of black box testing : searching for something on Google using
keywords. Example of white box testing : checking and verifying loops by input.
• Black-box test design techniques include decision table testing, all-pairs
testing, equivalence partitioning and error guessing; white-box test design
techniques include control flow testing, data flow testing and branch testing.
• Types of black box testing : functional, non-functional and regression
testing. Types of white box testing : path, loop and condition testing.
• Black box testing is less comprehensive compared to white box testing; white
box testing is comparatively more comprehensive than black box testing.

Software Testing Life Cycle

The Software Testing Life Cycle (STLC) is a systematic process for testing software
to ensure its quality, reliability, and performance. It consists of multiple phases
that guide the testing team from planning to closure.

What is the STLC (Software Testing Life Cycle) ?

The term "Software Testing Life Cycle" refers to a testing procedure with particular
phases that must be carried out in a certain order to guarantee that the quality
objectives have been reached. Each step of the STLC process is completed in a
planned and orderly manner. Goals and deliverables vary for each phase. The STLC
stages vary depending on the organization, but the fundamentals are the same.

Below are the phases of the STLC :

1. Requirements phase

2. Planning phase

3. Analysis phase

4. Design phase

5. Implementation phase

6. Execution phase

7. Conclusion phase

8. Closure phase

1. Requirements phase

Analyse and research the requirements throughout this phase of the STLC.
Participate in brainstorming discussions with other teams to see whether the
requirements can be tested. The scope of the testing is determined at this step.
Inform the team during this phase if any feature cannot be tested, so that a
mitigation approach (identifying potential risks or challenges at each phase and
implementing strategies to minimize or eliminate their impact on the software
testing process) may be prepared.

2. Planning phase

This is the initial stage of the testing procedure in real-world circumstances.
The activities and resources that will help us achieve the testing goals are
identified in this phase. During planning we also strive to determine the
metrics, and the procedure for collecting and monitoring those metrics.

What is the foundation for the planning? Only the requirements?

The answer is no. While the requirements certainly serve as a foundation, there
are two additional highly significant aspects that affect test planning, which
are :

The organization's test strategy.

Risk analysis, risk management and mitigation.

3. Analysis phase

The "WHAT" to be tested is determined in this STLC step. Basically, the
requirements document, product risks and other test bases are used to
determine the test conditions. Each test condition should be traceable back to
a requirement. The determination of test conditions is influenced by a
number of variables, including :

The levels and depth of testing.

The product's complexity.

Project- and product-related risks.

The software development life cycle involved.

Test administration.

We need to make an effort to capture the test conditions accurately in writing.

4. Design phase :

In this step, "HOW" to test is defined. The tasks in this phase include :
describe the test conditions; to enhance coverage, divide the test conditions
into many smaller sub-conditions.

⦁ Locate and collect the test data.

⦁ Identify the test environment and set it up.

⦁ Develop the requirements traceability metrics.

⦁ Produce metrics for test coverage.

5. Implementation phase :

The construction of thorough test cases is the main undertaking in this STLC
phase. Determine the test cases' order of importance and which test cases will be
included in the regression suite. It is crucial to do a review to confirm the
accuracy of the test cases prior to finalizing them. Don't forget to sign off on
the test cases before beginning the actual execution.

If your project incorporates automation, choose the test cases that should be
automated and begin scripting them. Remember to review them!

6. Execution phase :

As its name implies, this is the stage of the software testing life cycle when
actual execution occurs. However, make sure that your entry criterion is
satisfied before you begin execution. Execute the test cases and, in the event
of any discrepancy, report the faults. Fill in the traceability metrics
simultaneously to monitor progress.

7. Conclusion phase :

The exit criteria and reporting are the main topics of this STLC phase. You may
choose whether to send out a daily report or a weekly report, etc., depending on
your project and the preferences of your stakeholders.

The main thing to remember is that the substance of the report varies and
depends on whom you are sending your reports to. There are many sorts of reports
(DSR - Daily Status Report, WSR - Weekly Status Report) that you may send.

Include the technical aspects of the project in your report (number of test cases
succeeded, failed, defects reported, severity 1 problems, etc.) if your project
managers have a testing background since they will be more interested in the
technical side of the project.
However, if you are reporting to higher stakeholders, it's possible that they won't
be interested in the technical details; instead, focus on the risks that the testing
has helped to reduce.

8. Closure phase :

The following tasks are part of the closure activities :

Verify that the testing has been completed: that all test scenarios have been
either executed or intentionally mitigated. Verify that no faults of severity 1
(severity 1 refers to a critical defect or bug identified during the testing
phase) remain open.

Hold meetings to discuss lessons learned and produce a document detailing them.
(Include what worked well, where changes are needed and what could be done
better.)

V-model of Software Testing

The V-model is also known as the verification and validation model. It requires
that each stage of the SDLC be completed before moving on to the next, following
the waterfall model's sequential design approach. Testing of the product is
planned in parallel with the corresponding stage of development.
Verification is a static analysis technique (review) carried out without actually
running any code. The product development process is evaluated to determine
whether the specified criteria are met. Validation comprises dynamic analysis
methods (functional and non-functional), in which testing is done by running the
code. After the development phase, the software is evaluated in the validation
step to see whether it satisfies the needs and expectations of the client.

Therefore, the V-model features validation phases on one side and verification
phases on the other. The coding phase joins the verification and validation
processes in a V-shape; as a result, it is known as the V-model.
There are several stages in the V-model's verification phase :
Business requirement analysis :
This is the initial phase, in which the product needs are understood from the
customer's point of view. To fully comprehend the expectations and precise needs
of the consumer, this step involves comprehensive discussion.
System design :
System engineers analyse and comprehend the business of the proposed system at
this level by studying the user requirements document.
Architecture design :
The first step in choosing an architecture is to have a solid understanding of
everything that will be involved, such as the list of modules, a short description
of each module's operation, the relationships between the modules' interfaces, any
dependencies, database tables, architectural diagrams, technological details,
etc. The integration testing plan corresponds to this stage.
Module design :
The system is divided into manageable modules during the module design
phase. This is the low-level design, which is the specification of the modules'
detailed design.
Coding step :
The coding step is started after designing. The programming language that will
work best is chosen based on the requirements, and coding follows certain rules
and standards. The final build is optimized for better performance before it is
checked into the repository, and the code undergoes several code reviews to
verify its quality.
There are several stages in the V-model's validation phase :
Unit testing :
Unit Test Plans (UTPs) are created in the V-model's module design phase. These
UTPs are run to get rid of problems at the unit or code level. A unit is the
smallest entity that can exist on its own, such as a program module. Unit
testing ensures that even the smallest component operates properly when
separated from the other units.
Integration testing :
Integration test plans are created during the architectural design phase. These
tests verify that independently developed modules can coexist and communicate
with one another.
System testing :
Plans for system tests are created during the system design phase. System
test plans, in contrast to unit and integration test plans, are created by the
client's business team. System testing ensures that the requirements of the
complete application are satisfied.
Acceptance testing :
Acceptance testing is connected to the business requirement analysis phase.
The software product is tested in the user environment. Acceptance tests
highlight any system compatibility issues that may exist within the user
environment. Additionally, they identify non-functional issues, such as load and
performance flaws, in the context of actual user interaction.
When to use the V-model?
When the requirements are well defined and not ambiguous.
The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
The V-shaped model should be chosen when ample technical resources with the
essential technical expertise are available.

Advantages of the V-model :
1. Easy to understand.
2. Testing activities like planning and test designing happen well before coding.
3. This saves a lot of time; hence there is a higher chance of success than with
the waterfall model.
4. Avoids the downward flow of defects.
5. Works well for small projects where requirements are easily understood.
Disadvantages of the V-model :
1. Very rigid and least flexible.
2. Not good for a complex project.
3. Software is developed during the implementation stage, so no early
prototypes of the software are produced.
4. If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.
Program Correctness and Verification
Program correctness

Program correctness refers to the extent to which a software program meets


its intended specifications and behaves as expected.
Purpose of program verification
Program verification is the process of formally confirming that a program
adheres to its specifications to ensure its correctness.

Program correctness vs. program verification :

• Definition : correctness means ensuring the software produces the correct
output; verification means proving the software adheres to its specifications.
• Focus : correctness concerns output correctness and termination; verification
concerns conformance to formal specifications.
• Techniques : correctness uses static analysis, assertions and testing;
verification uses model checking and theorem proving.
• Scope : correctness is limited to specific program behaviors; verification
provides broader system-wide assurance.
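Assertions, one of the correctness techniques listed above, can be sketched as follows; the function and its specification are hypothetical, with preconditions and postconditions checked at run time:

```python
def integer_sqrt(n):
    # Specification: for n >= 0, return the largest integer r with r*r <= n.
    assert n >= 0, "precondition: input must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: the result matches the specification exactly.
    assert r * r <= n < (r + 1) * (r + 1)
    return r

print(integer_sqrt(16))  # prints 4
print(integer_sqrt(17))  # prints 4
```

Testing with assertions checks correctness only for the inputs actually run, whereas formal verification (model checking, theorem proving) argues about all possible inputs at once.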
Reliability versus Safety
Reliability
Reliability is defined as a system or component's capacity to carry out its
essential tasks under stated conditions for a specified amount of time.
Software reliability can also be defined as the probability that a software
system completes its assigned job in a given environment for a predetermined
number of input cases, assuming that the hardware and the input are error-free.

Example:

An online banking application must reliably process thousands of transactions


daily without errors or crashes.

Safety

Safety refers to the software system's ability to operate without causing


unacceptable risk or harm to people, data, or the environment, even in the
presence of faults.
Reliability vs. safety :
• Focus : reliability concerns consistent and error-free performance; safety
concerns the avoidance of harm on failure.
• Objective : reliability aims to minimize system failures and downtime; safety
aims to minimize the risk and consequences of failures.
• Example domain : banking software reliability; self-driving car safety.
Failures, Errors and Faults (Defects)

Software Failure

Definition:

A software failure occurs when a software system does not perform its intended
function or produces incorrect results during execution, violating its specifications
or user expectations.

• Example: A banking app calculates incorrect interest for a savings account


due to a bug in the interest computation module.

Key Characteristics of Software Failure:

Failures are noticeable by users or during system monitoring.

Failures occur when the system is running.

The system produces outputs or behaves in a way not aligned with its
requirements.

Errors

An error is a condition that arises when a developer or member of the
development team does not correctly comprehend a requirement, and this
misunderstanding results in defective code. "Error" is the phrase used to
describe this circumstance, and errors are mostly introduced by the developers.
• Wrong logic, syntax or loops may produce errors that affect the end-user
experience.
• An error is computed as the difference between the expected and actual
outcomes.
• It arises for a variety of causes, such as application challenges brought
on by design, code or system specification problems.

Faults (Defects)

A fault (defect) is introduced when, due to specific conditions such as a lack of
resources or a failure to take the necessary precautions, the logic needed to
handle a problem is not included in the application. Although this is a bad
condition, it often results from incorrectly specified steps or a lack of data
definitions.
• A fault results in unexpected behaviour of an application program and may
produce warnings in the program.
• If a fault isn't fixed, it can prevent the deployed code from functioning
properly.
• In rare circumstances, a tiny fault might cause a high-end error.
• Adopting sound programming methods, development approaches, peer review
and code analysis are a few strategies to avoid faults.

Failure : An observable deviation from expected behavior during program execution. It is the result of executing a fault, occurs in the runtime phase, and is always visible when the faulty code is executed. Example : wrong output displayed in the app.
Error : A human mistake made during design or coding. It stems from human oversight in the design/coding phase and is not directly visible during execution. Example : a misunderstood requirement.
Fault (Defect) : A flaw in the software code or design. It is the result of an error, resides in the code or system, and may not always cause a failure. Example : an incorrect calculation formula.
Software Testing Principles

The Software Testing Principles are fundamental guidelines aimed at ensuring


effective, efficient, and reliable testing practices. Below are the seven
fundamental principles of software testing

Testing shows the presence of defects.


Exhaustive testing is not possible.
Early testing
Defect clustering.
Pesticide paradox.
Testing is context-dependent.
Absence of errors fallacy.

Testing shows the presence of defects :


The test engineer puts the application through testing to find bugs and flaws. Testing can only demonstrate the presence of defects in an application or program, never their absence. The main goal of testing is to find, using a variety of methods and techniques, any flaws that might prevent the product from fulfilling the client's needs, since every test should be traceable back to a customer requirement.

Testing reduces the number of flaws in a program, but this does not imply that the application is defect-free; software may appear bug-free simply because it has not been tested enough. Flaws that were not discovered during testing may then surface only after deployment, when end users encounter them on the production server.
Exhaustive testing is not possible :
It is not feasible to test every possible scenario or input. Instead, testers should focus on risk-based testing and prioritize critical functionalities. During real testing it is usually impractical to exercise every module and feature with every valid and invalid combination of input data.
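Why exhaustive testing is infeasible can be shown with simple arithmetic. The sketch below assumes a function with just two 32-bit integer inputs and an optimistic, hypothetical test rate of a billion cases per second:

```python
# Two 32-bit integer parameters give 2**32 * 2**32 = 2**64 input combinations.
combinations = 2 ** 64
tests_per_second = 1_000_000_000          # optimistic assumption
seconds_per_year = 60 * 60 * 24 * 365
years_needed = combinations / (tests_per_second * seconds_per_year)
print(f"{years_needed:.0f} years")        # roughly 585 years
```

Even under these generous assumptions the run would take centuries, which is why risk-based selection of test cases is necessary.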
Early testing :
Early testing refers to the idea that testing activities should begin in the early stages of the Software Development Life Cycle (SDLC), starting with requirement analysis, in order to identify defects as soon as possible. If we find bugs at an early stage, we can fix them right away, which usually costs far less than fixing defects discovered in a later phase of the process.
Since testing relies on the requirement definition documents, reviewing them early means that incorrectly specified requirements can be caught and corrected before development, rather than during it.
Defect clustering :

• Defect clustering means that a small number of modules or components


are prone to a higher concentration of defects.
• This pattern often emerges due to complexity, frequent changes, poor
design, or lack of proper testing in certain areas of the software.
• Defect Clustering is one of the fundamental principles of software testing
derived from the Pareto Principle (80/20 Rule), which states:
• "80% of defects are typically found in 20% of the software modules."
• In simpler terms, a small portion of the system is usually responsible for
most of the issues or bugs.

Examples of Defect Clustering:

• A module that is heavily modified over multiple releases.


• A feature that is technically complex and difficult to maintain.

Reasons for Defect Clustering


• Complex Functionality: Some modules are inherently more complex and
prone to errors.
• Frequent Changes: Areas with repeated modifications are more prone to
introducing defects.
• Inadequate Reviews: Poor code reviews or lack of peer validation.
• Tight Deadlines: Rushed development often leads to overlooked defects.
• Human Errors: Developers may unintentionally create errors due to
misunderstandings or fatigue.
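Defect clustering is easy to check from a defect log. The sketch below uses invented module names and counts to compute what share of all defects the top 20% of modules account for, in the spirit of the Pareto principle:

```python
from collections import Counter

# Hypothetical defect counts per module, as extracted from a bug tracker.
defects = Counter({
    "payments": 42, "auth": 31, "reports": 4, "profile": 3,
    "settings": 2, "help": 1, "search": 5, "export": 2,
    "admin": 6, "notifications": 4,
})

total = sum(defects.values())
top_20_percent = max(1, len(defects) // 5)    # 2 of the 10 modules
top_share = sum(n for _, n in defects.most_common(top_20_percent)) / total
print(f"Top {top_20_percent} modules hold {top_share:.0%} of defects")
```

With these sample figures the two most defect-prone modules hold 73% of all defects, close to the 80/20 pattern the principle describes; real projects will of course vary.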

Pesticide paradox :
This principle states that if the same set of test cases is run repeatedly over a period of time, those tests will eventually stop discovering new problems in the program or application.
Reviewing all test cases periodically is essential to overcome the pesticide paradox. In addition, new and different tests must be created to cover more components of the application or program, which helps in the discovery of additional flaws.
Testing is context-dependent :
The Context-Dependent Principle emphasizes that software
testing strategies, tools, and techniques must be adapted based on
the context in which the software operates. In simpler terms:

"There is no one-size-fits-all approach to testing. Different projects


require different testing approaches."

A testing approach suitable for one project (e.g., a banking application)


might not work well for another (e.g., a gaming app).

Best Practices for Context-Dependent Testing

• Understand Project Requirements: Align testing goals with business


objectives.
• Perform Risk Analysis: Prioritize testing high-risk areas.
• Choose the Right Tools: Use tools appropriate for the technology and
project type.
• Adapt to Development Methodology: Match testing approaches to Agile,
Waterfall, or Hybrid models.
• Focus on End-User Perspective: Consider user experience, accessibility, and
usability.

Program Inspections
• Definition: Program inspection is a formal review process where
developers, testers, and stakeholders examine software code or documents
to detect defects, deviations from standards, or areas for improvement.
• Goal: Identify bugs, ambiguities, and inconsistencies early in the software
development lifecycle (SDLC) to reduce costs and improve software quality.

1. Planning :
Define the scope and objectives of the inspection.
Select participants (Author, Moderator, Reviewer, Recorder).
Schedule the inspection meeting.

2. Overview : In the overview phase, a presentation is given to the


inspector with some background information needed to review the
software product properly.
3. Preparation : This is considered an individual activity. In this part of the
process, the inspector collects all the materials needed for inspection,
reviews that material and notes any defects.
4. Meeting : The moderator conducts the meeting. In the meeting, the
defects are collected and reviewed.
5. Rework : The author performs this part of the process in response to
the defects identified during the inspection.
6. Follow-up : In follow-up, the moderator verifies that the corrections
have been made and then compiles the inspection management and
defect summary report.
Characteristics of inspection :
An experienced moderator who is not the author generally directs the
inspection. The task of the moderator is to conduct a document's peer
review.
Inspection is most formal and is guided by rules and checklists.
Entry and exit criteria are used in this evaluation procedure.
Pre-meeting preparation is necessary.
An inspection report is created and sent to the author so they may take
the necessary action.
A formal follow-up procedure is utilized after an inspection to ensure that
remedial action is taken promptly and on schedule.
The purpose of inspection is to bring in improvements to the process,
not only to find problems.
Stages of Testing
Software testing is a systematic process carried out in multiple stages to
ensure that a software application meets its requirements, functions as
intended, and is free of critical defects. Each stage of testing has a specific
goal, focus, and methodology.
Types of Testing Across Stage

• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

Unit Testing
What is unit testing ?
A software development approach known as unit testing involves
checking the functionality of the smallest testable components or units
of an application one by one. Unit tests are carried out by software
developers and sometimes by QA personnel. Unit testing's primary goal
is to isolate a piece of code and verify that it functions as intended.
A crucial phase in the development process is unit testing. If carried out
properly, unit tests may identify coding errors before they become more
difficult to spot during subsequent testing phases.
Unit testing is a part of Test-Driven Development (TDD), a methodical
strategy that carefully constructs a product through ongoing testing and
refinement. Unit testing is also the initial level of software testing, carried
out before additional techniques such as integration testing. To make sure
a unit does not depend on any external code or functionality, unit tests are
typically isolated. Teams should run unit tests often, whether manually or,
more frequently, automatically.
Process of Unit Testing:
• Analyze Requirements: Understand the functionality of the unit.
• Write Test Cases: Create test cases for each scenario (including edge cases).
• Prepare Test Data: Provide appropriate input data for the unit.
• Execute Test Cases: Run tests using a unit testing framework.
• Evaluate Results: Compare actual output with expected results.
• Log Defects: Report any detected defects.
• Refactor Code (if necessary): Fix identified bugs and retest.
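The process above can be sketched with Python's built-in unittest framework. The function under test, apply_discount, is a hypothetical example chosen to show typical, edge, and invalid-input cases:

```python
import unittest

# Unit under test: a hypothetical discount function, kept deliberately small.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_edge_cases(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)    # boundary: no discount
        self.assertEqual(apply_discount(99.99, 100), 0.0)    # boundary: full discount

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Execute the test cases and evaluate the results.
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The framework collects the test cases, runs them, and flags any failures in the result object, which is exactly the flag-and-report behaviour described in the automated-testing discussion below.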

Once every unit in the program operates as effectively and error-free as
feasible, teams can move on to integration testing to assess larger
program components.

Manual vs. automated unit testing :


Unit tests may be run manually or automatically by developers. For those
using a manual technique, an intuitive document outlining each stage of the
process may be developed; however, automation is the most popular
approach for unit testing. Automated methods often create test cases
using a testing framework. In addition to presenting a summary of the test
cases, these frameworks are also configured to flag and report any failed
test cases.
Unit testing advantages :
There are many advantages to unit testing, including the following
• The earlier a problem is discovered, the fewer compound mistakes occur.
• Fixing issues as they arise often is less expensive than waiting until they
become serious.
• Simplified debugging procedures.
• The codebase can be modified easily by developers.
• Code may be transferred to new projects and reused by developers.
Unit testing disadvantages :
While unit testing is integral to any software development and testing strategy;
there are some aspects to be aware of. Disadvantages to unit testing include the
following :
• Not all bugs will be found during tests.
• Unit testing does not identify integration flaws; it just checks data sets
and their functionality.
• To test one line of code, more lines of test code may need to be
developed, which might require additional time.
• To successfully apply unit testing, developers may need to pick up new
skills, such as how to utilize certain automated software tools.
Integration Testing
What is integration testing ?
The second stage of the software testing process, after unit testing, is
known as integration testing. Integration testing is the process of inspecting
various parts or units of a software project to reveal flaws and ensure that
they function as intended.
The typical software project often comprises multiple software modules,
many of which were created by various programmers. Integration testing
demonstrates to the group how effectively these dissimilar components
interact.
Need to perform integration testing
There are many particular reasons why developers should do
integration testing, in addition to the basic reality that they must test all
software programmes before making them available to the general
public.
• Errors might result from incompatibility between programme components.
• Every software module must be able to communicate with the
database
• Every software developer has their own conceptual framework
and coding logic. Integration testing ensures that these diverse
elements work together seamlessly.
• Modules often interface with third-party APIs or tools, thus we require
integration testing to confirm that the data these tools receive is accurate.
• There may be possible hardware compatibility issues.
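An integration test exercises two or more units together across their interface rather than in isolation. The sketch below is a minimal, hypothetical example: a repository module and a greeting formatter, written as if by different developers, checked in combination:

```python
# Unit 1: a simple in-memory data access module.
class UserRepository:
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

# Unit 2: a formatting module that consumes the repository's interface.
def greeting_for(repo, user_id):
    name = repo.find(user_id)
    return f"Hello, {name}!" if name else "Hello, guest!"

# Integration test: verify the units cooperate correctly across the interface,
# including the path where the repository returns no data.
repo = UserRepository()
repo.save(7, "Asha")
assert greeting_for(repo, 7) == "Hello, Asha!"
assert greeting_for(repo, 99) == "Hello, guest!"
print("integration checks passed")
```

A defect such as find raising an exception for a missing user instead of returning None would pass each unit's own tests yet be caught here, which is the kind of interface error integration testing targets.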
Advantages of integration testing
• Integration testing ensures that every integrated module functions
correctly.
• Integration testing uncovers interface errors.
• Testers can initiate integration testing once a module is completed and
doesn't require waiting for another module to be done and ready for
testing.
• Testers can detect bugs, defects and security issues.

Challenges of integration testing :


Unfortunately, integration testing has some difficulties to overcome as well.
• Questions will arise about how components from two distinct systems
produced by two different suppliers will impact and interact with one
another during testing.
• Integrating new and old systems requires extensive testing and possible
revisions.
• Integration testing requires testing not only the integration links but
the environment itself, adding another level of complexity to the
process.
System testing
System testing is a type of software testing performed on a whole, integrated
system to determine whether it complies with the specified requirements. The
components that passed integration testing are used as input for system
testing. Whereas integration testing aims to find inconsistencies between the
integrated components, system testing finds flaws both in the integrated
modules and in the system as a whole. The outcome of system testing is the
observed behaviour of a component or system under test. System testing is
carried out by a testing team that is separate from the development team, and
it helps to assess the system's quality. It covers both functional and
non-functional aspects and is a form of black-box testing. System testing is
carried out after integration testing but before acceptance testing.

Process for system testing :


The steps for system testing are as follows :
Set up the test environment : Establish a test environment that supports
high-quality testing.
Produce test cases : Produce test cases for the testing procedure.
Produce test data : Produce the data that will be put to the test.
Execute test cases : Test cases are executed once the test cases and the test
data have been produced.
Defect reporting : System flaws are discovered and reported.
Regression testing : This technique is used to examine the side effects of
changes made during the testing procedure.
Log defects : In this stage, defects are logged and then corrected.
Retest : If a test fails, it is run again after the defect has been fixed.
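The execute, report, and retest steps above can be sketched as a simple test driver. The test cases and the system-under-test call below are invented for illustration (a hypothetical 5% tax calculation):

```python
# Hypothetical system-under-test entry point.
def system_under_test(amount):
    return amount * 1.05    # applies a 5% tax

# Test cases produced from the test data (step: produce test cases/data).
test_cases = [
    {"name": "small amount", "input": 100, "expected": 105.0},
    {"name": "zero amount", "input": 0, "expected": 0.0},
]

def run_suite():
    """Execute all test cases and collect any defects found."""
    defects = []
    for case in test_cases:
        actual = system_under_test(case["input"])
        if abs(actual - case["expected"]) > 1e-9:
            defects.append((case["name"], case["expected"], actual))
    return defects

failures = run_suite()                    # execute test cases
for name, exp, got in failures:           # defect reporting / log defects
    print(f"DEFECT in '{name}': expected {exp}, got {got}")
if not failures:                          # retest until the suite passes
    print("all system tests passed")
```

After a defect is fixed, running run_suite again performs the retest; rerunning unrelated cases alongside it corresponds to the regression-testing step.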

Types of system testing

• Performance testing : A type of software testing used to evaluate the
speed, scalability, stability, and dependability of software applications and
products.
• Load testing : A type of software testing used to find out how a system
or software product behaves under heavy loads.
• Stress testing : A type of software testing carried out to examine the
system's resilience under varying loads.
• Scalability testing : A type of software testing used to evaluate how well
a system or application can scale up or scale down to handle the volume
of user requests.

Advantages of system testing :

• Testers do not need additional programming experience to perform this
testing.
• It tests the complete product or piece of software, allowing us to quickly
find any faults or flaws that slipped through integration and unit testing.
• The testing environment resembles a real-world production or commercial
setting.
• It addresses the technical and business needs of customers and uses
various test scripts to verify the system's full operation.
• After this testing, practically all potential flaws or faults will have been
fixed, allowing the development team to confidently move on to acceptance
testing.

Disadvantages of system testing :

• Because this testing involves checking the complete product or piece of
software, it takes longer than other testing methods.
• Since the complete piece of software is tested, the cost is considerable.
• Without a proper debugging tool, hidden faults will not be discovered.
