ccs 366 STA Unit 1
Introduction
In this chapter we will discuss the fundamentals of testing: why testing
is required, its limitations, aims and purposes, as well as the guiding principles,
step-by-step methods and the psychological concerns that testers must keep in
mind. After completing this chapter, we will be able to explain the fundamentals
of testing.
Software testing is a method for determining whether the actual piece of software meets
its requirements and is error-free. It involves running software or system
components, manually or automatically, in order to evaluate one or more of their
characteristics. The aim of software testing is to find faults, gaps or unfulfilled
requirements in comparison to the documented specifications.
TESTING PROCESS
The process of software testing aims not only at finding faults in the existing
software but also at finding measures to improve the software in terms of
efficiency, accuracy and usability. It mainly aims at measuring the specification,
functionality and performance of a software program or application.
SOME TERMINOLOGIES
The program is a combination of source code and object code. Every phase of the
software development life cycle requires the preparation of a few documentation
manuals, which are shown in Figure 1.7. These are very helpful for development
and maintenance activities.
Operating procedure manuals consist of instructions to set up, install, use and
maintain the software. The list of operating procedure manuals/documents is
given in Figure 1.8.
Validation:
“It is the process of evaluating a system or component during or at the end of the
development process to determine whether it satisfies the specified requirements.”
Testing includes both verification and validation.
Thus Testing = Verification + Validation
Manual testing can be further divided into three types of testing, which
are as follows :
• White box testing
• Black box testing
• Gray box testing
Automation testing :
• Automation testing is the process of converting manual test cases into test
scripts with the help of automation tools or a programming language. With the
help of automation testing, we can increase the speed of test execution because
no human effort is required during the run; we only need to write the test
scripts and execute them.
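For illustration only — the login function, credentials and expected messages below are invented, and Python's built-in unittest framework stands in for whatever automation tool a project actually uses — a manual test case such as "enter valid credentials and verify the welcome message" could be converted into a test script roughly like this:

    import unittest

    # Hypothetical unit under test; in a real project this would be imported
    # from the application code rather than defined inline.
    def login(username, password):
        if username == "admin" and password == "secret":
            return "Welcome, admin"
        return "Invalid credentials"

    class LoginTest(unittest.TestCase):
        def test_valid_credentials_show_welcome_message(self):
            # Step 1: enter valid credentials; Step 2: verify the welcome message.
            self.assertEqual(login("admin", "secret"), "Welcome, admin")

        def test_invalid_credentials_are_rejected(self):
            self.assertEqual(login("admin", "wrong"), "Invalid credentials")

    if __name__ == "__main__":
        unittest.main()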
Black-Box Testing and White-Box Testing
Black box testing (also called functional testing) is testing that ignores the
internal mechanism of a system or component and focuses solely on the outputs
generated in response to selected inputs and execution conditions. White box
testing (also called structural testing and glass box testing) is testing that takes
into account the internal mechanism of a system or component.
What is white-box testing?
White-box testing is the technique in which the tester is aware of the internal
workings of the product and has access to its source code; testing is conducted
by making sure that all internal operations are performed according to the
specifications.
The phrase "white box" is used because of the internal viewpoint of the system.
The terms "clear box", "white box" and "transparent box" refer to the ability to
see the software's inner workings through its exterior layer. Developers carry it
out before sending the program to the testing team, which then conducts black box
testing. The primary goal of white-box testing is to test the infrastructure of the
application, and it covers unit testing and integration testing, which are performed
at the lower levels. Given that it primarily focuses on the code structure, paths,
conditions and branches of a program or piece of software, it requires programming
skills. The main objectives of white-box testing are to focus on the flow of inputs
and outputs through the program and to strengthen its security.
It is also referred to as transparent testing, code-based testing, structural
testing and clear box testing. It is a good fit and is recommended for testing
algorithms.
Types of White Box Testing in Software Testing
White box testing is a type of software testing that examines the internal
structure and design of a program or application. The following are some
common types of white box testing :
• Unit testing : Tests individual units or components of the software to
ensure they function as intended.
• Integration testing : Tests the interactions between different units or
components of the software to ensure they work together correctly.
• Performance testing : Tests the performance of the software under
various loads and conditions to ensure it meets performance requirements.
• Security testing : Tests the software for vulnerabilities and weaknesses to
ensure it is secure.
• Code coverage testing : Measures the percentage of code that is executed
during testing to ensure that all parts of the code are tested (a brief sketch
of measuring this follows the list below).
• Regression testing : Tests the software after changes have been made to
ensure that the changes did not introduce new bugs or issues.
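As a rough sketch of how such a coverage percentage might be measured in practice — assuming a Python code base and the third-party coverage package, with the grade function and its single test invented for illustration — the report below would show that the "fail" branch was never executed:

    import coverage
    import unittest

    def grade(score):
        # Two branches: only the one exercised by the tests counts as covered.
        if score >= 50:
            return "pass"
        return "fail"

    class GradeTest(unittest.TestCase):
        def test_pass_branch(self):
            self.assertEqual(grade(75), "pass")

    if __name__ == "__main__":
        cov = coverage.Coverage()
        cov.start()
        unittest.main(exit=False)   # run the tests without stopping the script
        cov.stop()
        cov.save()
        cov.report()                # prints the percentage of lines executed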
Techniques of White Box Testing
The following techniques are used for white box testing :
• Statement coverage : This testing approach involves going over every
statement in the code to make sure that each one has been run at least once. As a
result, the code is checked line by line.
• Branch coverage : A testing approach in which test cases are created to
ensure that each branch is tested at least once. This method examines every
possible outcome of each decision point in the system.
• Path coverage : Path coverage is a software testing approach that defines and
covers all potential pathways. From system entrance to exit points, pathways are
statements that may be executed. It takes a lot of time.
• Loop testing : With the help of this technique, loops and the values in both
independent and dependent code are examined. Errors often occur at the start
and at the end of loops. This method covers the following kinds of loops :
• Concatenated loops
• Simple loops
• Nested loops
Basis path testing : Using this methodology, control flow graphs are created
from the code and the cyclomatic complexity (the number of decision points in
the source code plus one) is then calculated; it gives the number of independent
paths that must be covered by test cases.
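The difference between these coverage criteria can be seen on a small invented example; the discount function below is purely illustrative:

    def discount(price, is_member):
        # One decision point, so cyclomatic complexity = 1 + 1 = 2 and
        # basis path testing needs at least two test cases.
        total = price
        if is_member:
            total = price * 0.9
        return total

    # Statement coverage: this single call executes every statement,
    # including the body of the if, so statement coverage is already 100%.
    assert discount(100, True) == 90.0

    # Branch coverage additionally requires the "false" outcome of the if,
    # so a second test case is needed to reach 100% branch coverage.
    assert discount(100, False) == 100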
Black box test case design :
Every dot in the input domain represents a set of inputs and every dot in the output
domain represents a set of outputs. Every set of inputs will have a corresponding
set of outputs. The test cases are designed on the basis of the user requirements,
without considering the internal structure of the program. This black box
knowledge is sufficient to design a good number of test cases.
Boundary value analysis :
• Purpose : Tests the boundaries of input ranges, where errors are most likely
to occur.
• Approach : Test cases are designed at the edges of the valid and invalid ranges.
• Example :
Input Age : valid range (18–60)
Test values : 17 (just below), 18 (lower boundary), 60 (upper boundary), 61 (just above)
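A minimal sketch of these boundary-value cases in code, assuming a hypothetical validate_age function that accepts ages in the closed range 18–60:

    def validate_age(age):
        # Hypothetical unit under test: accepts ages in the closed range 18-60.
        return 18 <= age <= 60

    # Boundary value analysis: test just below, on, and just above each boundary.
    assert validate_age(17) is False   # just below the lower boundary
    assert validate_age(18) is True    # lower boundary
    assert validate_age(60) is True    # upper boundary
    assert validate_age(61) is False   # just above the upper boundary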
Software Testing Life Cycle (STLC)
The Software Testing Life Cycle (STLC) is a systematic process for testing software
to ensure its quality, reliability and performance. It consists of multiple phases
that guide the testing team from planning to closure.
The term "Software Testing Life Cycle" refers to a testing procedure with particular
phases that must be carried out in a certain order to guarantee that the quality
objectives have been reached. Each step of the STLC process is completed in a
planned and orderly manner. Goals and deliverables vary for each phase. The STLC
stages vary depending on the organization, but the fundamentals are the same.
1. Requirements phase
2. Planning phase
3. Analysis phase
4. Design phase
5. Implementation phase
6. Execution phase
7. Conclusion phase
8. Closure phase
1. Requirements phase :
Analyse and research the requirements during this phase of the STLC.
Participate in brainstorming discussions with the other teams to determine whether
the requirements are testable. The scope of the testing is determined at this step.
Inform the team during this phase if any feature cannot be tested so that a
mitigation approach (identifying potential risks or challenges at each phase and
implementing strategies to minimize or eliminate their impact on the software
testing process) can be prepared.
2. Planning phase :
Is test planning based only on the requirements? The answer is NO. While the
requirements certainly serve as a foundation, there are two additional highly
significant aspects that also affect test preparation.
3. Analysis phase
Test administration.
4. Design phase :
In this step, "HOW" to test is defined. The duties in this phase include
describing the test conditions and, to enhance coverage, dividing the test
conditions into many smaller sub-conditions.
5. Implementation phase :
The construction of thorough test cases is the main undertaking in this STLC
phase. Determine the test cases' order of importance and which test cases will be
included in the regression suite. It is crucial to do a review to confirm the accuracy
of the test cases prior to finalizing them. Don't forget to sign off on the test cases
before beginning the real execution as well.
If your project incorporates automation, choose the test cases that should be
automated and begin scripting them. Remember to review them!
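Purely as an illustration (the identifiers, titles and priorities are invented), the prioritised test cases and the regression subset chosen in this phase might be recorded as simple data before scripting begins:

    # Hypothetical test case catalogue: id, title, priority and whether the
    # case belongs to the regression suite.
    test_cases = [
        {"id": "TC-001", "title": "Login with valid credentials", "priority": 1, "regression": True},
        {"id": "TC-002", "title": "Login with expired password", "priority": 2, "regression": True},
        {"id": "TC-003", "title": "Tooltip text on help icon", "priority": 3, "regression": False},
    ]

    # Order of execution by importance, and the subset kept for regression runs.
    ordered = sorted(test_cases, key=lambda tc: tc["priority"])
    regression_suite = [tc["id"] for tc in test_cases if tc["regression"]]
    print(regression_suite)   # ['TC-001', 'TC-002']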
6. Execution phase :
As its name implies, this is the stage of the software testing life cycle when actual
execution occurs. However, make sure that your entry criteria are satisfied
before you begin the execution. Execute the test cases and, in the event of any
discrepancy, report the faults. Fill in the traceability matrix simultaneously to
monitor progress.
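As a simple sketch — the requirement and test case identifiers are invented — the traceability matrix filled in during execution can be thought of as a mapping from each requirement to its test cases and their latest results:

    # Hypothetical traceability matrix: requirement -> {test case: latest result}.
    traceability = {
        "REQ-01": {"TC-001": "pass", "TC-002": "fail"},
        "REQ-02": {"TC-003": "pass"},
    }

    # Progress monitoring: a requirement counts as covered only when all of
    # its mapped test cases have passed.
    for req, cases in traceability.items():
        status = "covered" if all(result == "pass" for result in cases.values()) else "open"
        print(req, status)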
7. Conclusion phase :
The exit criteria and reporting are the main topics of this STLC phase. You may
choose whether to send out a daily report or a weekly report, etc., depending on
your project and the preferences of your stakeholders.
The main thing to remember is that the content of the report varies and depends
on who you are sending your reports to. There are many sorts of reports (DSR -
Daily Status Report, WSR - Weekly Status Report) that you may send.
Include the technical aspects of the project in your report (number of test cases
succeeded, failed, defects reported, severity 1 problems, etc.) if your project
managers have a testing background since they will be more interested in the
technical side of the project.
However, if you are reporting to higher stakeholders, it's possible that they won't
be interested in the technical details; instead, focus on the risks that the testing
has helped to reduce.
8. Closure phase :
Verify that the testing has been completed and that all test scenarios have either
been executed or intentionally mitigated. Verify that no defects of severity 1
(severity 1 refers to a critical defect or bug identified during the testing phase)
remain open.
Hold meetings to discuss lessons learned and produce a document detailing them
(include what worked well, where changes are needed and what could be done
better).
The V-Model
The V-model is also known as the verification and validation model. It requires
that each stage of the SDLC be completed before moving on to the next, and it
follows the waterfall model's sequential design approach. Testing of the product
is planned in parallel with the corresponding stage of development.
Verification is a static analysis technique (review) carried out without actually
running any code; the product development process is evaluated to determine
whether the specified criteria are met. Validation comprises dynamic analysis
techniques (functional and non-functional), in which testing is done by running
the code. After the development phase, the software is evaluated in the validation
step to see whether it satisfies the needs and expectations of the client.
Therefore, the V-model features validation phases on one side and verification
phases on the other, with the coding phase joining the two in a V shape. As a
result, it is known as the V-model.
There are many stages in the V-model's verification phase :
Business requirement analysis :
This is the initial phase in which customer-side product needs are understood.
To fully comprehend the expectations and precise needs of the consumer, this
step involves comprehensive discussion.
System design :
System engineers utilise the user requirements document to analyse and
comprehend the business of the proposed system at this level.
Architecture design :
The first step in choosing an architecture is to have a solid understanding of
everything that will be included, such as the list of modules, a short description
of each module's operation, the linkages between the modules' interfaces, any
dependencies, database tables, architectural diagrams, technological details,
etc. The integration test design is also prepared in this phase.
Module design :
The system is divided into small, manageable modules during the module design
phase. Low-level design specifies the detailed internal design of each module.
Coding step : The coding step is started after designing. A suitable programming
language is decided upon based on the requirements, and coding follows defined
rules and standards. The final build is optimised for better performance before
it is checked into the repository, and the code undergoes several code reviews
to verify its quality.
There are many stages in the V-model's validation phase :
Unit testing :
Unit Test Plans (UTPs) are created in the V-model's module design phase. These
UTPs are executed to eliminate problems at the unit or code level. A unit is the
smallest entity that can exist on its own, such as a program module. Unit
testing ensures that even the smallest component can operate properly when
separated from the other units.
Integration testing :
Integration test plans are created during the architectural design phase. These
tests verify that independently developed modules can coexist and communicate
with one another.
System testing :
Plans for system tests are created during the system design phase. System
test plans, in contrast to unit and integration test plans, are created by the
client's business team. System testing ensures that the expectations placed on
the developed application are met.
Acceptance testing :
The examination of business requirements is connected to acceptance testing.
The software product is tested in a user environment. Acceptance tests
highlight any system compatibility issues that may exist within the user
environment. Additionally, it identifies non-functional issues like load and
performance flaws in the context of actual-user interaction.
When to use the V model ?
When the requirement is well defined and not ambiguous.
The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
The V-shaped model should be chosen when ample technical resources with the
essential technical expertise are available.
Advantages of the V-model :
1. Easy to understand.
2. Testing activities like planning and test design happen well before coding.
3. This saves a lot of time, hence a higher chance of success over the
waterfall model.
4. Avoids the downward flow of defects.
5. Works well for small projects where requirements are easily understood.
Disadvantages of the V-model :
1. Very rigid and least flexible.
2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early
prototypes of the software are produced.
4. If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.
Program Correctness and Verification
Program correctness :
A program is considered correct if, for every valid input, it terminates and
produces the output required by its specification; verification is the activity
of establishing this correspondence between the program and its specification.
Software Failure
Definition:
A software failure occurs when a software system does not perform its intended
function or produces incorrect results during execution, violating its specifications
or user expectations.
The system produces outputs or behaves in a way not aligned with its
requirements.
Errors :
An error is a human mistake made during development, such as a misunderstanding
of a requirement or a coding slip.
Faults (Defects) :
A fault (or defect) is the manifestation of an error in the software, for example
an incorrect statement or data definition; when a fault is executed, it may cause
a failure.
Principles of Testing
Testing shows the presence of defects :
Testing reduces the number of flaws in a program, but this does not imply
that the application is defect-free, since software may appear to be bug-free
simply because it has not been tested enough. If the end user runs into flaws
that were not discovered during testing, they surface only after deployment
on the production server.
Exhaustive testing is not possible :
It is not feasible to test every possible scenario or input. Instead, focus on
risk-based testing and prioritise critical functionalities. During real testing
it is often practically impossible to exercise all modules and all of their
features with every effective and ineffective combination of input data.
Early testing :
Early testing refers to the idea that all testing activities should begin
early, from the requirement analysis stage of the Software Development Life
Cycle (SDLC), in order to identify defects. If we find bugs at an early stage,
we can fix them right away, which is likely to cost far less than discovering
them in a later phase of the testing process.
Since the requirement specification documents are needed in order to conduct
testing, any requirements that are specified incorrectly can be corrected
immediately, rather than being fixed later during the development process.
Defect clustering :
Defect clustering means that a small number of modules typically contain most
of the defects discovered, so testing effort should be focused on these
defect-prone areas.
Pesticide paradox :
This principle states that if the same set of test cases is run again and
again over a period of time, those tests will no longer be able to discover
any new problems in the program or application.
To overcome this pesticide paradox, it is essential to review all the test
cases periodically. In addition, new and different tests must be created to
exercise other components of the application or program, which helps in the
discovery of additional flaws.
Testing is context-dependent :
The context-dependent principle emphasizes that software testing strategies,
tools and techniques must be adapted to the context in which the software
operates. In simpler terms, the way an e-commerce site is tested differs from
the way a safety-critical system is tested.
Program Inspections
• Definition: Program inspection is a formal review process where
developers, testers, and stakeholders examine software code or documents
to detect defects, deviations from standards, or areas for improvement.
• Goal: Identify bugs, ambiguities, and inconsistencies early in the software
development lifecycle (SDLC) to reduce costs and improve software quality.
Inspection process :
1. Planning :
Define the scope and objectives of the inspection.
Select participants (Author, Moderator, Reviewer, Recorder).
Schedule the inspection meeting.
Levels of Testing
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Unit Testing
What is unit testing ?
A software development approach known as unit testing involves
checking the functionality of the smallest testable components or units
of an application one by one. Unit tests are carried out by software
developers and sometimes by QA personnel. Unit testing's primary goal
is to separate code for testing to see whether it functions as intended.
A crucial phase in the development process is unit testing. If carried out
properly, unit tests may identify coding errors before they become more
difficult to spot during subsequent testing phases.
Unit testing is a part of Test-Driven Development (TDD), a methodical
strategy that carefully constructs a product through ongoing testing and
refinement. It is also the initial level of software testing, performed before
additional testing techniques such as integration testing are applied. To
make sure a unit doesn't depend on any external code or functionality,
unit tests are usually isolated. Teams should run unit tests often, whether
manually or, more frequently, automatically.
Process of Unit Testing:
• Analyze Requirements: Understand the functionality of the unit.
• Write Test Cases: Create test cases for each scenario (including edge cases).
• Prepare Test Data: Provide appropriate input data for the unit.
• Execute Test Cases: Run tests using a unit testing framework.
• Evaluate Results: Compare actual output with expected results.
• Log Defects: Report any detected defects.
• Refactor Code (if necessary): Fix identified bugs and retest.
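A minimal sketch of this process using Python's built-in unittest framework, with a trivial add function standing in for the unit under test:

    import unittest

    def add(a, b):
        # Stand-in for the smallest testable unit of the application.
        return a + b

    class AddTest(unittest.TestCase):
        def test_typical_values(self):
            self.assertEqual(add(2, 3), 5)       # compare actual with expected

        def test_edge_case_with_negatives(self):
            self.assertEqual(add(-1, -1), -2)    # edge case from the test design

    if __name__ == "__main__":
        unittest.main()   # executes the test cases and reports any failures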
System Testing
System testing exercises the complete, integrated software product. Following
this testing, the product will have practically all potential flaws or faults
fixed, allowing the development team to safely go on to acceptance testing.
Since this level involves testing the complete piece of software, the cost
will be considerable.