Unit 2

The document outlines key concepts in software testing, including definitions of errors, bugs, faults, and failures, as well as methodologies such as black box and white box testing. It describes the Software Development Life Cycle (SDLC) and various testing levels, including unit, integration, and system testing, along with specific types of performance testing like stress and usability testing. Additionally, it emphasizes the importance of test planning, verification vs validation, and the role of documentation in ensuring software quality.

Uploaded by

TanyaMathur
© All Rights Reserved

Software Testing

UNIT 2
OBJECTIVES
Uncover as many errors (or bugs) as possible in a given product.
Demonstrate that a given software product matches its requirement specifications.
Validate the quality of the software using minimum cost and effort.
Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports.
Error, Bug, Fault & Failure
Error: A human action that produces an incorrect result, which in turn produces a fault.
Bug: The presence of an error at the time of execution of the software.
Fault: The state of the software caused by an error.
Failure: A deviation of the software from its expected result. It is an event.
SDLC (Software Development Life Cycle)
 A standard model used worldwide to develop software.
 A framework that describes the activities performed at each stage of a software development project.
 Necessary to ensure the quality of the software.
 The logical steps taken to develop a software product.


Classical Waterfall Model
Feasibility Study
Requirements Analysis & Specification
Design
Coding & Unit Testing
Integration & System Testing
Maintenance

It is the oldest and most widely used model in the field of software development.
Testing Life Cycle
Project Initiation
System Study
Test Plan
Design Test Cases
Execute Test Cases (manual/automated)
Report Defects
Regression Test
Analysis
Summary Reports
Test Plan
A test plan is a systematic approach to testing a system, i.e. the software. The plan typically contains a detailed understanding of what the eventual testing workflow will be.

Test Case
It is a specific procedure of testing a particular requirement.
It will include:
Identification of specific requirement tested
Test case success/failure criteria
Specific steps to execute test
Test data
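The contents of a test case listed above can be sketched as a small executable example. This is a minimal sketch, assuming a hypothetical requirement (passwords must be 8-64 characters) and a hypothetical function `is_valid_password`; neither comes from the slides.

```python
import unittest

# Hypothetical function under test for assumed requirement REQ-AUTH-01:
# passwords must be between 8 and 64 characters long.
def is_valid_password(pw: str) -> bool:
    return 8 <= len(pw) <= 64

class TestPasswordRequirement(unittest.TestCase):
    """Test case for the assumed requirement REQ-AUTH-01 (identification)."""

    def test_accepts_minimum_length(self):
        # Test data: exactly 8 characters. Success criterion: returns True.
        self.assertTrue(is_valid_password("a" * 8))

    def test_rejects_too_short(self):
        # Test data: 7 characters. Success criterion: returns False.
        self.assertFalse(is_valid_password("short12"))

# Execute the test case and collect the result.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestPasswordRequirement)
)
```

Note how each test method bundles the test data, the execution steps, and the success/failure criterion, mirroring the checklist above.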
Verification vs Validation
• Validation: The process of evaluating software at the end of software development to ensure compliance with intended usage.
The software should do what the user really requires ("Are we building the right product?").

• Verification: The process of determining whether the products of a given phase of the software development process fulfill the requirements established during the previous phase.
The software should conform to its specification ("Are we building the product right?").
Testing Methodologies
Black box testing
 No knowledge of internal program design or code required.
 Tests are based on requirements and functionality.

White box testing
 Knowledge of the internal program design and code required.
 Tests are based on coverage of code statements, branches, paths, and conditions.
Black box testing
Diagram: inputs and events are applied to the system based on its requirements, and the outputs are observed.
White box testing
Diagram: tests are derived from the component's code; test data is fed to the component and its outputs are checked.
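A minimal sketch of the white box idea, assuming a small hypothetical component `classify` (not from the slides): tests are derived from the code itself, with one input chosen per branch so that every statement and branch is covered.

```python
def classify(n: int) -> str:
    # Hypothetical component under test with three branches.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# White box tests derived from the code: one input per branch,
# so all statements and branches are exercised.
branch_results = [classify(-1), classify(0), classify(5)]
```

A black box tester, by contrast, would choose the same kind of inputs purely from the requirement ("classify a number's sign") without seeing the branches.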
Testing Levels
•Unit testing

•Integration testing

•System testing
UNIT TESTING
Tests each module individually.
Follows white box testing (exercises the logic of the program).
Done by developers.
INTEGRATION TESTING
Once all the modules have been unit tested, integration testing is performed.
It is a systematic technique that produces tests to identify errors associated with interfacing.
Types:
Top-Down Integration testing
Bottom-Up Integration testing
Mixed Integration testing
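In top-down integration, the high-level module is tested first while a stub stands in for lower-level modules that are not yet integrated. This is a minimal sketch with hypothetical modules `order_total` and `price_lookup_stub` (assumed for illustration, not from the slides):

```python
# Stub standing in for a lower-level price-lookup module that is
# not yet integrated: it returns a canned value instead of doing real work.
def price_lookup_stub(item: str) -> float:
    return 10.0

# High-level module under integration test; its interface to the
# lower module is exercised via the stub.
def order_total(items, price_lookup):
    return sum(price_lookup(i) for i in items)

total = order_total(["pen", "book"], price_lookup_stub)
```

Bottom-up integration inverts this: the low-level modules are tested first, with "driver" code standing in for the callers.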
SYSTEM TESTING
 The system as a whole is tested to uncover requirement errors.
 Verifies that all system elements work properly and that overall system function and performance have been achieved.
Types:
Alpha Testing
Beta Testing
Acceptance Testing
Performance Testing
Performance Testing
Alpha Testing
It is carried out by the test team within the developing organization.

Beta Testing
It is performed by a selected group of friendly customers.

Acceptance Testing
It is performed by the customer to determine whether to accept or reject the delivery of the system.

Performance Testing
It is carried out to check whether the system meets the non-functional requirements identified in the SRS document.
Types of Performance Testing:
Stress Testing
Volume Testing
Configuration Testing
Compatibility Testing
Regression Testing
Recovery Testing
Maintenance Testing
Documentation Testing
Usability Testing
Stress Testing
• Definition of Stress Testing: Stress testing evaluates
system resilience and performance by simulating
extreme load conditions beyond normal operational
limits.
• Key Objectives: The primary goals include
identifying breaking points, verifying stability, and
assessing recovery capabilities under pressure.
• Common Tools: Popular stress testing tools
encompass Apache JMeter, LoadRunner, and
Gatling, each offering unique functionality for
simulations.
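The idea of pushing a component beyond normal load can be sketched without any external tool. This is a minimal sketch, assuming a hypothetical `handle_request` function as a stand-in for a real service call (the tools named above, such as JMeter, drive real network traffic instead):

```python
import concurrent.futures

# Hypothetical operation under stress: a stand-in for a real request handler.
def handle_request(n: int) -> int:
    return sum(range(n))

# Fire many concurrent "requests" to push the component beyond normal load.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, [1000] * 200))

# Check stability under load: every request must still return the correct value.
failures = [r for r in results if r != sum(range(1000))]
```

A real stress test would also ramp the load past the breaking point and observe how the system degrades and recovers, per the objectives above.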
Volume Testing
• Volume Testing Defined: Volume testing measures system performance when subjected to large data
sets, examining stability and reliability.
• System Behavior Analysis: It assesses how software
processes and retrieves extensive data, crucial for
optimal database management practices.
• Relevance in Database Systems: Effective volume
testing ensures databases handle significant
transactions, maintaining performance amidst high
user loads.
Compatibility Testing
• Compatibility Testing Overview: Compatibility testing
ensures software operates effectively across multiple
devices and platforms, enhancing user experience.
• Testing Techniques: Common techniques include
browser-based testing, mobile device emulators, and
cross-platform validation to ensure functionality.
• Challenges Encountered: Major challenges involve
diverse screen sizes, OS versions, and hardware
specifications impacting software performance
consistency.
Regression Testing
• Understanding Regression Testing: Regression
testing verifies that new code changes do not
disrupt existing software functionalities or features.
• Role in Software Maintenance: It ensures stability
by systematically checking previously working
functionalities after modifications in the codebase.
• Automated Tools Utilization: Tools like Selenium
and TestComplete automate regression tests,
increasing efficiency and reducing manual workload
significantly.
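The core mechanic of regression testing can be sketched as comparing current behavior against recorded known-good outputs. This is a minimal sketch with a hypothetical `slugify` function and baseline (assumed for illustration; tools like Selenium apply the same idea to whole user interfaces):

```python
# Hypothetical function whose behavior was known-good before a code change.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# Baseline of expected outputs recorded while the function was working.
baseline = {
    "Hello World": "hello-world",
    " Trimmed ": "trimmed",
}

# Regression check: re-run every recorded input and flag any deviation.
regressions = {k: slugify(k) for k in baseline if slugify(k) != baseline[k]}
```

An empty `regressions` dict means the change did not disrupt previously working functionality; any entry is a regression to investigate.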
Recovery Testing
• Recovery Testing Defined: Recovery
testing assesses a system’s capacity to
restore operations post-failure, ensuring
reliability under stress.
• Key Objectives: The main goals involve
evaluating data integrity, performance
restoration, and system resilience
following disruptions.
• Future Trends: Emerging trends focus
on automated recovery testing tools to
streamline processes and enhance
efficiency significantly.
Maintenance Testing
• Maintenance Testing Definition: Maintenance testing
evaluates software updates and patches, ensuring they
maintain existing functionality and performance levels.
• Purpose in Software Reliability: It assesses the impact
of changes on long-term software reliability, confirming
ongoing system integrity post-modification.
• Strategic Significance: By identifying potential issues
early, maintenance testing significantly mitigates risks
associated with software updates and vulnerabilities.
Documentation Testing
• Documentation Validation: Verifying user guides
ensures accuracy and clarity, vital for effective user
support and troubleshooting.
• User Support Impact: Clear documentation
minimizes user errors, enhances satisfaction, and
reduces reliance on technical support teams.
• Best Practices in Documentation Testing: Utilizing
checklists and peer reviews during testing phases
improves the quality of manuals and user guides.
Usability Testing
• Defining Usability Testing: Usability
testing evaluates user interaction,
focusing on ease of use and
satisfaction through direct feedback.
• Common Procedures: Typical
procedures include user observation,
A/B testing, surveys, and heuristic
evaluations for comprehensive
insights.
• Metrics for Assessment: Key metrics
encompass task success rate, time on
task, error frequency, and user
satisfaction ratings during evaluation.
Smoke Testing
• Smoke testing, also known as Build Verification
Testing or Build Acceptance Testing, is a type of
software testing performed early in the development
process to ensure that the most critical functions of a
software application are working correctly. It is used to
quickly identify and fix any major issues with the
software before more detailed testing is performed.
• Key Principles
• The primary goal of smoke testing is to determine
whether the build is stable enough to proceed with
further types of testing. It acts as a confirmation for the
quality assurance (QA) team to proceed with more
extensive testing. Smoke tests are a minimal set of
tests run on each build to verify that the important
features are working and there are no showstoppers in
the build.
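The "minimal set of checks per build" idea can be sketched as a small checklist. This is a minimal sketch with hypothetical placeholder checks `app_starts` and `login_works` (assumed names; a real smoke suite would launch the actual build):

```python
# Placeholder smoke checks for hypothetical critical functions of the build.
def app_starts() -> bool:
    return True  # stand-in for actually launching the application

def login_works() -> bool:
    return True  # stand-in for exercising a critical login path

# Minimal checklist run on each build; any failure is a showstopper.
smoke_checks = {"startup": app_starts, "login": login_works}
showstoppers = [name for name, check in smoke_checks.items() if not check()]
build_is_stable = not showstoppers
```

Only if `build_is_stable` is true does the QA team proceed to the more extensive testing described above.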
THANK YOU
