
UNIT - I

Introduction to Software Testing


Software Testing is a critical process in the software development life cycle that
involves evaluating a software application to ensure it functions as expected and is free
from defects.

It is the process of executing a program or system with the intent of identifying errors,
gaps, or missing functionality relative to the specified requirements.

Why Software Testing?


 Software may have bugs or issues that can cause failure.
 It ensures the product is reliable, secure, and performs well.
 Helps developers and companies deliver high-quality software.

Key Objectives of Software Testing


1. To verify that the software works according to the specified requirements.
2. To validate that the software meets the user’s needs and expectations.
3. To detect and fix defects early in the development process.
4. To improve the overall quality of the product.
5. To ensure the software is secure, performs efficiently under various conditions, and
contains as few defects as possible.

Types of Software Testing


 Manual Testing: Performed by human testers without automation tools.
 Automated Testing: Uses scripts and tools to execute tests automatically.
 Functional Testing: Checks the functionality of the application.
 Non-Functional Testing: Evaluates performance, usability, security, etc.

Purpose of Software Testing


Software testing serves several purposes, with a deep focus on quality assurance, defect
detection, and system understanding. The main purposes are given below:

1. To Find Defects
“Testing is the process of executing a program with the intent of finding errors.”
— Boris Beizer

 The primary purpose is to detect faults or bugs in the software.


 It ensures that defects are identified before the software reaches the end user.

2. To Ensure Quality
 Testing validates that the software meets specified quality standards.
 It checks attributes such as reliability, performance, maintainability, and
usability.

3. To Verify and Validate


 Verification: Are we building the product right?
 Validation: Are we building the right product?
 Testing ensures that the software behaves as intended and satisfies the user
requirements.

4. To Gain Confidence in Software


 Although testing cannot prove software is completely error-free, it increases
confidence in the software’s reliability.

5. To Prevent Defects
 Designing effective tests can prevent defects by encouraging better development
and design practices.

Beizer emphasized that test design itself helps reduce the chances of future bugs.

6. To Provide Decision-Making Information


 Testing generates valuable information for stakeholders.
 It helps in making informed decisions about release readiness, risk assessment,
and maintenance.

Purpose | Description
Find Defects | Identify bugs before deployment
Ensure Quality | Maintain high standards of performance and reliability
Verify & Validate | Ensure compliance with specs and real-world needs
Build Confidence | Instill trust in the software’s functionality
Prevent Defects | Improve design and development through thoughtful testing
Assist Decision-Making | Help management with release, patching, and risk-related choices

Productivity and Quality in Software


Boris Beizer discussed productivity and quality in software engineering with a strong
emphasis on the role of testing, defect prevention, and systematic development
processes.
1. Software Productivity (as per Beizer)
Software productivity is not merely about writing more lines of code quickly, but about
producing working, correct, and maintainable software efficiently.

Key Points:

 Productivity should focus on correct output, not just volume.


 High productivity includes reuse, design discipline, and effective testing.
 Measuring productivity must account for quality of code, not just quantity (e.g.,
LOC is insufficient alone).

“Productivity in software is not just about speed, it’s about building the right thing
correctly.”
— Boris Beizer

2. Software Quality (as per Beizer)


Beizer defined software quality as the degree to which software:

 Meets requirements,
 Is free of defects,
 And is reliable under various conditions.

He emphasized that quality cannot be "tested into" a product; it must be built in
through strong design and development practices.

Key Elements of Quality:

 Correctness: Does it do what it's supposed to?


 Reliability: Does it keep working under stress or unusual conditions?
 Efficiency: Does it make optimal use of resources?
 Maintainability: Can it be fixed or improved easily?

“More than the act of testing, the act of designing tests is one of the best bug
preventers known.”
— Boris Beizer

Relationship Between Productivity and Quality


 Quality affects productivity in the long run — poor quality leads to rework,
debugging, and delays, reducing effective productivity.
 Good testing and defect prevention early in the life cycle boost both quality
and productivity.
 He promoted structured testing techniques (e.g., white-box, black-box,
integration testing) as ways to improve quality without sacrificing productivity.
Summary Table

Aspect | Productivity | Quality
Definition | Efficient production of working software | Degree to which software meets specs and is defect-free
Beizer’s Focus | Productivity must be measured by correctness | Quality is built through prevention and good design
Key Strategy | Structured development and reuse | Rigorous testing and defect prevention
Relation | Poor quality → low productivity (due to rework) | Good quality → better long-term productivity

Difference Between Testing and Debugging


Testing and Debugging are two essential activities in software development, but they
serve different purposes and occur at different stages of the defect management
process.

1. Definition

Aspect | Testing | Debugging
What it is | The process of executing software to find defects or verify correctness. | The process of analyzing and fixing the cause of identified defects.
Main Goal | To detect errors in the software. | To locate and fix the cause of errors.

2. Purpose

Testing | Debugging
Checks whether the software behaves as expected. | Fixes the issues revealed during testing.
Identifies what is wrong. | Determines why it is wrong and corrects it.

3. Who Performs It
 Testing: Usually done by testers or QA engineers.
 Debugging: Done by developers/programmers.

4. Process Flow
1. Testing runs the software → detects failure or bug.
2. Debugging traces the source of the problem → modifies code → re-tests to
verify the fix.

5. Tools Used

Testing Tools | Debugging Tools
Selenium, JUnit, TestNG, LoadRunner | GDB, WinDbg, Chrome DevTools, IDE debuggers

6. Key Differences Summary

Feature | Testing | Debugging
Objective | Find bugs | Fix bugs
Focus | System behavior | Internal code and logic
Performed By | Testers/QA | Developers
Involves | Test cases, test data | Breakpoints, tracing, step execution
Outcome | Bug report or test result | Corrected code

Example
 A tester runs a login feature and gets an unexpected error — that’s testing.
 The developer then traces the code, finds a null pointer exception, and corrects it
— that’s debugging.
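A minimal Python sketch of this workflow; the login function, its None-handling bug, and the fix are illustrative assumptions, not the original example’s actual code:

    # Testing: execute the software and observe an unexpected failure.
    def login(user):
        # Bug: 'user' may be None, raising AttributeError
        # (Python's analogue of a null pointer exception).
        return user.lower() == "admin"

    try:
        login(None)                      # the tester exercises the feature
    except AttributeError as e:
        print("Test failed:", e)         # recording the failure is testing

    # Debugging: the developer traces the cause and corrects the code.
    def login_fixed(user):
        if user is None:                 # guard added after debugging
            return False
        return user.lower() == "admin"

    print(login_fixed(None))             # False; re-testing verifies the fix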

Model for Testing in Software Testing Methodologies

 The testing process is described by a model (shown as a figure in the original notes)

 It includes three models: a model of the environment, a model of the program, and a model of
the expected bugs

 Tests are formal procedures where

 Inputs must be prepared
 Outcomes should be predicted
 Tests should be documented
 Commands need to be executed
 Results are to be observed

 All of these activities are themselves subject to error


 A typical software system goes through the following three levels of testing:

1. Unit/Component Testing
2. Integration Testing
3. System Testing

Unit/Component Testing

 A unit is the smallest testable piece of software that can be compiled, assembled, linked,
loaded, etc.
 A unit is usually the work of one programmer and consists of several hundred or fewer lines of
code
 Unit Testing is the testing we do to show that the unit does not satisfy its functional
specification, or that its implementation structure does not match the intended design structure
 A component is an integrated aggregate of one or more units
 Component Testing is the testing we do to show that the component does not satisfy its
functional specification or that its implementation structure does not match the intended design
structure
 Unit testing aims at testing each of the components that systems are built upon. As long as
each of them works as it is defined to, the system as a whole has a better chance of working
correctly
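As a sketch, a unit test written with Python’s built-in unittest framework; the add function is a hypothetical unit chosen only for illustration:

    import unittest

    def add(a, b):
        """The unit under test: the smallest testable piece."""
        return a + b

    class TestAdd(unittest.TestCase):
        # Each test tries to show the unit does NOT meet its specification.
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()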

Integration testing

 Integration is the process by which components are aggregated to create larger components
 Integration testing is testing done to show that, even though the components were individually
satisfactory (after passing component testing), their combination may be incorrect or
inconsistent
 It verifies software quality by testing two or more dependent software modules as a group
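A sketch in the same spirit: two components that each passed unit testing are exercised together as a group (the checkout components and their names are illustrative assumptions):

    import unittest

    # Component 1: already passed its own unit tests.
    def fetch_price(item):
        prices = {"pen": 10, "book": 50}
        return prices[item]

    # Component 2: already passed its own unit tests.
    def apply_tax(amount, rate=0.1):
        return round(amount * (1 + rate), 2)

    class TestCheckoutIntegration(unittest.TestCase):
        # The integration test targets the *combination* of components.
        def test_price_with_tax(self):
            self.assertEqual(apply_tax(fetch_price("book")), 55.0)

    if __name__ == "__main__":
        unittest.main()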

System Testing

 A system is a big component


 System Testing is aimed at revealing bugs that cannot be attributed to components
 It includes testing for performance, security, accountability, configuration sensitivity, startup
and recovery
 System testing enables us to test, verify, and validate both the business requirements and the
application’s architecture

Definition of a Bug

A bug is a mismatch between the expected result and the actual result produced by a program due to
mistakes in design, coding, requirements, or other phases.

Types of Bugs in Software Testing

Here are the major types of bugs categorized based on their nature:

1. Functional Bugs

 Occur when the software does not behave according to the requirements.
 Example: Clicking a "Submit" button does nothing.

2. Performance Bugs

 Related to speed, response time, load handling, or resource usage.


 Example: A webpage takes 20 seconds to load instead of 2 seconds.

3. Syntax Bugs

 Errors in the source code syntax that prevent compilation or execution.


 Usually caught during compilation.
 Example: Missing semicolon, incorrect indentation in Python.

4. Logical Bugs

 Incorrect implementation of logic.


 Example: Using > instead of < in a decision-making statement.
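For instance, a hypothetical discount rule with the comparison reversed:

    def senior_discount(age):
        return age < 60              # logical bug: the rule intended age >= 60

    # The code compiles and runs, but gives the opposite answer:
    print(senior_discount(65))       # False, though a 65-year-old should qualify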

5. Calculation Bugs

 Mistakes in formulas, numeric logic, or algorithms.


 Example: Wrong tax amount due to incorrect rounding.

6. Usability Bugs

 Affect user interface or user experience (UI/UX).


 Example: Poorly placed buttons, inconsistent font sizes.

7. Security Bugs

 Introduce vulnerabilities or potential security breaches.


 Example: SQL injection vulnerability in a login form.
8. Compatibility Bugs

 Arise when software fails to function across different environments:


o Browsers
o Operating Systems
o Devices
 Example: App works on Chrome but crashes on Safari.

9. Integration Bugs

 Occur when modules or systems interact incorrectly.


 Example: Payment gateway fails to return a confirmation to the shopping app.

10. Regression Bugs

 Old features stop working after code changes or enhancements.


 Example: Login stops working after implementing a new feature.

11. Boundary-Related Bugs

 Errors near the edges of input ranges.


 Example: A field accepting only 1–10 allows 0 or 11.

12. Missing Command Bugs

 A required action or operation is completely left out.


 Example: No logout option in a secure application.

13. Crash Bugs

 Cause the system to crash or freeze.


 Often critical in severity.
 Example: App closes when clicking on a certain button.

Testing Styles (Types of Testing Based on Execution)

Testing styles define how the software is tested:

Black Box Testing

 Focus: Functionality of the software without knowing internal code.


 Testers provide inputs and check outputs.
 Used in System Testing, Acceptance Testing.
 Example: Testing login by entering valid/invalid credentials.
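A minimal black-box sketch in Python: the tester sees only inputs and expected outputs, never the implementation (the authenticate function and its credentials are hypothetical):

    def authenticate(username, password):
        # Internal logic is invisible to the black-box tester.
        return username == "alice" and password == "secret123"

    # Black-box test cases: supply inputs, check outputs.
    assert authenticate("alice", "secret123") is True    # valid credentials
    assert authenticate("alice", "wrong") is False       # invalid password
    assert authenticate("", "") is False                 # empty input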

White Box Testing (Clear Box / Structural Testing)

 Focus: Internal structure and logic of the code.


 Requires programming knowledge.
 Used in Unit Testing, Code Coverage.
 Example: Testing loops, branches, conditions.

Gray Box Testing

 Combines both Black Box and White Box.


 Tester has partial knowledge of internal structure.
 Common in Integration Testing.

Manual Testing

 Test cases are executed manually without automation tools.


 Suitable for Exploratory or Usability Testing.

Automation Testing

 Uses tools/scripts to execute test cases.


 Tools: Selenium, QTP, TestNG, JUnit.

Test Design Styles (Test Case Design Techniques)

Black Box Test Design Techniques:

1. Equivalence Partitioning

 Input data is divided into valid/invalid partitions.


 One value from each partition is tested.
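A sketch, assuming a hypothetical field that accepts ages 18 to 60: the input space splits into below-range, in-range, and above-range partitions, and one representative value from each is tested:

    def accepts_age(age):
        return 18 <= age <= 60    # hypothetical validation rule

    # One representative value per equivalence partition:
    assert accepts_age(10) is False    # invalid partition: below 18
    assert accepts_age(35) is True     # valid partition: 18-60
    assert accepts_age(70) is False    # invalid partition: above 60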

2. Boundary Value Analysis (BVA)

 Focuses on values at the boundaries (min, max, just below/above).


 Effective for numeric inputs.
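Continuing the same hypothetical 18-to-60 field, BVA exercises the minimum, the maximum, and the values just outside them:

    def accepts_age(age):         # same hypothetical rule as above
        return 18 <= age <= 60

    # min, max, and the values just below/above the boundaries:
    for age, expected in [(17, False), (18, True), (19, True),
                          (59, True), (60, True), (61, False)]:
        assert accepts_age(age) is expected, f"boundary failure at {age}"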

3. Decision Table Testing

 Uses logical conditions and actions.


 Helpful when testing business rules.
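A sketch built around a hypothetical loan-approval rule: each column of the decision table (a combination of conditions plus the resulting action) becomes one test case:

    def loan_decision(good_credit, has_income):
        # Business rule under test (hypothetical).
        if good_credit and has_income:
            return "approve"
        if good_credit or has_income:
            return "review"
        return "reject"

    # Decision table: every combination of conditions is covered.
    table = [
        (True,  True,  "approve"),
        (True,  False, "review"),
        (False, True,  "review"),
        (False, False, "reject"),
    ]
    for credit, income, action in table:
        assert loan_decision(credit, income) == action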

4. State Transition Testing

 Based on system states and events.


 Useful for workflow systems (e.g., ATM, login systems).
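A sketch of state transition testing for a simplified login workflow; the three-attempt lockout rule is an assumption made for illustration:

    class Login:
        def __init__(self):
            self.state = "logged_out"
            self.failures = 0

        def attempt(self, correct):
            if self.state == "locked":       # no escape once locked
                return self.state
            if correct:
                self.state = "logged_in"
            else:
                self.failures += 1
                if self.failures >= 3:       # assumed lockout rule
                    self.state = "locked"
            return self.state

    # Cover one valid transition and the path into the 'locked' state:
    m = Login()
    assert m.attempt(True) == "logged_in"

    m = Login()
    for _ in range(3):
        m.attempt(False)
    assert m.state == "locked"
    assert m.attempt(True) == "locked"       # event ignored in locked state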

5. Use Case Testing

 Tests real-world user scenarios based on use cases.

White Box Test Design Techniques:


1. Statement Coverage

 Every line of code is executed at least once.

2. Branch/Decision Coverage

 Every decision (if/else) is tested for true and false conditions.

3. Path Coverage

 All possible paths in code are tested.

4. Condition Coverage

 Tests all boolean expressions independently.
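A sketch contrasting these coverage levels on one small hypothetical function:

    def classify(x, y):
        if x > 0 and y > 0:
            return "both positive"
        else:
            return "not both"

    # Statement coverage: the first two calls together run every line.
    # Branch coverage: the if is exercised both true and false.
    # Condition coverage: each boolean (x > 0, y > 0) is evaluated
    # both ways independently, which needs the third call as well.
    assert classify(1, 1) == "both positive"    # x>0 T, y>0 T (true branch)
    assert classify(-1, 1) == "not both"        # x>0 F       (false branch)
    assert classify(1, -1) == "not both"        # x>0 T, y>0 F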

Experience-Based Testing Techniques

 Based on tester’s intuition, knowledge, and experience.

1. Exploratory Testing

 Simultaneous learning and testing.


 No formal test cases, used in early or late stages.

2. Error Guessing

 Tester predicts likely error-prone areas.

Summary Table:

Style | Type | Used In
Black Box | Input/output-based | System, Acceptance Testing
White Box | Code structure-based | Unit Testing
Gray Box | Partial internal knowledge | Integration Testing
Equivalence Class | Data partitioning | Functional Testing
Boundary Value | Edge case inputs | Functional Testing
State Transition | Workflow behavior | UI and embedded systems
Decision Table | Rule-based logic | Business apps, Finance
Exploratory | Experience-based | Agile Testing, Usability
