Unit 4 - Software Engineering Notes
This process ensures that the application can handle all exceptional and
boundary cases, providing a robust and reliable user experience. By
systematically identifying and fixing issues, software testing helps deliver
high-quality software that performs as expected in various scenarios.
Black-Box Testing:
Definition:
Key Characteristics:
Testing Approach:
● Input and Output: The tester provides a set of inputs to the software
and observes the corresponding outputs, evaluating whether the
outputs match the expected results defined in the specifications.
● Requirement-Based Testing: The tests are designed based on
requirements, specifications, and use cases, ensuring that all functional
aspects of the software are covered.
● Error Discovery: It aims to identify discrepancies, errors, or bugs in the
software that could affect functionality, usability, or overall user
satisfaction.
Advantages:
Limitations:
● Limited Test Coverage: Black box testing may not cover all possible
paths or scenarios within the code, potentially missing certain defects.
● Difficulty in Designing Tests: Creating effective test cases requires a
deep understanding of the software’s requirements, which can be
challenging if the documentation is inadequate.
● Not Suitable for Complex Logic: For applications with intricate internal
logic or algorithms, black box testing might not be sufficient to identify
all issues.
Common Applications:
Conclusion:
Black box testing is a crucial method in software testing that evaluates the
software from a user's perspective, focusing on functionality rather than
internal code structure. It plays a vital role in ensuring software quality and
user satisfaction by validating that the application behaves as expected
under various scenarios.
Definition:
1. Identify Input Variables: Determine the input variables that have valid
ranges.
2. Define Boundaries: For each input variable, identify the boundaries,
which typically include:
○ Minimum valid value
○ Just below the minimum valid value
○ Just above the minimum valid value
○ Maximum valid value
○ Just below the maximum valid value
○ Just above the maximum valid value
3. Create Test Cases: Develop test cases based on these boundary
values. For each identified boundary, create input scenarios that test
both the boundary itself and values just outside it.
Example
Consider an input field that accepts integer values from 1 to 100. Boundary
value analysis yields the following test inputs:
○ Input: 0 (invalid)
○ Input: 1 (valid)
○ Input: 2 (valid)
○ Input: 99 (valid)
○ Input: 100 (valid)
○ Input: 101 (invalid)
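The boundary cases above can be sketched as a small test, assuming a hypothetical validator `is_valid_quantity` that accepts integers from 1 to 100:

```python
# Hypothetical validator for an input field that accepts 1..100.
def is_valid_quantity(value):
    return 1 <= value <= 100

# Boundary value analysis: test each boundary and its neighbours.
boundary_cases = {
    0: False,    # just below the minimum
    1: True,     # minimum valid value
    2: True,     # just above the minimum
    99: True,    # just below the maximum
    100: True,   # maximum valid value
    101: False,  # just above the maximum
}

for value, expected in boundary_cases.items():
    assert is_valid_quantity(value) == expected, f"failed for {value}"
```

Each assertion pairs a boundary value with the behaviour expected from the specification, so a defect at any edge of the range is caught immediately.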
Conclusion
Definition:
● Input Grouping: Inputs are grouped into valid and invalid equivalence
classes based on the expected behaviour of the system.
● Test Case Selection: Only one representative value from each
equivalence class is needed to test the functionality, minimising the
total number of test cases while still covering the input space
effectively.
Example:
Consider an input field that accepts values from 1 to 100. The inputs fall
into three equivalence classes:
● Class 1 (Valid): values from 1 to 100.
● Class 2 (Invalid): values less than 1.
● Class 3 (Invalid): values greater than 100.
Test Cases:
● For Class 1 (Valid): Choose 50 (or any other value between 1 and 100).
● For Class 2 (Invalid): Choose 0 (or any negative number).
● For Class 3 (Invalid): Choose 101 (or any number greater than 100).
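The three representatives can be exercised in a minimal sketch, again assuming a hypothetical validator that accepts integers from 1 to 100:

```python
def is_valid_quantity(value):
    # Hypothetical validator: accepts integers from 1 to 100.
    return 1 <= value <= 100

# One representative value per equivalence class.
representatives = [
    (50, True),    # Class 1: valid, within 1..100
    (0, False),    # Class 2: invalid, below the range
    (101, False),  # Class 3: invalid, above the range
]

for value, expected in representatives:
    assert is_valid_quantity(value) == expected, f"failed for {value}"
```

Three test cases cover the whole input space, which is the point of the technique: any other member of a class is expected to behave the same as its representative.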
Limitations:
● Not Comprehensive for Complex Cases: ECP may not cover all edge
cases, especially in situations where the software behaviour is not
consistent within the equivalence classes.
● Dependent on Accurate Class Definition: The effectiveness of this
technique relies on correctly defining the equivalence classes.
Misidentifying classes can lead to gaps in testing.
● Limited to Input Testing: Primarily useful for validating input values, it
may not be applicable for testing output or system states.
Conclusion:
Definition:
Key Principles:
Example
Consider a simple system that processes user orders based on two input
conditions:
Possible Effects:
Cause-Effect Graph:
● Create a graph with nodes for "Order Type" and "Payment Method" as
causes.
● Draw arrows to the outcomes, indicating which combinations of causes
lead to which effects.
For example:
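As an illustration, the cause-effect combinations can be written as a decision function; the concrete order types, payment methods, and effect names below are assumptions, since the notes name only the two causes:

```python
# Assumed causes and effects, for illustration only:
#   Cause 1: order_type is "standard" or "express"
#   Cause 2: payment is "prepaid" or "cash_on_delivery"
#   Effect: which processing queue the order is routed to.
def route_order(order_type, payment):
    if order_type == "express" and payment == "prepaid":
        return "priority_dispatch"        # both causes present
    if order_type == "express":
        return "express_pending_payment"  # cause 1 only
    if payment == "prepaid":
        return "standard_dispatch"        # cause 2 only
    return "standard_pending_payment"     # neither cause

# Each combination of causes maps to exactly one effect,
# mirroring the arrows of the cause-effect graph.
```

A test case is then derived for each cause combination, guaranteeing that every arrow in the graph is exercised at least once.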
Conclusion:
Definition:
Key Characteristics:
Objectives:
Testing Approach:
● Test Case Development: Test cases are derived from the code
structure, including various elements such as statements, branches,
and paths within the program.
● Code Coverage Analysis: White box testing often involves analysing
code coverage metrics to ensure that all parts of the code are
adequately tested, including edge cases.
Advantages:
Limitations:
Domain and Boundary Testing are techniques used in white box testing to
verify that the software behaves correctly within defined input ranges. While
domain testing focuses on valid and invalid input values, boundary testing
specifically targets the edge cases of these input ranges. Here’s a detailed
overview of each technique:
Domain Testing:
Definition:
1. Identify Input Domains: Determine the valid and invalid ranges for
each input variable based on the software requirements.
2. Categorise Inputs: Classify inputs into valid and invalid domains,
including both nominal (acceptable) and edge (boundary) cases.
3. Design Test Cases: Create test cases that cover various combinations
of valid and invalid inputs, ensuring that the software handles each
case correctly.
Example
Test Cases:
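A minimal sketch of domain-based test cases, assuming an order form with two inputs whose domains (quantity 1-100, discount 0-50 percent) are hypothetical:

```python
def accept_order(quantity, discount):
    # Hypothetical domains: quantity must be 1..100,
    # discount must be 0..50 (percent).
    return (1 <= quantity <= 100) and (0 <= discount <= 50)

# Combinations of valid and invalid domains for both inputs.
domain_cases = [
    ((50, 10), True),    # both valid (nominal values)
    ((0, 10), False),    # quantity invalid, discount valid
    ((50, 75), False),   # quantity valid, discount invalid
    ((0, 75), False),    # both invalid
]

for (quantity, discount), expected in domain_cases:
    assert accept_order(quantity, discount) == expected
```

Covering valid/valid, valid/invalid, invalid/valid, and invalid/invalid combinations checks that each input domain is enforced independently of the other.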
Boundary Testing:
Definition:
Example
● Boundaries:
○ Minimum valid value: 1
○ Maximum valid value: 100
Test Cases:
Limitations:
Conclusion:
Domain and Boundary Testing are vital techniques in white box testing
that focus on validating input ranges and edge conditions. By ensuring
that software handles all possible input scenarios correctly, these
techniques contribute to higher quality and more reliable software
applications.
(2) LOGIC BASED TESTING:
Definition
Key Principles
Objectives
Example
Limitations
Data Flow Testing is a white box testing technique that focuses on the flow
of data within a software application. It involves analysing how data is
defined, used, and manipulated throughout the code to ensure that it
behaves as expected. This technique is particularly effective in identifying
issues related to variable definitions, data initialization, and the proper use of
data in different scopes. Here’s a detailed overview of Data Flow Testing:
Definition:
Key Principles:
Example:
Consider the following pseudocode that calculates the total price based on
item quantity and price per item:
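A minimal sketch of this calculation, using the variable names and initial values given in the data flow analysis below (the function name `calculate_total` is an assumption):

```python
def calculate_total():
    quantity = 5                      # definition: initialised with 5
    pricePerItem = 20                 # definition: initialised with 20
    total = quantity * pricePerItem   # total defined; quantity and pricePerItem used
    return total                      # total used after its definition
```

Each variable follows a clean define-then-use path; data flow testing would flag, for example, a use of `total` before the assignment or a definition of `quantity` that is never used.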
Data Flow Analysis:
● Variables:
○ quantity: Used after being defined (initialized with 5).
○ pricePerItem: Used after being defined (initialized with 20).
○ total: Defined and used within the function.
Test Cases:
Limitations:
Conclusion:
Data Flow Testing is a critical white box testing technique that emphasizes
the examination of data lifecycles and usage within the code. By focusing on
how data is defined, manipulated, and accessed, this technique helps identify
data-related errors, improve code quality, and enhance the overall reliability
of the software application.
Basic Path Testing is a white box testing technique that focuses on ensuring
that all possible execution paths through a program are tested at least once.
It aims to provide a systematic approach to testing by defining a set of basic
paths that represent the minimum number of test cases needed to cover the
control flow of the software. Here’s a detailed overview of Basic Path Testing:
Definition:
● Basic Path Testing: A white box testing technique that identifies a set
of execution paths through the software and creates test cases to cover
these paths. The technique emphasizes the need for testing the
fundamental logic and control flow within the code to ensure that all
paths are exercised.
Key Principles:
● Control Flow Graph (CFG): Basic Path Testing relies on the creation of
a Control Flow Graph, which visually represents the flow of control
through the program. Nodes in the graph represent code statements or
blocks, while edges represent the control flow between these
statements.
● Path Identification: The technique identifies all possible paths through
the program by analyzing the decision points and control structures,
allowing for systematic test case generation.
Objectives:
1. Create a Control Flow Graph (CFG): Analyze the code and create a
CFG that illustrates the flow of control, highlighting decision points,
branches, and loops.
2. Identify Independent Paths: Determine the independent paths in the
CFG. An independent path is a path that introduces at least one new
edge that has not been traversed in any previous path.
3. Develop Test Cases: Create test cases based on the independent paths
identified. Each test case should exercise a different path through the
code.
4. Execute Tests: Run the test cases and verify that the software behaves
as expected for each path.
Example:
● The graph consists of nodes for the entry point, each decision point,
and the return statements, with edges representing the flow between
these nodes.
Independent Paths:
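The idea can be sketched with a small hypothetical function containing two decision points, matching the kind of graph described above:

```python
# Hypothetical function with two decision points, for illustration.
def classify(n):
    if n < 0:           # decision 1
        return "negative"
    if n == 0:          # decision 2
        return "zero"
    return "positive"

# Cyclomatic complexity V(G) = decisions + 1 = 3, so three
# independent paths, each exercised by one test case:
assert classify(-5) == "negative"  # path 1: decision 1 true
assert classify(0) == "zero"       # path 2: decision 1 false, decision 2 true
assert classify(7) == "positive"   # path 3: both decisions false
```

Each test case introduces at least one edge of the control flow graph not covered by the previous ones, which is exactly the independence criterion in step 2.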
Limitations:
Basic Path Testing is a valuable white box testing technique that emphasizes
the systematic examination of execution paths within a program. By ensuring
that all paths are tested at least once, this technique helps identify logical
errors, improve code quality, and enhance the overall reliability of the
software application.
(9) SOFTWARE TESTING STRATEGIES:
Characteristics of Software Testing Strategy:
A software testing strategy is a comprehensive plan that outlines the
approach and methods to be used for testing software throughout its
development lifecycle. It ensures that testing is aligned with the overall goals
of the software project and that it is carried out efficiently and effectively.
Here are the key characteristics of a software testing strategy:
1. Test Objectives
● Clear Goals: The strategy should define clear objectives for testing,
such as ensuring functionality, performance, security, and user
satisfaction.
● Alignment with Business Needs: Testing objectives should align with
the business goals and user requirements to deliver maximum value.
2. Scope of Testing
● Test Case Development: The strategy should specify how test cases
will be designed, including the use of requirements, specifications, and
user stories as the basis for test case creation.
● Test Data Management: Guidelines for managing test data, including
creation, storage, and usage, should be included to ensure relevant and
valid data is used for testing.
5. Automation Strategy
6. Test Environment
9. Risk Management
Conclusion
Definition:
1. Test Planning: Define the scope, objectives, and resources required for
integration testing. Establish a test environment and select the
integration approach.
2. Test Case Design: Develop test cases based on the integration
requirements and specifications, focusing on the interactions between
modules.
3. Test Execution: Execute the integration tests and document the results.
Monitor the interactions between components and verify that they
work together as intended.
4. Defect Reporting: Report any defects or issues found during testing.
Collaborate with developers to address and resolve the identified
problems.
5. Regression Testing: Perform regression tests to ensure that newly
integrated components do not adversely affect existing functionality.
Conclusion
Definition
1. Unit Testing:
○ Tests individual components or modules for correctness.
○ Typically performed by developers during the coding phase.
2. Integration Testing:
○ Tests the interaction between integrated modules to ensure they
work together properly.
○ Focuses on data flow and control between components.
3. System Testing:
○ Tests the complete and integrated software system to validate
its compliance with specified requirements.
○ Conducted in an environment that mimics the production
environment.
4. User Acceptance Testing (UAT):
○ Conducted by end users to validate the software against their
requirements and expectations.
○ Ensures that the software is ready for deployment.
5. Regression Testing:
○ Ensures that new code changes do not adversely affect existing
functionality.
○ Involves re-running previously completed tests to verify that
everything still works as intended.
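The first level above can be sketched as a unit test for a single hypothetical function (`apply_discount` and its rules are assumptions for illustration):

```python
# Hypothetical unit under test.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Unit tests: exercise the function in isolation,
# covering a normal case and an error case.
def test_typical_discount():
    assert apply_discount(200, 10) == 180

def test_invalid_percent_rejected():
    try:
        apply_discount(200, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_typical_discount()
test_invalid_percent_rejected()
```

Re-running these same tests after later code changes is precisely the regression testing described in point 5.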
Conclusion
Definition
1. Unit Testing:
○ Focuses on individual classes and their methods.
○ Verifies that each method behaves correctly and meets its
requirements.
2. Integration Testing:
○ Examines interactions between classes and objects.
○ Ensures that objects work together seamlessly, verifying
message passing and method calls.
3. System Testing:
○ Validates the complete and integrated software system.
○ Tests the entire application to ensure that all objects and their
interactions work as intended.
4. Functional Testing:
○ Tests the functionality of the application by validating object
behaviours against functional requirements.
○ Ensures that classes implement the specified behaviour correctly.
5. Regression Testing:
○ Conducted after code changes to ensure that existing object
functionality is not adversely affected.
○ Re-runs previously executed test cases to validate that the
system remains stable.
Object-Oriented Testing Strategies
1. Method Testing:
○ Focuses on testing individual methods in a class to ensure they
perform as intended.
○ Typically involves defining test cases for each method based on
its input and expected output.
2. Class Testing:
○ Involves testing the overall behaviour of a class, including its
attributes and methods.
○ Ensures that the class meets its functional requirements and
performs correctly under various scenarios.
3. Interaction Testing:
○ Examines the interactions between objects, validating that
messages are passed correctly and that the expected outcomes
are achieved.
○ Ensures that collaborations among objects produce the correct
results.
4. State-Based Testing:
○ Focuses on the different states of an object and how they affect
its behaviour.
○ Tests how methods behave under various state conditions and
transitions.
5. Inheritance Testing:
○ Validates the behaviour of derived classes and ensures that they
correctly inherit attributes and methods from parent classes.
○ Tests polymorphic behaviour to ensure that overridden methods
function correctly.
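State-based testing (point 4 above) can be sketched with a small class whose behaviour depends on its current state; the `Order` class and its states are assumptions for illustration:

```python
class Order:
    """Hypothetical order with states: new -> paid -> shipped."""
    def __init__(self):
        self.state = "new"

    def pay(self):
        if self.state != "new":
            raise RuntimeError("can only pay a new order")
        self.state = "paid"

    def ship(self):
        if self.state != "paid":
            raise RuntimeError("can only ship a paid order")
        self.state = "shipped"

# State-based tests: the same method call is valid or invalid
# depending on the object's current state.
order = Order()
order.pay()
order.ship()
assert order.state == "shipped"   # legal transition sequence

try:
    Order().ship()                # shipping before payment must fail
    assert False, "expected RuntimeError"
except RuntimeError:
    pass
```

The tests cover both a legal transition sequence and an illegal one, verifying that each method guards its state preconditions.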
Advantages
Limitations
Conclusion
Definition
1. Test Planning: Define the scope, objectives, and schedule for alpha
testing. Identify the features to be tested and the resources required.
2. Test Case Development: Create test cases based on the requirements
and specifications, covering various functionalities and scenarios.
3. Environment Setup: Prepare the testing environment to closely
resemble the production environment, ensuring that hardware,
software, and network configurations are appropriate.
4. Test Execution: Execute the test cases and document the results,
including any defects, bugs, or usability issues encountered during
testing.
5. Defect Reporting and Resolution: Report identified defects to the
development team for resolution. Collaborate to prioritise and address
the issues before moving to beta testing.
6. Final Evaluation: Assess the overall quality of the software based on
testing results and determine readiness for the beta testing phase.
Advantages
● Early Bug Detection: Identifying and fixing bugs during alpha testing
can save time and resources, reducing the cost of fixing issues later in
the development lifecycle.
● Improved Software Quality: By validating requirements and assessing
usability, alpha testing helps enhance the overall quality of the
software before it reaches external users.
● Controlled Feedback: Internal testing allows the development team to
gather feedback from trusted sources, leading to informed decisions
about further improvements.
Limitations
Conclusion
Beta Testing:
Beta Testing is the second phase of software testing, following alpha testing.
It involves releasing the software to a selected group of external users who
test the application in a real-world environment. The primary goal of beta
testing is to gather feedback on the software’s performance, identify bugs or
issues, and ensure that it meets user expectations before the final release.
Definition
Advantages
Limitations