Module 4 Sepm Vansh

Release testing is a software testing phase conducted by a separate team to ensure the software meets functionality, performance, and reliability standards before delivery. It includes requirements-based testing, scenario testing, and performance testing to validate the system's readiness for use. The document also discusses development testing, including unit, component, and system testing, highlighting their processes and importance in identifying and fixing bugs during software development.


BY VANSH NEGGI (CSE DEPT.)

Module 4
RELEASE TESTING
Release testing is a type of software testing done before a system is delivered
to customers or users. The main purpose is to ensure that the software is ready
for use and meets the required standards of functionality, performance, and
reliability. Unlike system testing, which is done by the development team to
find bugs, release testing is conducted by a separate testing team to validate
the system before release.

Key Features of Release Testing


1. Done by a Separate Team
o The development team is not responsible for release testing.
o A separate testing team ensures that the software meets all
requirements before release.
2. Focuses on Validation, Not Just Finding Bugs
o Release testing ensures the system meets user requirements and
is fit for use.
o It is different from defect testing, which focuses on finding and
fixing bugs.
3. Uses Black-Box Testing Approach
o The testers do not check the internal code of the system.
o They only test whether the software works correctly based on its
inputs and outputs.
o This is also called functional testing, as it checks if the software
functions as expected.
There are three main ways to do release testing:

1. Requirements-Based Testing

 What is it?
This is about checking if the system does what it’s supposed to do, based
on the requirements (a list of things the system must do).
It is done in the following steps:
o First, make sure every requirement is testable (you can create a
test for it).
o Then, create tests for each requirement.
o Finally, run those tests and check if the system passes them.
 Example:
Let’s say you’re testing a medical system. One requirement is:
o If a patient is allergic to a medication, the system should show a
warning.
1. Test with a patient who has no allergies: Prescribe a medication
and check that no warning appears.
2. Test with a patient who has a known allergy: Prescribe the allergic
medication and check that a warning appears.
3. Test what happens if the doctor ignores the warning: Make sure
the system asks for a reason why the warning was ignored.
This way, you’re checking that the system meets the requirement in different
situations.
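The three requirement-based test cases above can be sketched as runnable code. This is a minimal illustration only: the `PrescriptionSystem` class and its methods are hypothetical stand-ins invented here, not the interface of any real medical system.

```python
# Sketch of requirements-based tests for the allergy-warning requirement.
# The PrescriptionSystem class and its API are hypothetical, for illustration.

class PrescriptionSystem:
    def __init__(self):
        self.allergies = {}          # patient -> set of allergens
        self.override_reasons = []   # reasons recorded when warnings are ignored

    def add_allergy(self, patient, drug):
        self.allergies.setdefault(patient, set()).add(drug)

    def prescribe(self, patient, drug):
        """Return 'warning' if the patient is allergic to the drug, else 'ok'."""
        if drug in self.allergies.get(patient, set()):
            return "warning"
        return "ok"

    def override_warning(self, reason):
        # Requirement: the system must record why a warning was ignored.
        if not reason:
            raise ValueError("a reason is required to override a warning")
        self.override_reasons.append(reason)

s = PrescriptionSystem()

# Test 1: patient with no allergies -> no warning
assert s.prescribe("alice", "penicillin") == "ok"

# Test 2: patient with a known allergy -> warning
s.add_allergy("bob", "penicillin")
assert s.prescribe("bob", "penicillin") == "warning"

# Test 3: ignoring the warning must record a reason
s.override_warning("patient consented after consultation")
assert s.override_reasons == ["patient consented after consultation"]
```

Each assertion corresponds to one of the three situations listed above, so a failing assertion points directly at the requirement that is not met.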

2. Scenario Testing
 What is it?
This is about testing the system using real-life stories (scenarios) of how
users might use it.
Scenarios should be realistic and real system users should be able to
relate to them.
 How is it done?
o Create realistic scenarios (stories) that describe how the system
might be used.
o Use these scenarios to test multiple features of the system at once.
 Example:
Let’s say you’re testing a healthcare system. A scenario might be:
o A nurse logs into the system, downloads patient records, visits a
patient, checks medication side effects, updates the record, and
uploads the changes.
What’s being tested here?
1. Authentication: Can the nurse log in?
2. Data Encryption: Are the patient records secure when
downloaded?
3. Data Retrieval: Can the nurse access the patient’s medication
history?
4. Integration: Does the system correctly show side effects from the
drug database?
5. Updates: Can the nurse update the patient’s record and upload it
back to the system?
By testing this scenario, you’re checking how well the system works in a real-
world situation.
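The nurse scenario above can be sketched as a single scenario test that exercises several features in one pass. The `HealthRecordSystem` stub and all of its methods are hypothetical stand-ins for a real system under test, invented here for illustration.

```python
# Minimal sketch of a scenario test for the nurse workflow.
# HealthRecordSystem and its methods are hypothetical stand-ins.

class HealthRecordSystem:
    def __init__(self):
        self.users = {"nurse1": "secret"}
        self.records = {"patient42": {"medication": "aspirin", "notes": ""}}
        self.logged_in = None

    def login(self, user, password):                 # authentication
        self.logged_in = user if self.users.get(user) == password else None
        return self.logged_in is not None

    def download_record(self, patient):              # data retrieval
        assert self.logged_in, "must be logged in"
        return dict(self.records[patient])           # a copy, as if transferred

    def side_effects(self, drug):                    # drug-database integration
        return {"aspirin": ["nausea"]}.get(drug, [])

    def upload_record(self, patient, record):        # update
        self.records[patient] = record

# One scenario, several features checked in sequence:
sys_ = HealthRecordSystem()
assert sys_.login("nurse1", "secret")                        # 1. authentication
rec = sys_.download_record("patient42")                      # 3. data retrieval
assert sys_.side_effects(rec["medication"]) == ["nausea"]    # 4. integration
rec["notes"] = "no adverse reaction observed"
sys_.upload_record("patient42", rec)                         # 5. update
assert sys_.records["patient42"]["notes"] == "no adverse reaction observed"
```

Note how one realistic story drives checks on authentication, retrieval, integration, and updates together, which is exactly what distinguishes scenario testing from testing features in isolation.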

3. Performance Testing
 What is it?
This is about checking if the system can handle the workload it’s
supposed to handle. For example, can it process 100 transactions per
second without slowing down or crashing?
 How is it done?
o Test the system under normal conditions (e.g., 100 transactions
per second).
o Gradually increase the load (e.g., 200, 300 transactions per
second) to see when the system starts to slow down or fail.
o Check if the system fails gracefully (e.g., doesn't crash or lose data) when it's overloaded.
 Example:
Let’s say you’re testing a banking system designed to handle 300
transactions per second.
o Start by testing with 200 transactions per second and check if the
system works fine.
o Then, increase the load to 300 transactions per second and see if
it still performs well.
o Finally, push it beyond its limit (e.g., 400 transactions per second)
to see how it behaves. Does it slow down? Does it crash? Does it
handle the overload gracefully?
This helps you understand the system’s limits and ensure it won’t fail in real-
world use.
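The step-load approach above can be sketched in a few lines. This is a rough illustration, not a real load-testing tool: `process_transaction` is a hypothetical placeholder for the banking system's handler, and the 200/300/400 load steps come from the example.

```python
# Rough sketch of a step-load performance test.
# process_transaction is a hypothetical stand-in for real work.
import time

def process_transaction(tx_id):
    return tx_id  # placeholder for the real transaction logic

def measure_throughput(load):
    """Attempt `load` transactions and return achieved transactions/second."""
    start = time.perf_counter()
    for i in range(load):
        process_transaction(i)
    elapsed = time.perf_counter() - start
    return load / elapsed if elapsed > 0 else float("inf")

# Step the offered load up past the design limit and record the behaviour.
results = {load: measure_throughput(load) for load in (200, 300, 400)}
for load, tps in results.items():
    print(f"offered load {load}: achieved {tps:.0f} tx/sec")
```

In a real performance test the loads would run concurrently over a sustained period; the point here is only the shape of the procedure: measure at the design load, then keep raising it to find where degradation begins.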

Why is Release Testing Important?


1. Ensures Quality: It makes sure the system works as expected and meets
user needs.
2. Builds Trust: Customers feel confident using the system because it’s
been thoroughly tested.
3. Finds Hidden Issues: It uncovers problems that might not show up
during development testing.

DEVELOPMENT TESTING
 It is testing done by the development team during software
development.
 It helps find and fix bugs before deployment.
 There are three levels of development testing:
1. Unit Testing – Testing individual functions or classes.
2. Component Testing – Testing multiple units combined into a component.
3. System Testing – Testing the complete system after integration.
1. Unit Testing
Unit testing is the process of testing individual parts of the software, such as
functions, methods, or classes, to ensure they work correctly. It is the first level
of testing and focuses on verifying the smallest units of code.

Process of Unit Testing


 Step 1: Write Test Cases
Test cases are written for each function or method. Each test case checks
if the function produces the correct output for a given input.
 Step 2: Execute Tests
The test cases are executed, and the actual output is compared with the
expected output.
 Step 3: Debug and Fix
If the output doesn’t match the expected result, the code is debugged
and fixed.

Example of Unit Testing


 Function: A method that calculates the area of a rectangle.
 Test Cases:
1. Input: length = 5, width = 4 → Expected Output: 20.
2. Input: length = 0, width = 10 → Expected Output: 0 (edge case).
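The two test cases above can be written directly as runnable assertions; the `rectangle_area` function is a minimal sketch of the method being described.

```python
# The rectangle-area example as runnable unit tests.
def rectangle_area(length, width):
    if length < 0 or width < 0:
        raise ValueError("dimensions must be non-negative")
    return length * width

# Test case 1: normal input
assert rectangle_area(5, 4) == 20
# Test case 2: edge case with zero length
assert rectangle_area(0, 10) == 0
```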

Key Points in Unit Testing


 Test All Operations: Every method or function in the class should be
tested.
 Check Attributes: Verify the values of all attributes (properties) of the
object.

 Simulate States: Test the object in all possible states. For example, a
weather station can be in "Running" or "Shutdown" states, and unit tests
should verify its behavior in each state.

Automated Unit Testing


Automated unit testing uses tools like JUnit to run tests automatically
without manual intervention.

Automated tests have three parts:


1. Setup: Prepare the test (e.g., set input values).
2. Call: Run the function or method being tested.
3. Assertion: Compare the result with the expected output.
 Example of Automated Testing:
o Test: Check if a function returns "Hello, World!" when given the
input "World".
o Automated Test:
1. Setup: Input = "World".
2. Call: Run the function.
3. Assertion: Check if output = "Hello, World!".
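The setup/call/assertion structure can be shown as plain runnable code (frameworks such as JUnit or Python's unittest automate exactly this pattern). The `greet` function is a hypothetical example target invented for illustration.

```python
# The three parts of an automated test, made explicit.
def greet(name):
    return f"Hello, {name}!"

# 1. Setup: prepare the input value
name = "World"
# 2. Call: run the function being tested
result = greet(name)
# 3. Assertion: compare the result with the expected output
assert result == "Hello, World!"
```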
Worked Example of Unit Testing
Let us look at a sample application for a better understanding of unit
testing:

Below are the application access details, which are given by the customer:
o URL→ login Page
o Username/password/OK → home page
o To reach the Amount transfer module, follow the path below:
Loans → sales → Amount transfer
While performing unit testing, we should follow some rules, which are as
follows:
o To start unit testing, we should have at least one module.
o Test for positive values
o Test for negative values
o No over testing
o No assumption required
When we feel that the maximum test coverage is achieved, we will stop the
testing.
Now, we will start performing the unit testing on the different components
such as
o From account number(FAN)
o To account number(TAN)
o Amount
o Transfer
o Cancel

For the FAN component



VALUES   Description
1234     Accept
4311     Error message → check whether the account is valid
blank    Error message → enter some values

For the TAN component


o Provide the values just like we did in From account number (FAN)
components
For Amount component
o Provide the values just like we did in FAN and TAN components.
For Transfer component
o Enter valid FAN value
o Enter valid TAN value
o Enter the correct value of Amount
o Click on the Transfer button → amount transferred successfully (confirmation
message)
For Cancel Component
o Enter the values of FAN, TAN, and amount.
o Click on the Cancel button → all data should be cleared.
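The FAN, TAN, Transfer, and Cancel checks above can be sketched as runnable tests. The `TransferForm` class and the "valid account" rule (here, membership in a known set of account numbers) are hypothetical, invented to mirror the checks listed.

```python
# Sketch of the amount-transfer unit tests. TransferForm and the
# VALID_ACCOUNTS rule are hypothetical stand-ins for the real module.

VALID_ACCOUNTS = {"1234", "5678"}

class TransferForm:
    def __init__(self):
        self.fan = self.tan = self.amount = ""

    def validate_account(self, value):
        if not value:
            return "enter some values"
        if value not in VALID_ACCOUNTS:
            return "account not valid"
        return "accept"

    def transfer(self):
        if (self.validate_account(self.fan) == "accept"
                and self.validate_account(self.tan) == "accept"
                and self.amount.isdigit() and int(self.amount) > 0):
            return "amount transferred successfully"
        return "transfer failed"

    def cancel(self):
        self.fan = self.tan = self.amount = ""  # all fields cleared

form = TransferForm()
assert form.validate_account("1234") == "accept"             # positive value
assert form.validate_account("4311") == "account not valid"  # negative value
assert form.validate_account("") == "enter some values"      # blank value

form.fan, form.tan, form.amount = "1234", "5678", "500"
assert form.transfer() == "amount transferred successfully"
form.cancel()
assert (form.fan, form.tan, form.amount) == ("", "", "")     # Cancel clears all
```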

Component Testing
Component testing is a crucial step in software development where we check if
individual parts of a system, called components, work correctly. These
components are often made up of smaller pieces, or objects, that interact with
each other. For example, in a weather station system, a component might
handle reconfiguration, and it could include objects that manage different tasks
like adjusting settings or updating sensors.
The individual parts of the component are already tested separately through
unit testing, so component testing ensures that they work properly when
combined.
Example: If components A, B, and C are merged into a larger system, testing is
done on their combined interface, not on each part separately, because some
errors only appear when they interact.

Types of Interfaces
There are different ways in which components can interact, leading to different
types of interfaces:
1. Parameter Interface – One component passes data (or function
references) to another. This is common in methods inside objects.
2. Shared Memory Interface – Multiple components access the same
memory area. One component writes data, and another reads it. This is
often used in embedded systems, where sensors generate data that
other components use.
3. Procedural Interface – A component provides procedures or functions
that other components can call when needed. Many reusable components
work this way.
4. Message Passing Interface – One component sends a message to
request a service from another component. The second component
responds with the required information. This is common in object-oriented
and client-server systems.

Common Interface Errors


1. Interface Misuse – A component incorrectly calls another component.
This often happens in parameter interfaces when:
o Wrong data types are used.
o Parameters are in the wrong order.
o The wrong number of parameters is passed.
2. Interface Misunderstanding – A component assumes incorrect behavior
about another component. For example, a binary search function is
called with an unordered list, which causes it to fail.
3. Timing Errors – Found in real-time systems that use shared memory or
message-passing. When one component produces data and another
reads data, they might operate at different speeds, causing outdated or
incorrect data to be used.
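The binary-search case in error 2 is easy to demonstrate in code. The `binary_search` helper below is a standard implementation written for illustration; its interface precondition is that the input list is sorted, and a caller that misunderstands this gets a silently wrong answer.

```python
# Illustration of an interface misunderstanding: binary search assumes a
# sorted list, so calling it with an unordered list can miss an element
# that is actually present.

def binary_search(items, target):
    """Return True if target is in items. Precondition: items is sorted."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return True
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

assert binary_search([1, 2, 5, 8, 9], 2) is True   # precondition met: found
assert binary_search([5, 2, 8, 1, 9], 2) is False  # precondition violated:
                                                   # 2 is present but missed
```

No exception is raised in the failing case, which is why interface misunderstandings are hard to detect without tests that deliberately probe the component's assumptions.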

Guidelines for Interface Testing


To effectively test interfaces, follow these guidelines:
1. Test Extreme Values
o List calls to external components.
o Test parameter values at extreme ranges to find inconsistencies.
2. Check for Null Pointers
o Test interfaces with null pointers when pointers are passed.
3. Force Failures
o Deliberately cause failures in procedural interfaces to uncover
misunderstandings.
4. Stress Test Message Systems
o Send more messages than usual to reveal timing issues.
5. Vary Activation Order
o Change the order of component activation in shared memory
systems to find hidden assumptions.
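Guidelines 1 and 2 can be sketched with a small example. The `clamp_percentage` function is a hypothetical external component invented for illustration; in Python, `None` plays the role of a null pointer.

```python
# Probing an interface with extreme values (guideline 1) and a null
# pointer (guideline 2). clamp_percentage is a hypothetical component.

def clamp_percentage(value):
    if value is None:
        raise TypeError("value must not be None")
    return max(0, min(100, value))

# Guideline 1: extreme parameter values
assert clamp_percentage(-10**9) == 0
assert clamp_percentage(10**9) == 100
assert clamp_percentage(0) == 0
assert clamp_percentage(100) == 100

# Guideline 2: a null (None) parameter should fail loudly, not silently
try:
    clamp_percentage(None)
    assert False, "expected a TypeError for None input"
except TypeError:
    pass
```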

System Testing
System testing is performed during development by integrating different
components to create a working version of the system. Once integrated, the
system is tested to check if all components work together correctly.
Purpose of System Testing
 Ensures that different parts of the system communicate properly and
transfer the correct data.
 It is similar to component testing but has key differences:
1. System testing includes pre-built components (such as off-the-
shelf software) along with newly developed ones.
2. Different teams or individuals may have built various components,
so system testing is a collaborative effort.

Example: Wilderness Weather Station System


 The weather station receives a request to send summarized weather
data to a remote computer.
A sequence diagram (Figure 3.5) helps visualize the process and design
test cases accordingly.

Steps in the Weather Data Request Process:


1. Request is sent → SatComms requests data from WeatherStation.
2. WeatherStation processes request → Calls Commslink to get a
summary.
3. Commslink retrieves data → Calls WeatherData to generate a
summarized report.
This sequence helps define test cases by identifying inputs and expected
outputs:
1. Sending a request for a report should generate an acknowledgment and
return a report.
2. The summarized data should match the expected output, ensuring that
the data organization is correct.
3. The WeatherStation must correctly summarize raw weather data, which
is also used to test the WeatherData component.
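The request sequence and its three test cases can be sketched as a small integration test. The classes below are simplified stand-ins for the SatComms, WeatherStation, Commslink, and WeatherData components; the real interfaces in the wilderness weather station design would differ.

```python
# Sketch of a system test for the weather-data request sequence.
# All four classes are simplified, hypothetical stand-ins.

class WeatherData:
    def __init__(self, readings):
        self.readings = readings

    def summarise(self):
        return {"max": max(self.readings), "min": min(self.readings)}

class Commslink:
    def __init__(self, data):
        self.data = data

    def get_summary(self):
        return self.data.summarise()

class WeatherStation:
    def __init__(self, link):
        self.link = link

    def report(self):
        return {"ack": True, "summary": self.link.get_summary()}

class SatComms:
    def request(self, station):
        return station.report()

# Integrate the components, then check the end-to-end behaviour:
station = WeatherStation(Commslink(WeatherData([12.5, 17.0, 9.5])))
response = SatComms().request(station)
assert response["ack"] is True                            # 1. acknowledgment
assert response["summary"] == {"max": 17.0, "min": 9.5}   # 2-3. correct summary
```

The assertions follow the test cases listed above: the request is acknowledged, a report is returned, and the summarised data matches what the raw readings imply.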

Challenges in System Testing


 It is difficult to decide how much testing is enough or when to stop
testing.
 Exhaustive testing (testing every possible scenario) is impossible, so only
a subset of test cases is used.
Key Areas to Test
1. All system functions accessed via menus must be tested.
2. Combinations of menu-based functions (e.g., text formatting tools)
should be tested together.
3. User inputs should be tested with both valid and invalid data to check
for proper handling.

Automated System Testing


 Automating system tests is more challenging than automating unit or
component tests.
 Unit tests are easier because their expected outputs can be predicted
and programmed.
 In system testing, the actual output is compared with the expected
results, but predicting all possible outcomes is difficult.
