
CCS366 SOFTWARE TESTING AND AUTOMATION

Unit – 5
Selenium
 Selenium is a popular open-source software testing framework used for
automating web applications.
 It is widely used for functional testing, regression testing, and
performance testing.
 Selenium supports multiple programming languages, including Java,
C#, Python, and Ruby, making it accessible to a wide range of
developers.
Selenium IDE
 Selenium, a powerful automation tool, simplifies the process of testing web applications by
automating browser interactions.
 It allows testers to write scripts in various languages, making it versatile for different
environments.
 Components like WebDriver and Grid enable more efficient testing, ensuring the application
functions smoothly across browsers.
 Selenium IDE (Integrated Development Environment) was originally implemented as a Firefox add-on/plugin; it is now available as an extension for other major web browsers as well.
 It provides record and playback functionality.
Advantages of Selenium IDE:
 It is an open-source tool.
 Provides a base for extensions.
 It provides multi-browser support.
 No programming language experience is required while using Selenium IDE.
 The user can set breakpoints and debug.
 It provides record and playback functions.
 Easy to Use and install.
 Test scripts created in Selenium IDE can be exported to languages such as Java, Python, and C#.
Disadvantages of Selenium IDE:
 It has limited support for iteration and conditional operations.
 Execution is slow.
 It does not have any API.
 It does not provide any mechanism for error handling.
 It is not well suited to testing dynamic web applications.
Selenium RC
 RC stands for Remote Control.
 It allows the programmers to code in different programming languages like C#, Java, Perl,
PHP, Python, Ruby, Scala, Groovy.
Advantages of Selenium RC
 It supports all web browsers.

 It can perform iteration and conditional operations.


 Execution is faster as compared to IDE.
 It has built-in test result generators.
 It supports data-driven testing.
 It has a matured and complete API.
 There is also support for cross browser testing.
 There is also support for user preferred languages.
Disadvantages of Selenium RC
 Programming language knowledge is needed.
 It does not support testing for iOS/Android.
 It is a little slower than Selenium WebDriver in terms of execution.
 It does not support record and playback functions.
 Configuration is more complicated than in Selenium IDE.
 The Selenium RC API is quite confusing.
Selenium WebDriver
 Selenium WebDriver automates and controls actions performed in the web browser.
 It does not rely on JavaScript for automation.
 It controls the browser directly by communicating with it.
Advantages of Selenium WebDriver
 It directly communicates with the web browser.
 Execution is faster.
 It supports listeners.
 It supports iOS/Android application testing (through Appium).
 Installation is simpler than Selenium RC.
 Purely object-oriented.
 No separate component (such as the RC server) is needed.
Disadvantages of Selenium WebDriver
 It requires programming knowledge.
 There is no built-in mechanism for generating a test result file.
 The installation process is more complicated than Selenium IDE.
 Support for new browsers is not automatic; a matching driver is required.
 It does not produce detailed test reports on its own.
Selenium Grid
Basically, it is a server that allows tests to use web browser instances running on remote machines. It provides the ability to run tests on remote web browsers, which helps divide the testing load across multiple machines and saves enormous time. It allows executing tests in parallel across different platforms and operating systems.
Selenium Grid is a network of a hub and nodes. Each node registers with the hub with a certain configuration, so the hub is aware of the browsers available on each node. When a request for a specific browser (with a desired-capabilities object) comes to the hub, and the hub finds a match for the requested web browser, it redirects the call to that particular grid node; a session is then established bi-directionally and execution starts. This makes it easy to use multiple machines to run tests in parallel.
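A minimal sketch of how a test requests a browser from the hub through RemoteWebDriver (the hub URL is an assumption; 4444 is the default Grid port):

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample {
    public static void main(String[] args) throws Exception {
        // Address of the Grid hub (assumed to run locally on the default port)
        URL hubUrl = new URL("http://localhost:4444/wd/hub");

        // Request a Chrome session; the hub routes it to a node offering Chrome
        ChromeOptions options = new ChromeOptions();
        WebDriver driver = new RemoteWebDriver(hubUrl, options);

        driver.get("https://example.com");
        System.out.println("Title: " + driver.getTitle());
        driver.quit(); // releases the node so other tests can use it
    }
}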
Features of Selenium
1. Cross-browser compatibility: Selenium supports testing on multiple browsers like Chrome,
Firefox, Safari, Edge, and Internet Explorer.
2. Language support: Selenium supports multiple programming languages like Java, Python, C#,
Ruby, and JavaScript, making it easy for developers to write automation scripts in their
preferred language.
3. Multiple testing frameworks: Selenium can integrate with multiple testing frameworks like
JUnit, TestNG, and NUnit.
4. Record and playback: Selenium provides the option to record and playback test scripts, which
makes it easy for testers to create test cases without having to write code.
5. Parallel execution: Selenium can execute test cases in parallel across multiple machines, which
reduces the overall execution time.
6. Element identification: Selenium can identify web elements using various locators like ID,
Name, XPath, CSS Selector, and Class Name.
7. Handling dynamic elements: Selenium can handle dynamic web elements like dropdowns, pop-
ups, and alerts.
8. Integration with third-party tools: Selenium can integrate with various third-party tools like
Jenkins, Docker, and Appium.
9. Support for mobile testing: Selenium can also be used for mobile testing using Appium.

Different Web Drivers in Selenium


 Web drivers are tools that allow Selenium to interact with web browsers.
 They act as a bridge between the Selenium code and the browser to automate tasks
like clicking, typing, and navigating.
1. ChromeDriver
2. GeckoDriver
3. EdgeDriver
4. SafariDriver
5. InternetExplorerDriver
6. OperaDriver
7. RemoteWebDriver
8. HtmlUnitDriver
1. ChromeDriver

● Purpose: Automates Google Chrome.


● Usage: Ideal for running tests on Chrome.
● Advantages:
o Supports the latest Chrome features.
o Frequently updated for compatibility with Chrome versions.
● Limitations:

o Only works with Chrome; needs updates for browser upgrades.
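As a minimal sketch, launching and driving Chrome looks like this (the driver path is a placeholder; recent Selenium versions can often locate the driver automatically):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ChromeExample {
    public static void main(String[] args) {
        // Point Selenium at the ChromeDriver binary (path is a placeholder)
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");

        WebDriver driver = new ChromeDriver(); // opens a new Chrome window
        driver.get("https://example.com");     // navigate to a page
        System.out.println(driver.getTitle()); // read the page title
        driver.quit();                          // close the browser session
    }
}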

2. GeckoDriver

● Purpose: Automates Mozilla Firefox.


● Usage: Required to run tests on Firefox.
● Advantages:

o Supports both standard and headless modes of Firefox.


o Compatible with modern Firefox versions.

● Limitations:

o Needs to be kept in sync with the Firefox version.

3. EdgeDriver

● Purpose: Automates Microsoft Edge.


● Usage: Used for testing on the Edge browser.
● Advantages:

o Fully supports Chromium-based Edge features.


o Allows automation in both Windows and macOS.

● Limitations:
o Works only with Edge and requires the correct version for the browser.

4. SafariDriver

● Purpose: Automates Apple Safari browser.


● Usage: Used for macOS and iOS testing.
● Advantages:

o Built into Safari; no separate download required.


o Supports mobile Safari testing on iOS devices.

● Limitations:

o Works only on Apple devices and Safari browser.

5. InternetExplorerDriver

● Purpose: Automates Internet Explorer (IE).


● Usage: For legacy systems using IE.
● Advantages:

o Compatible with older web applications.

● Limitations:

o Outdated; not supported by modern web standards.


o Slower compared to other drivers.

6. OperaDriver

● Purpose: Automates Opera browser.


● Usage: Used for running tests on Opera.
● Advantages:

o Supports Opera’s unique features.

● Limitations:

o Less popular, so not frequently updated.

7. RemoteWebDriver

● Purpose: Automates browsers remotely.


● Usage: Used for running tests on a remote machine using Selenium Grid.
● Advantages:

o Enables distributed testing across multiple machines.


o Works with various browsers and devices.

● Limitations:

o Requires setup of Selenium Grid or cloud services.

8. HtmlUnitDriver

● Purpose: Headless browser automation (no GUI).


● Usage: Used for faster execution without visual browser rendering.
● Advantages:

o Lightweight and quick.


o Suitable for backend testing or API validations.

● Limitations:

o Not ideal for UI-based testing since it has no visual representation.

WebDriver Event in Selenium


 WebDriver Event in Selenium refers to a feature that enables the tracking of events and actions
triggered during WebDriver operations. This is achieved using the EventFiringWebDriver class
in Selenium, which allows users to log, capture, or modify behavior during various WebDriver
operations like clicks, navigation, and element interaction.
Key Concepts:
1. EventFiringWebDriver:
o A wrapper around the WebDriver instance that listens to and logs WebDriver events.
o It triggers events before and after a specific WebDriver action.
2. WebDriverEventListener Interface:
o A pre-defined interface in Selenium that contains methods for handling different
WebDriver events.
o Users can implement this interface to define custom behavior for WebDriver events.
Common Methods in WebDriverEventListener
Here are some important methods provided by the interface:
1. beforeClickOn(WebElement element, WebDriver driver): Called before clicking on an element.
2. afterClickOn(WebElement element, WebDriver driver): Called after clicking on an element.
3. beforeNavigateTo(String url, WebDriver driver): Called before navigating to a specific URL.
4. afterNavigateTo(String url, WebDriver driver): Called after navigating to a specific URL.
5. onException(Throwable throwable, WebDriver driver): Called when an exception occurs
during WebDriver operation.
6. beforeChangeValueOf(WebElement element, WebDriver driver, CharSequence[]
keysToSend): Called before changing the value of a web element.
7. afterChangeValueOf(WebElement element, WebDriver driver, CharSequence[] keysToSend):
Called after changing the value of a web element.
Benefits of Using WebDriver Events
1. Debugging and Logging: Helps capture WebDriver actions for debugging or logging purposes.
2. Custom Behavior: Enables defining custom actions before or after specific WebDriver
operations.
3. Improved Test Maintenance: Facilitates tracking of application behavior and errors, improving
test case maintenance.
Example Code
Below is an example of how WebDriver Events are used:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;

public class WebDriverEventExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();

        // Create an EventFiringWebDriver instance that wraps the plain driver
        EventFiringWebDriver eventDriver = new EventFiringWebDriver(driver);

        // Register a listener to receive the events
        eventDriver.register(new WebDriverListener());

        // Perform actions using eventDriver; each triggers before/after callbacks
        eventDriver.get("https://example.com");
        WebElement button = eventDriver.findElement(By.id("submit"));
        button.click();

        // Quit the driver
        eventDriver.quit();
    }
}

// Implement event callbacks by extending AbstractWebDriverEventListener
class WebDriverListener extends AbstractWebDriverEventListener {
    @Override
    public void beforeClickOn(WebElement element, WebDriver driver) {
        System.out.println("Before clicking on: " + element.getText());
    }

    @Override
    public void afterClickOn(WebElement element, WebDriver driver) {
        System.out.println("After clicking on: " + element.getText());
    }

    @Override
    public void onException(Throwable throwable, WebDriver driver) {
        System.out.println("Exception occurred: " + throwable.getMessage());
    }
}

Practical Applications
1. Logging Events: Track all user actions during test execution for better test visibility.
2. Error Reporting: Capture detailed logs when an exception occurs.
3. Monitoring: Observe WebDriver interactions in real-time during execution.
By leveraging WebDriver events, testers can enhance debugging, optimize logging, and gain better
insights into the test execution flow.

UNIT 1
Differentiation Between White-Box Testing and Black-Box Testing

Aspect | White-Box Testing | Black-Box Testing
Definition | Testing based on internal code and logic. | Testing based on application functionality.
Knowledge Required | Requires programming and code knowledge. | No programming knowledge is required.
Focus | Internal structure and code execution paths. | External behavior and user interface.
Test Basis | Derived from source code and design. | Derived from requirements and specifications.
Techniques Used | Statement, branch, and path coverage. | Equivalence partitioning, boundary value analysis, decision tables.
Tester's Role | Tests how the code works internally. | Tests what the software does.
Tools Used | Code analyzers, unit testing frameworks (e.g., JUnit, NUnit). | Functional testing tools (e.g., Selenium, QTP).
Examples | Testing loops, conditions, and internal algorithms. | Testing login forms, input fields, and outputs.
Advantages | Detects logical errors and ensures code quality. | Ensures the software meets user expectations.
Disadvantages | Time-consuming, requires access to source code. | May not identify internal code issues.
Common Use Cases | Unit testing, integration testing. | System testing, acceptance testing.

White-Box Testing vs. Black-Box Testing


White-Box Testing
Definition:
White-box testing (also known as clear box, glass box, or structural testing) involves testing the internal structure, logic, and implementation of the software. The tester must have knowledge of the source code and ensure that all possible paths and conditions are tested.
Key Characteristics:
1. Based on the internal workings of the application.
2. Requires programming knowledge.
3. Focuses on code coverage, including:
o Path testing.
o Loop testing.
o Condition testing.
Techniques:
1. Statement Coverage: Ensures every statement in the code is executed at least once.
2. Branch Coverage: Ensures all possible branches from each decision point are tested.
3. Path Coverage: Ensures all independent paths in the code are tested.
Example:
Imagine a function to check if a number is even or odd:
def check_number(num):
    if num % 2 == 0:
        return "Even"
    else:
        return "Odd"
 Statement Coverage: Ensure both the if and else statements are executed.
 Branch Coverage: Test with inputs like num = 2 (Even) and num = 3 (Odd).
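As a hedged sketch, the same coverage goals can be expressed as JUnit tests against an equivalent Java method (checkNumber is a hypothetical port of the function above):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CheckNumberTest {

    // Hypothetical Java port of check_number, the unit under test
    static String checkNumber(int num) {
        if (num % 2 == 0) {
            return "Even"; // "if" branch
        } else {
            return "Odd";  // "else" branch
        }
    }

    @Test
    public void evenInputExercisesIfBranch() {
        assertEquals("Even", checkNumber(2));
    }

    @Test
    public void oddInputExercisesElseBranch() {
        assertEquals("Odd", checkNumber(3));
    }
}

Together the two tests execute every statement (statement coverage) and both branches of the decision (branch coverage).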
Advantages:
 Detects hidden errors and logical issues in the code.
 Improves code quality by ensuring full code coverage.
Disadvantages:
 Requires access to source code.
 Time-consuming for large applications.

Black-Box Testing
Definition:
Black-box testing focuses on the functionality of the application without considering its internal
implementation. Testers verify whether the application behaves as expected by providing inputs and
analyzing outputs.
Key Characteristics:
1. Based on software specifications and requirements.
2. No knowledge of the internal code is required.
3. Focuses on testing:
o User interfaces.
o Functionality.
o Performance.
Techniques:
1. Equivalence Partitioning: Divide input data into valid and invalid partitions.
2. Boundary Value Analysis: Test at the boundaries of input ranges.
3. Decision Table Testing: Use decision rules for complex logic.
Example:
Consider a login functionality where the requirements specify:
 A valid username is "admin".
 A valid password is "1234".
Test Cases:
1. Input: username = "admin", password = "1234" → Expected Output: "Login Successful".
2. Input: username = "user", password = "1234" → Expected Output: "Invalid Username".
3. Input: username = "admin", password = "abcd" → Expected Output: "Invalid Password".
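A minimal sketch of test case 1 automated with Selenium WebDriver (the URL and element IDs are assumptions for illustration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginBlackBoxTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login");                 // assumed login page URL
        driver.findElement(By.id("username")).sendKeys("admin");
        driver.findElement(By.id("password")).sendKeys("1234");
        driver.findElement(By.id("login")).click();

        // Black-box check: only the visible output is inspected, not the code
        boolean ok = driver.getPageSource().contains("Login Successful");
        System.out.println(ok ? "Pass" : "Fail");
        driver.quit();
    }
}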
Advantages:
 Does not require programming knowledge.
 Helps ensure the software meets user requirements.
Disadvantages:
 Limited in identifying internal code issues.
 Testing coverage may be incomplete for complex applications.

Stages of Testing: Unit Testing, Integration Testing, and System Testing


1. Unit Testing
Definition:
 Each "unit" refers to the smallest piece of code, such as a function, method, or class.
 Unit testing focuses on testing individual components or modules of the software in isolation to
ensure that they work as intended.
Key Features:
 Performed early in the development lifecycle.
 Typically conducted by developers.
 Requires access to the codebase and is usually automated.
Objectives:
 Validate that each module performs correctly.
 Identify defects early in development.
Example:
A function to calculate the sum of two numbers:
def add(a, b):
    return a + b
 Test Case 1: add(2, 3) → Expected Output: 5.
 Test Case 2: add(-1, 1) → Expected Output: 0.
Tools Used:
 Java: JUnit.
 Python: unittest, pytest.
 C#: NUnit.
Advantages:
 Detects defects at an early stage.
 Ensures each component is reliable before integration.
Challenges:
 Time-consuming if the application has numerous modules.
 Requires in-depth code knowledge.
2. Integration Testing
Definition:
 Integration testing evaluates how different modules or components interact with each other
when combined. It ensures the integrated system works correctly as a whole.
Key Features:
 Conducted after unit testing.
 Focuses on data flow between modules.
 Performed by both developers and testers.
Types of Integration Testing:
1. Big Bang Approach:
o All modules are integrated at once, and testing is conducted on the complete system.
o Advantage: Simple to implement.
o Disadvantage: Debugging becomes difficult if issues are found.
2. Incremental Approach:
o Modules are integrated and tested step by step.
o Top-Down:This approach begins by testing the higher-level modules (main modules)
first. If the lower-level modules (submodules) are not yet developed, they are replaced
with stubs.
o Bottom-Up: Testing starts from lower-level modules, using drivers for higher-level ones.
o Advantage: Easier to identify issues in smaller increments.
Example:
 Module 1: Login Page (accepts username and password).
 Module 2: Dashboard (displays user-specific information after login).
o Test: Ensure correct user data is displayed when valid login credentials are provided.
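To illustrate the top-down approach above, a stub can stand in for a lower-level module that is not yet developed. A minimal sketch with hypothetical names:

// Lower-level module's contract
interface PaymentService {
    boolean charge(String user, double amount);
}

// Stub replacing the undeveloped lower-level module
class PaymentServiceStub implements PaymentService {
    @Override
    public boolean charge(String user, double amount) {
        return true; // always succeeds so the higher-level module can be tested
    }
}

// Higher-level module under test
class CheckoutModule {
    private final PaymentService payments;

    CheckoutModule(PaymentService payments) {
        this.payments = payments;
    }

    String placeOrder(String user, double total) {
        return payments.charge(user, total) ? "Order placed" : "Payment failed";
    }
}

public class TopDownIntegrationSketch {
    public static void main(String[] args) {
        CheckoutModule checkout = new CheckoutModule(new PaymentServiceStub());
        System.out.println(checkout.placeOrder("admin", 60.0)); // prints "Order placed"
    }
}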
Tools Used:
 Selenium (for UI integration testing).
 Postman (for API integration testing).
Advantages:
 Verifies data flow between modules.
 Ensures the combined functionality of modules is correct.
Challenges:
 Dependency on the availability of all modules.
 More complex than unit testing.
3. System Testing
Definition:
System testing evaluates the complete, integrated system to ensure it meets the specified requirements. It
validates the entire software application, including hardware, software, and external systems.
Key Features:
 Conducted after integration testing.
 Performed by testers in an environment similar to production.
Objectives:
 Validate the end-to-end functionality of the application.
 Test against business and functional requirements.
Types of System Testing:
1. Functional Testing:
o Ensures that the software behaves as expected.
2. Non-Functional Testing:
o Includes performance, usability, security, and scalability testing.
Example:
For an e-commerce application:
 Validate the functionality: User can browse products, add them to the cart, and checkout.
 Performance Testing: Check how the system handles 1000 concurrent users.
 Security Testing: Ensure customer data is protected during transactions.
Tools Used:
 Functional: Selenium, QTP.
 Performance: JMeter, LoadRunner.
 Security: OWASP ZAP.
Advantages:
 Ensures the system meets business requirements.
 Validates the application in a real-world environment.
Challenges:
 Time-consuming as it involves testing the entire application.
 Requires a stable and integrated system.

Comparison of Testing Stages

Aspect | Unit Testing | Integration Testing | System Testing
Focus | Individual modules/components. | Interaction between modules. | Entire system as a whole.
Performed By | Developers. | Developers and testers. | Testers.
Objective | Validate isolated functionality. | Verify data flow and interaction. | Ensure end-to-end system behavior.
Tools | JUnit, pytest. | Postman, Selenium. | Selenium, JMeter, OWASP ZAP.
Testing Level | Low-level testing. | Mid-level testing. | High-level testing.

V-Model Testing:
 The V-Model, also known as the Verification and Validation Model, is a software development
model where each development phase is directly associated with a corresponding testing phase.
 It’s called the "V-Model" because the process looks like the letter "V."

Key Idea:
 Left side of the V: Represents development stages (planning and designing the software).
 Right side of the V: Represents testing stages, which validate the work done in the
corresponding development stages.
 Bottom of the V: Coding or implementation.
Stages of the V-Model
1. Requirement Analysis
 What happens?
The software’s functional and non-functional requirements are collected and documented.
 Testing Phase: Acceptance Testing
o Ensures the software meets the client’s requirements.
2. System Design
 What happens?
The overall system architecture and modules are planned.
 Testing Phase: System Testing
o Validates the entire software system against the design and requirements.
3. Architecture design (High-Level Design (HLD))
 What happens?
Break the system into smaller modules, describing their functionality and interaction.
 Testing Phase: Integration Testing
o Ensures the modules work together as planned.
4. Module design (Low-Level Design (LLD))
 What happens?
Detailed design of individual modules or components is created, focusing on how each one will
work.
 Testing Phase: Unit Testing
o Tests individual components to ensure they work as intended.
5. Coding/Implementation
 What happens?
The actual development of the software is done here.
 Testing Phase: Begins with Unit Testing and proceeds upward.
Advantages of the V-Model
1. Early Detection of Defects: Testing starts early, preventing issues from propagating.
2. Clear Phases: Each phase is well-defined with corresponding testing.
3. Validation at Every Step: Ensures both functionality (does it work?) and requirements (is it
what the user wants?) are met.
4. Simple and Easy to Use: The structure is clear, making it easier to manage.
Disadvantages of the V-Model
1. Rigid: Changes in requirements are difficult to accommodate once the process starts.
2. No Iterations: Unlike Agile, it doesn’t allow flexibility or iterative development.
3. Costly: Errors in early stages can be expensive to fix later.
When to Use the V-Model
 When requirements are well-defined and unlikely to change.
 For small- to medium-sized projects with clear objectives.
 For projects where quality is critical (e.g., healthcare, defense).
Simple Example
Imagine building a car:
1. Requirement Analysis: Decide what the car should do (e.g., speed, fuel efficiency).
o Acceptance Testing: Test if the car meets these requirements.
2. System Design: Plan the car’s overall design (engine, chassis, etc.).
o System Testing: Test if the whole car works as expected.
3. High-Level Design: Break it into systems (engine, brakes).
o Integration Testing: Check if the brakes and engine work together.
4. Low-Level Design: Plan individual parts (pistons, gears).
o Unit Testing: Test each piston or gear to ensure it functions.

UNIT 2
Explain about Test Phases, Test Strategy, Resource Requirements with example
1. Test Phases
Definition: Test phases are the stages of testing during a software development lifecycle, each focusing
on specific objectives and ensuring the application meets quality standards.
Phases with Examples:
1. Unit Testing
o What it is: Testing individual components or modules.
o Example: Testing a function that calculates the total price in a shopping cart.
 Input: totalPrice([10, 20, 30])
 Expected Output: 60.
2. Integration Testing
o What it is: Testing how different modules interact with each other.
o Example: Testing the interaction between a login module and a dashboard module.
 Scenario: After logging in, the correct dashboard should load.
3. System Testing
o What it is: Testing the entire system as a whole.
o Example: Testing an e-commerce site for overall functionality:
 Browsing products, adding them to the cart, and completing a purchase.
4. Acceptance Testing
o What it is: Testing to ensure the software meets user requirements.
o Example: Verifying that an app displays correct search results based on customer
queries.
2. Test Strategy
Definition: The test strategy outlines the approach and methods used to achieve testing goals. It ensures
all aspects of the software are tested systematically.
Key Elements with Examples:
1. Types of Testing
o Example:
 Use functional testing to verify login functionality.
 Use performance testing to check if the system can handle 1000 users
simultaneously.
2. Test Environment
o Example: Use a staging server that mimics the production environment for system
testing.
3. Automation and Manual Testing
o Example: Automate repetitive tasks like regression testing using Selenium, and perform
manual testing for exploratory testing.
4. Entry and Exit Criteria
o Entry Criteria: Testing begins only after the development team provides a stable build.
o Exit Criteria: Testing ends when all critical defects are fixed and 95% of test cases pass.
3. Resource Requirements
Definition: Planning the resources (human, hardware, software, and budget) required for successful
testing.
Examples:
1. Human Resources
o Example:
 A test manager oversees the process.
 A test engineer writes and executes test cases.
 An automation engineer develops scripts for automated testing.
2. Hardware Resources
o Example:
 Testing requires multiple devices for compatibility testing:
 Desktops, laptops, tablets, and smartphones.
3. Software Resources
o Example:
 Testing tools like Selenium for automation, Postman for API testing, and JIRA for
bug tracking.
4. Budget Planning
o Example:
 Allocate a budget for purchasing software licenses (e.g., TestRail for test
management).
Summary
 Test Phases ensure the software is tested step-by-step, from individual units to the complete
system.
 Test Strategy defines the methods and tools to achieve testing goals.
 Resource Requirements ensure the right people, tools, and infrastructure are available to carry
out testing effectively.
By aligning these, testing becomes structured and ensures the delivery of high-quality software.

Explain about Test Schedule, Test Cases, Bug Reporting, Metrics, and Statistics
1. Test Schedule
The Test Schedule is a detailed plan that defines the timeline and sequence of testing activities to
ensure the completion of testing within the project deadline.
It includes milestones, deadlines, and the allocation of resources for each phase of testing.
Key Components:
 Start and End Dates: Specifies when each phase (unit, integration, system, etc.) will begin and
end.
 Milestones: Sets checkpoints for significant achievements, such as test case creation or the
completion of system testing.
 Resource Allocation: Assigns testers to specific tasks or modules.
 Dependencies: Identifies tasks that depend on the completion of others (e.g., integration testing
depends on the completion of unit testing).
Example:

Phase | Start Date | End Date | Responsible Tester
Unit Testing | Jan 1 | Jan 5 | Tester A
Integration Testing | Jan 6 | Jan 10 | Tester B
System Testing | Jan 11 | Jan 15 | Tester C

2. Test Cases
A Test Case is a set of actions, inputs, and expected outcomes used to verify that a software application
behaves as intended.
Key Components:
 Test Case ID: Unique identifier for the test case.
 Description: Brief explanation of what is being tested.
 Preconditions: Requirements that must be met before executing the test case.
 Test Steps: Detailed steps to perform the test.
 Expected Result: What should happen if the test passes.
 Actual Result: The actual behavior observed during testing.
 Status: Pass/Fail.
Example:

Test Case ID | Description | Steps | Expected Result | Status
TC001 | Test login functionality | 1. Enter username 2. Enter password 3. Click Login | User is redirected to dashboard | Pass

Purpose:
 Ensures all functionalities are tested.
 Provides a clear record of what was tested.
3. Bug Reporting
Bug Reporting is the process of identifying, documenting, and communicating defects found during testing.
Key Components:
 Bug ID: A unique identifier for the bug.
 Description: Detailed explanation of the defect.
 Severity and Priority:
o Severity: Impact on the system (e.g., critical, high, medium, low).

o Priority: How urgently it needs to be fixed.

 Steps to Reproduce: Detailed steps to replicate the issue.


 Expected vs. Actual Result: The difference between what was expected and what happened.
 Attachments: Screenshots or logs to provide evidence.

Example:

Field | Details
Bug ID | BUG001
Description | Payment gateway crashes when submitting form
Severity | Critical
Priority | High
Steps to Reproduce | 1. Fill in form 2. Click Submit
Expected Result | Payment is processed
Actual Result | Application crashes

Purpose:
 Tracks defects efficiently.
 Facilitates communication between testers and developers.

4. Metrics and Statistics


Metrics and Statistics help measure the effectiveness and efficiency of the testing process. These are
numerical values or data points that provide insights into the quality of the software and the testing
process.
Key Metrics:
 Test Coverage: Percentage of requirements or code exercised by test cases.
 Defect Density: Number of defects found per module or per thousand lines of code.
 Test Execution Rate: Number of test cases executed in a given period.
 Defect Detection Percentage: Number of defects found during testing divided by total defects.
Purpose:
 Tracks progress and identifies bottlenecks.
 Assesses the quality of the software and effectiveness of testing.

Summary
1. Test Schedule: A timeline for completing all testing phases with clear milestones and
dependencies.
2. Test Cases: Detailed scenarios that verify the software’s functionality.
3. Bug Reporting: Structured documentation and tracking of defects for resolution.
4. Metrics and Statistics: Quantitative data to evaluate the quality of software and testing
processes.

Answer: Generating Effective Test Cases for the Checkout Functionality of an E-commerce Website
Introduction
The checkout functionality in an e-commerce website is critical for ensuring seamless purchases by
users. It includes verifying payment gateways, calculating order totals, and ensuring order
confirmation. Effective test cases help identify and fix bugs, ensuring a smooth user experience.
Steps for Generating Test Cases
1. Requirement Analysis
 Purpose: Understand the functionality of the checkout process and define the scope of testing.
 Example: Review requirements for features such as applying discount codes, handling multiple
payment methods, and verifying shipping details.
2. Identify Test Scenarios
 Break down the checkout process into smaller, testable scenarios:
1. Validate cart contents before proceeding to checkout.
2. Verify user login or guest checkout options.
3. Test the application of discount codes and vouchers.
4. Ensure tax and shipping charges are calculated correctly.
5. Verify payment methods (credit card, debit card, UPI, etc.).
6. Validate successful order placement and confirmation email generation.
3. Create Detailed Test Cases
 Write detailed test cases for each identified scenario. Each test case should include:
o Test Case ID: A unique identifier.
o Description: Explains what is being tested.
o Preconditions: Sets up the environment or initial state.
o Steps to Execute: Lists steps for performing the test.
o Expected Result: Defines the outcome if the test passes.
Example Test Case for Checkout Functionality

Field | Example Value
Test Case ID | TC001
Description | Verify that the user can successfully complete a purchase with a credit card.
Precondition | User is logged in and has items in the cart.
Steps to Execute | 1. Click on the cart icon. 2. Click 'Checkout.' 3. Enter shipping address. 4. Select 'Credit Card' as payment method. 5. Enter valid card details. 6. Click 'Place Order.'
Expected Result | Order is successfully placed, and confirmation email is sent to the user.
Actual Result | (Filled after test execution)
Status | Pass/Fail

4. Define Positive and Negative Test Cases


Positive Test Cases
 Example: Ensure the system processes payments when valid card details are entered.
Negative Test Cases
 Example: Test the system’s response when an invalid credit card number is provided or if the
payment gateway is unavailable.
5. Test Data Preparation
 Prepare realistic test data for the checkout process:
o User credentials: Test logged-in and guest users.
o Product data: Items with different prices, quantities, and discounts.
o Payment data: Valid and invalid payment details.
6. Execute Test Cases
 Perform the tests based on the created test cases:
o Validate the accuracy of calculations (subtotal, tax, total).
o Test various payment methods and edge cases (e.g., insufficient funds, expired cards).
7. Log and Fix Defects
 Document any bugs encountered during testing.
 Example: If the order total is miscalculated due to a missing tax component, log the bug and re-
test after the fix.

Key Considerations in Checkout Testing


 Performance Testing: Test the response time of the checkout page under high traffic.
 Security Testing: Validate secure data transmission during payment (e.g., encryption of card
details).
 Compatibility Testing: Ensure the checkout process works across devices (desktop, mobile) and
browsers.

UNIT 4
Advanced Testing Concepts
1. Performance Testing
Performance testing evaluates the software's behavior under specific conditions to ensure speed,
scalability, and reliability.
 Load Testing: Determines the system's ability to handle expected user loads.
Example: Testing an e-commerce platform during a flash sale to ensure it can handle 10,000
users.
 Stress Testing: Examines the system under extreme loads to identify its breaking point.
Example: Simulating 50,000 users on a system designed for 20,000 users.
 Volume Testing: Checks how the system handles large data volumes.
Example: Uploading millions of records to assess database performance.
 Failover Testing: Ensures that backup systems function correctly in case of failure.
Example: Testing whether a secondary server activates seamlessly during a primary server
crash.
 Recovery Testing: Evaluates the system's ability to recover after a failure.
Example: Checking if a banking app resumes correctly after a network outage.

2. Configuration Testing
This tests the software on various hardware and software configurations to ensure compatibility.
Example: Testing a web application on different operating systems (Windows, macOS, Linux) and
browsers (Chrome, Firefox, Safari).

3. Compatibility Testing
Compatibility testing ensures the software works across different devices, networks, and environments.
Example: Testing an application’s compatibility on iOS and Android devices, or its responsiveness on
different screen sizes.

4. Usability Testing
This focuses on evaluating the user-friendliness of the application to enhance the user experience.
Goals:
 Simplify navigation.
 Ensure intuitive design.
Example: Testing a food delivery app to ensure customers can easily order food in three steps.

5. Testing the Documentation


Ensures the accuracy and completeness of software-related documents, such as user manuals and
installation guides.
Example: Checking an installation guide for a new software package to ensure it includes all necessary
steps and is easy to follow.
6. Security Testing
Security testing identifies vulnerabilities and protects the system from unauthorized access or attacks.
Key Areas:
 Authentication and authorization mechanisms.
 Data encryption and privacy.
Example: Testing for SQL injection vulnerabilities in a web application.

7. Testing in the Agile Environment


In Agile, testing is a continuous process integrated into every sprint to ensure high-quality output.
Features:
 Frequent collaboration between developers and testers.
 Automated testing for faster feedback loops.
Example: Performing regression testing in every sprint during Scrum development cycles.

8. Testing Web and Mobile Applications


 Web Application Testing: Focuses on functionality, performance, and security across various browsers
and networks.
Example: Testing a shopping website’s cart functionality on different browsers like Chrome and Firefox.
 Mobile Application Testing: Verifies compatibility, performance, and usability on mobile devices.
Example: Testing an app's behavior on 3G, 4G, and 5G networks.

Performance Testing
 Performance testing is a type of software testing that evaluates the speed, responsiveness, and
stability of a system under a specific workload.
 It ensures that the software performs well under expected and stress conditions.
 Performance testing is critical for applications where user experience and reliability are
paramount, such as e-commerce websites, banking applications, or mobile apps.
Objectives of Performance Testing
1. Identify Performance Bottlenecks
To pinpoint areas where the system underperforms or fails to meet expected standards.
2. Ensure Scalability
To verify whether the system can handle increasing workloads as the user base grows.
3. Validate Reliability
To confirm that the system can perform consistently under normal and stress conditions.
4. Optimize System Performance
To fine-tune the system's components (e.g., database queries, server configurations) for
maximum efficiency.
5. Ensure User Satisfaction
To ensure a smooth user experience by meeting response time and performance expectations.
Types of Performance Testing
1. Load Testing
o Definition: Measures system performance under expected user loads.
o Goal: To ensure the system can handle anticipated traffic and transactions.
o Example: Testing an e-commerce platform to verify if it can handle 10,000 simultaneous
users during a sale.
2. Stress Testing
o Definition: Tests the system under extreme workloads to determine its breaking point.
o Goal: To identify system limits and how it recovers after failure.
o Example: Simulating 50,000 users on a system designed for 20,000 users to observe its
failure behavior.
3. Volume Testing
o Definition: Examines how the system handles large volumes of data.
o Goal: To check database performance and data handling capacity.
o Example: Uploading millions of records into a database to ensure its stability.
4. Scalability Testing
o Definition: Determines the system's ability to scale up or down as per workload
requirements.
o Goal: To check how additional resources (CPU, memory) impact system performance.
o Example: Testing a server by gradually increasing the number of users.
5. Failover Testing
o Definition: Verifies the system's ability to maintain functionality during a failure.
o Goal: To ensure reliability and recovery.
o Example: Testing whether a backup server activates when the primary server crashes.
Key Metrics in Performance Testing
1. Response Time
Time taken by the system to respond to a user request.
Example: Measuring the time it takes for a search query to return results.
2. Throughput
The number of transactions processed by the system in a given time.
Example: Testing how many orders an e-commerce system processes per minute.
3. CPU and Memory Usage
Resources consumed by the system under various workloads.
Example: Observing memory usage during bulk file uploads.
4. Error Rate
Percentage of failed transactions or errors during testing.
Example: Monitoring login failures during peak traffic.
5. Concurrency
Number of simultaneous users the system can support without performance degradation.
Example: Checking the number of active users an online game server can handle.
Steps in Performance Testing
1. Requirement Analysis
o Define performance goals and benchmarks.
Example: Setting a maximum response time of 2 seconds for an e-commerce checkout
process.
2. Test Environment Setup
o Create a testing environment that mirrors the production system.
Example: Setting up servers, databases, and network configurations similar to live
conditions.
3. Test Script Design
o Develop scripts to simulate user behavior.
Example: Writing a script to mimic users browsing, adding items to a cart, and checking
out.
4. Execution
o Run performance tests under different conditions (normal, peak, stress).
Example: Testing an application with 5,000, 10,000, and 15,000 users.
5. Monitoring
o Observe system performance metrics during testing.
Example: Using tools like JMeter or LoadRunner to monitor response times and error
rates.
6. Analysis and Reporting
o Analyze test results to identify bottlenecks and propose improvements.
Example: Reporting that a slow database query is causing high response times.
Tools for Performance Testing
1. JMeter
Open-source tool for load and stress testing.
Example: Simulating 1,000 users accessing a website.
2. LoadRunner
Comprehensive tool for performance and load testing.
Example: Testing a banking application under high transaction loads.
3. Gatling
A tool for load testing and analyzing performance in real-time.
Example: Testing API endpoints for an application.
4. AppDynamics/New Relic
Tools for application performance monitoring.
Example: Identifying server response issues in production environments.

Discuss the key principles and practices of testing within an Agile framework, providing examples to illustrate your points. How does Agile methodology influence software testing processes and techniques?
Key Principles and Practices of Testing within an Agile Framework
 In Agile development, testing plays a critical role in ensuring continuous delivery of high-quality
software.
 Testing within Agile adheres to several principles and practices that align with its iterative and
collaborative approach.
Key Principles
1. Continuous Testing
Testing is not a one-time phase but an ongoing activity throughout the development cycle.
Example: Running automated regression tests after every sprint to ensure new features do not
break existing functionality.
2. Early and Frequent Testing
Testing begins as soon as development starts, allowing defects to be identified and resolved
early.
Example: Performing unit tests immediately after a module is coded.
3. Collaboration and Communication
Testers work closely with developers, product owners, and other stakeholders.
Example: Testers and developers discuss user stories in sprint planning meetings to clarify
acceptance criteria.
4. Test-Driven Development (TDD)
Writing test cases before coding ensures that every piece of functionality is thoroughly validated.
Example: A test case for validating user login is created before implementing the login feature.
5. Automation Focus
Agile emphasizes automation to keep up with the rapid pace of development.
Example: Using tools like Selenium or JUnit to automate functional and unit tests.
6. Incremental Testing
Each iteration includes testing of newly added features while ensuring the integration with
previous functionality.
Example: Testing new shopping cart functionality in an e-commerce site, along with existing
checkout features.
7. Customer-Centric Testing
Focuses on testing features from the end-user’s perspective to ensure usability and satisfaction.
Example: Performing usability testing for a mobile app to ensure a smooth user experience.

Key Practices
1. Sprint-Based Testing
Testing is aligned with sprints, where test cases are developed and executed for user stories
delivered in each sprint.
Example: Testing a search functionality developed in Sprint 2 of a project.
2. Behavior-Driven Development (BDD)
In BDD, test scenarios are written in plain language, bridging the gap between technical and
non-technical team members.
Example: Writing test scenarios like:
o "Given a user is on the login page, when valid credentials are entered, the user is
redirected to the dashboard."
3. Exploratory Testing
Testers explore the application to identify unexpected issues not covered by automated or
scripted tests.
Example: Testing an e-commerce site's payment gateway by entering invalid card details.
4. Acceptance Test-Driven Development (ATDD)
Stakeholders collaborate to define acceptance criteria, which are then converted into test cases.
Example: Defining a test scenario that validates a successful purchase on an e-commerce
website.
5. Automated Regression Testing
Automated tests are run frequently to ensure that existing features work after new code changes.
Example: Running a suite of tests on a continuous integration (CI) pipeline like Jenkins.
How Agile Methodology Influences Testing Processes and Techniques
1. Frequent Feedback Loops
Agile fosters continuous feedback from testers, developers, and stakeholders, allowing quick
adjustments.
Example: Testing a prototype in early sprints and incorporating feedback into subsequent
iterations.
2. Shorter Development Cycles
Agile’s iterative nature requires testers to work faster and more efficiently, often leveraging
automation tools.
Example: Using Selenium to automate repetitive UI tests.
3. Focus on Collaboration
Testers are integral to Agile teams, participating in all stages of development, from planning to
deployment.
Example: Testers contributing to user story refinement and helping clarify acceptance criteria.
4. Increased Use of Automation
The need for speed in Agile projects means most repetitive and regression testing is automated.
Example: Automating end-to-end testing of a web application.
5. Shift-Left Testing
Testing moves earlier in the development lifecycle, with testers involved right from the
requirements phase.
Example: Reviewing user stories and writing test cases before coding starts.
6. Adaptability and Continuous Improvement
Agile testing processes are flexible, allowing adjustments based on retrospectives and sprint
reviews.
Example: Adapting a testing strategy after identifying gaps in the previous sprint.

UNIT 3
Test Objective Identification
 Definition: Identifying the specific goals that testing aims to achieve.
 Purpose: Ensures testing activities align with business and technical requirements.
 Example: The objective for an e-commerce website might be to verify that the checkout process
works without errors under high traffic.

Test Design Factors


 Definition: Factors influencing the design of test cases, including functionality, performance,
usability, security, and compatibility.
 Example: Designing tests for both desktop and mobile versions of a website to ensure cross-
platform compatibility.

Requirement Identification
 Definition: Analyzing and documenting requirements to ensure the test coverage aligns with
user needs and business objectives.
 Example: Identifying requirements like "The system must support payments through credit cards
and digital wallets."

Testable Requirements
 Definition: Breaking down requirements into clear, measurable, and testable statements.
 Example: "The login page should authenticate users within 2 seconds after entering valid
credentials."

Modeling a Test Design Process


 Definition: Creating a structured framework for designing, executing, and validating test cases.
 Example: Using UML diagrams to model workflows and interactions within an application.

Modeling Test Results


 Definition: Representing test results in a structured format to analyze system performance and
defects.
 Example: Using charts or tables to visualize response times under load conditions.

Boundary Value Testing


 Definition: Testing the values at the boundaries of input ranges.
 Example: For an age input field accepting values from 18 to 60, testing values like 17, 18, 60,
and 61.
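As a hedged sketch, these boundary checks can be written as a JUnit test (isValidAge is a hypothetical validator for the age field):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AgeBoundaryTest {

    // Hypothetical validator for an age field accepting 18 to 60
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    @Test
    public void valuesAtAndAroundTheBoundaries() {
        assertFalse(isValidAge(17)); // just below the lower boundary
        assertTrue(isValidAge(18));  // lower boundary
        assertTrue(isValidAge(60));  // upper boundary
        assertFalse(isValidAge(61)); // just above the upper boundary
    }
}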

Equivalence Class Testing


 Definition: Dividing input data into equivalence classes where test cases from each class are
expected to produce similar results.
 Example: For a password field, valid inputs (8-12 characters) form one class, and invalid inputs
(less than 8 or more than 12 characters) form another.
Path Testing
 Definition: Testing all possible execution paths within a program to ensure complete code
coverage.
 Example: Testing a calculator app's "add," "subtract," "multiply," and "divide" functions
separately and in combination.

Data Flow Testing


 Definition: Testing the flow of data within a program, focusing on how variables are defined,
used, and accessed.
 Example: Verifying that a shopping cart total updates correctly when items are added or
removed.

Test Design Preparedness Metrics


 Definition: Metrics used to measure readiness for test design activities.
 Example: Checking if 100% of requirements are mapped to test cases before execution.

Test Case Design Effectiveness


 Definition: Evaluating the quality of test cases in detecting defects.
 Example: Calculating defect detection percentage (number of defects found during testing/total
defects).

Model-Driven Test Design


 Definition: Using models like state diagrams or use-case diagrams to drive the test design
process.
 Example: Creating test cases based on a workflow model for an order management system.

Test Procedures
 Definition: Step-by-step instructions for executing test cases.
 Example: "Open the login page, enter valid credentials, click 'Login,' and verify redirection to
the dashboard."
Test Case Organization and Tracking
 Definition: Managing and monitoring test cases to ensure proper execution and maintenance.
 Example: Using tools like JIRA or TestRail to organize test cases and track their execution
status.

Bug Reporting
 Definition: Documenting issues identified during testing for resolution by the development team.
 Example: Reporting that "The checkout button does not work in the mobile view."

Bug Life Cycle


 Definition: The process a defect goes through, from detection to closure.
 Steps:
1. New: The bug is logged.
2. Assigned: Assigned to a developer.
3. In Progress: Being fixed.
4. Fixed: Developer marks it as fixed.
5. Retested: Tester verifies the fix.
6. Closed: Bug is resolved, or Reopened: If not fixed properly.
 Example: A login issue is logged, assigned to a developer, fixed, tested, and closed.

Previous Year Question Papers: 2-Mark Questions with Answers


FROM NOV/DEC 2023:
1. Why is early testing important in the software development process?
o Early testing helps in identifying defects at the initial stages of development, reducing the
cost and effort required to fix them later. It ensures that issues are addressed before they
escalate, leading to better-quality software.

2. State any four differences between verification and validation.


o Verification ensures the product is built correctly, while validation checks if the right
product is built.
o Verification involves static testing techniques, whereas validation involves dynamic
testing techniques.
o Verification is performed during development phases, but validation occurs after
development.
o Verification answers the question, "Are we building the product right?" Validation
answers, "Are we building the right product?"

3. What is the primary goal of a test plan in software testing?


o The primary goal of a test plan is to define the testing strategy, scope, objectives,
resources, and schedule for the testing process, ensuring that all aspects of the
application are tested thoroughly.

4. Recall the significance of bug reporting in the test planning phase of software development.
o Bug reporting helps track and document defects during the test planning phase, ensuring
developers are aware of issues that need to be resolved. It improves collaboration and
enhances the quality of the final product.

5. What are the design factors to consider in test design for software testing?
o Functionality, performance, usability, security, compatibility, scalability, and reliability
are critical factors to consider in test design. These ensure comprehensive test coverage
and alignment with user requirements.

6. Define boundary value testing.


o Boundary value testing is a technique that tests the boundaries of input ranges. For
example, if an age field accepts values between 18 and 60, the boundary values to test
would be 18, 19, 59, and 60.

7. What is compatibility testing?


o Compatibility testing ensures that an application works as intended across different
hardware, software, operating systems, browsers, and devices.

8. Give any two examples for security testing.


o Testing login mechanisms for vulnerabilities like SQL injection.
o Verifying secure data transmission using encryption protocols like HTTPS.
9. Name two popular web driver implementations.
o Selenium WebDriver.
o Appium WebDriver.

10. Cite the purpose of the testing.xml file in software testing.


 The testing.xml file (conventionally named testng.xml) is used in TestNG to configure and organize test execution. It defines test suites, test cases, and parameters for executing tests in a systematic manner.
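As a minimal sketch (suite, test, and class names are hypothetical), such a configuration file looks like this:

<!-- testng.xml: defines a suite containing one test with one class -->
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="RegressionSuite">
  <test name="LoginTests">
    <parameter name="browser" value="chrome"/>
    <classes>
      <class name="tests.LoginTest"/>
    </classes>
  </test>
</suite>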

FROM APR/MAY 2024:


1. Outline the need for software testing.
 Software testing ensures the product meets user requirements, is defect-free, and functions
correctly. It reduces risks, improves quality, ensures reliability, and builds user confidence.

2. Differentiate error, faults, and failures.


 Error: Mistakes made by developers during coding or design.
 Fault: A defect in the software that may lead to incorrect behavior.
 Failure: When the software does not perform as expected during execution.

3. Define bug reporting.


 Bug reporting is the process of documenting defects found during testing. It includes details like
the bug description, severity, steps to reproduce, and environment details.

4. Explain about tester assignments.


 Tester assignments involve assigning specific tasks or modules to testers based on their
expertise. It ensures efficient resource utilization and thorough test coverage.

5. Explain boundary value testing.


 Boundary value testing focuses on testing values at the boundaries of input ranges. For example,
if an input field accepts values from 1 to 100, the boundaries tested would be 1, 100, 0, and 101.
6. Define model-driven test design.
 Model-driven test design is an approach where test cases are derived from models representing
system behavior or architecture, such as state diagrams or use case models.

7. State the difference between mobile and web application testing.


 Mobile Application Testing: Focuses on testing features like screen resolutions, mobile-specific
functionalities (GPS, gestures), and different mobile platforms (iOS, Android).
 Web Application Testing: Ensures the application works across browsers, handles
responsiveness, and supports various resolutions and network conditions.

8. Define test log and need for a test plan.


 Test Log: A record of the events and activities during the testing process, used to track progress
and identify issues.
 Test Plan Need: Provides a roadmap for the testing process, defining scope, objectives,
resources, and schedules to ensure structured and efficient testing.

9. Mention any three software testing tools.


 Selenium, JUnit, TestNG.

10. Outline the need for test metrics.


 Test metrics help measure the effectiveness, progress, and quality of the testing process.
Examples include defect density, test coverage, and test execution rates, which provide insights
for improvement.
