STA Revision
Unit – 5
Selenium
Selenium is a popular open-source software testing framework used for
automating web applications.
It is widely used for functional testing, regression testing, and
performance testing.
Selenium supports multiple programming languages, including Java,
C#, Python, and Ruby, making it accessible to a wide range of
developers.
Selenium IDE
Selenium, a powerful automation tool, simplifies the process of testing web applications by
automating browser interactions.
It allows testers to write scripts in various languages, making it versatile for different
environments.
Components like WebDriver and Grid enable more efficient testing, ensuring the application
functions smoothly across browsers.
Initially, Selenium IDE (Integrated Development Environment) was implemented as a Firefox
add-on/plugin; it is now available as an extension for all major web browsers.
It provides record and playback functionality.
Advantages of Selenium IDE:
It is an open-source tool.
Provides a base for extensions.
It provides multi-browser support.
No programming language experience is required while using Selenium IDE.
The user can set breakpoints and debug.
It provides record and playback functions.
Easy to use and install.
Recorded test scripts can be exported to languages such as Java, Python, and C#.
Disadvantages of Selenium IDE:
It does not support iteration and conditional operations.
Execution is slow.
It does not have any API.
It does not provide any mechanism for error handling.
It is not well suited for testing dynamic web applications.
Selenium RC
RC stands for Remote Control.
It allows programmers to write test code in different programming languages such as C#, Java, Perl,
PHP, Python, Ruby, Scala, and Groovy.
Advantages of Selenium RC
It supports all web browsers.
2. GeckoDriver
● Limitations:
o Works only with Firefox; the driver version must match the installed browser version.
3. EdgeDriver
● Limitations:
o Works only with Edge and requires the correct version for the browser.
4. SafariDriver
● Limitations:
o Works only with Safari on macOS; remote automation must be enabled in Safari's Develop menu.
5. InternetExplorerDriver
● Limitations:
o Works only with Internet Explorer on Windows and is noticeably slower than other drivers.
6. OperaDriver
● Limitations:
o Works only with Opera (Chromium-based) and is no longer actively maintained.
7. RemoteWebDriver
● Limitations:
o Requires a running Selenium Server or Grid; network latency can slow down test execution.
8. HtmlUnitDriver
● Limitations:
o Headless (no real browser UI) and has limited JavaScript rendering support.
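As a minimal sketch (not from the source) of how some of these drivers are created, assuming the matching browsers and driver binaries are installed and using an illustrative Grid URL:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverExamples {
    public static void main(String[] args) throws Exception {
        WebDriver chrome = new ChromeDriver();     // local Chrome via ChromeDriver
        WebDriver firefox = new FirefoxDriver();   // local Firefox via GeckoDriver

        // RemoteWebDriver runs the test against a Selenium Grid / remote server
        WebDriver remote = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),   // illustrative Grid URL
                new ChromeOptions());

        chrome.quit();
        firefox.quit();
        remote.quit();
    }
}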
// Requires (Selenium 3 support.events API): org.openqa.selenium.support.events.EventFiringWebDriver,
// org.openqa.selenium.support.events.AbstractWebDriverEventListener, WebDriver, WebElement

// Wrap an existing driver so that its events can be observed
EventFiringWebDriver eventDriver = new EventFiringWebDriver(new ChromeDriver());

// Register a listener
eventDriver.register(new WebDriverListener());

// Implement WebDriverEventListener by extending AbstractWebDriverEventListener
class WebDriverListener extends AbstractWebDriverEventListener {
    @Override
    public void beforeClickOn(WebElement element, WebDriver driver) {
        System.out.println("Before clicking on: " + element.getText());
    }
    @Override
    public void afterClickOn(WebElement element, WebDriver driver) {
        System.out.println("After clicking on: " + element.getText());
    }
    @Override
    public void onException(Throwable throwable, WebDriver driver) {
        System.out.println("Exception occurred: " + throwable.getMessage());
    }
}
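As a usage sketch continuing the snippet above (the URL and element locator are illustrative assumptions), driving the browser through the event-firing wrapper triggers the listener callbacks:

// Usage sketch (requires org.openqa.selenium.By)
eventDriver.get("https://example.com/login");             // illustrative URL
eventDriver.findElement(By.id("loginButton")).click();    // beforeClickOn / afterClickOn fire here
eventDriver.quit();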
Practical Applications
1. Logging Events: Track all user actions during test execution for better test visibility.
2. Error Reporting: Capture detailed logs when an exception occurs.
3. Monitoring: Observe WebDriver interactions in real-time during execution.
By leveraging WebDriver events, testers can enhance debugging, optimize logging, and gain better
insights into the test execution flow.
UNIT 1
Differentiation Between White-Box Testing and Black-Box Testing
Black-Box Testing
Definition:
Black-box testing focuses on the functionality of the application without considering its internal
implementation. Testers verify whether the application behaves as expected by providing inputs and
analyzing outputs.
Key Characteristics:
1. Based on software specifications and requirements.
2. No knowledge of the internal code is required.
3. Focuses on testing:
o User interfaces.
o Functionality.
o Performance.
Techniques:
1. Equivalence Partitioning: Divide input data into valid and invalid partitions.
2. Boundary Value Analysis: Test at the boundaries of input ranges.
3. Decision Table Testing: Use decision rules for complex logic.
Example:
Consider a login functionality where the requirements specify:
A valid username is "admin".
A valid password is "1234".
Test Cases:
1. Input: username = "admin", password = "1234" → Expected Output: "Login Successful".
2. Input: username = "user", password = "1234" → Expected Output: "Invalid Username".
3. Input: username = "admin", password = "abcd" → Expected Output: "Invalid Password".
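A minimal Java sketch of these three black-box test cases; authenticate() is a hypothetical stub standing in for the hidden login logic, since a black-box tester only observes inputs and outputs:

public class LoginBlackBoxTest {

    // Stub shown only so the example compiles; in black-box testing the real
    // implementation is hidden from the tester.
    static String authenticate(String username, String password) {
        if (!"admin".equals(username)) return "Invalid Username";
        if (!"1234".equals(password)) return "Invalid Password";
        return "Login Successful";
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected)) {
            throw new AssertionError("Expected '" + expected + "' but got '" + actual + "'");
        }
    }

    public static void main(String[] args) {
        check(authenticate("admin", "1234"), "Login Successful");  // valid credentials
        check(authenticate("user", "1234"), "Invalid Username");   // wrong username
        check(authenticate("admin", "abcd"), "Invalid Password");  // wrong password
        System.out.println("All black-box login checks passed.");
    }
}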
Advantages:
Does not require programming knowledge.
Helps ensure the software meets user requirements.
Disadvantages:
Limited in identifying internal code issues.
Testing coverage may be incomplete for complex applications.
V-Model Testing:
The V-Model, also known as the Verification and Validation Model, is a software development
model where each development phase is directly associated with a corresponding testing phase.
It’s called the "V-Model" because the process looks like the letter "V."
Key Idea:
Left side of the V: Represents development stages (planning and designing the software).
Right side of the V: Represents testing stages, which validate the work done in the
corresponding development stages.
Bottom of the V: Coding or implementation.
Stages of the V-Model
1. Requirement Analysis
What happens?
The software’s functional and non-functional requirements are collected and documented.
Testing Phase: Acceptance Testing
o Ensures the software meets the client’s requirements.
2. System Design
What happens?
The overall system architecture and modules are planned.
Testing Phase: System Testing
o Validates the entire software system against the design and requirements.
3. Architecture Design (High-Level Design, HLD)
What happens?
Break the system into smaller modules, describing their functionality and interaction.
Testing Phase: Integration Testing
o Ensures the modules work together as planned.
4. Module Design (Low-Level Design, LLD)
What happens?
Detailed design of individual modules or components is created, focusing on how each one will
work.
Testing Phase: Unit Testing
o Tests individual components to ensure they work as intended.
5. Coding/Implementation
What happens?
The actual development of the software is done here.
Testing Phase: Begins with Unit Testing and proceeds upward.
Advantages of the V-Model
1. Early Detection of Defects: Testing starts early, preventing issues from propagating.
2. Clear Phases: Each phase is well-defined with corresponding testing.
3. Validation at Every Step: Ensures both functionality (does it work?) and requirements (is it
what the user wants?) are met.
4. Simple and Easy to Use: The structure is clear, making it easier to manage.
Disadvantages of the V-Model
1. Rigid: Changes in requirements are difficult to accommodate once the process starts.
2. No Iterations: Unlike Agile, it doesn’t allow flexibility or iterative development.
3. Costly: Errors made in early stages can be expensive to fix if they are discovered late in the cycle.
When to Use the V-Model
When requirements are well-defined and unlikely to change.
For small- to medium-sized projects with clear objectives.
For projects where quality is critical (e.g., healthcare, defense).
Simple Example
Imagine building a car:
1. Requirement Analysis: Decide what the car should do (e.g., speed, fuel efficiency).
o Acceptance Testing: Test if the car meets these requirements.
2. System Design: Plan the car’s overall design (engine, chassis, etc.).
o System Testing: Test if the whole car works as expected.
3. High-Level Design: Break it into systems (engine, brakes).
o Integration Testing: Check if the brakes and engine work together.
4. Low-Level Design: Plan individual parts (pistons, gears).
o Unit Testing: Test each piston or gear to ensure it functions.
UNIT 2
Explain about Test Phases, Test Strategy, Resource Requirements with example
1. Test Phases
Definition: Test phases are the stages of testing during a software development lifecycle, each focusing
on specific objectives and ensuring the application meets quality standards.
Phases with Examples:
1. Unit Testing
o What it is: Testing individual components or modules.
o Example: Testing a function that calculates the total price in a shopping cart (see the sketch after this list).
Input: totalPrice([10, 20, 30])
Expected Output: 60.
2. Integration Testing
o What it is: Testing how different modules interact with each other.
o Example: Testing the interaction between a login module and a dashboard module.
Scenario: After logging in, the correct dashboard should load.
3. System Testing
o What it is: Testing the entire system as a whole.
o Example: Testing an e-commerce site for overall functionality:
Browsing products, adding them to the cart, and completing a purchase.
4. Acceptance Testing
o What it is: Testing to ensure the software meets user requirements.
o Example: Verifying that an app displays correct search results based on customer
queries.
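A minimal sketch of the unit-testing example referenced above (totalPrice() is a hypothetical implementation written here only so the check can run):

import java.util.List;

public class TotalPriceTest {

    // Hypothetical function under test
    static double totalPrice(List<Double> prices) {
        return prices.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        double result = totalPrice(List.of(10.0, 20.0, 30.0));
        if (result != 60.0) {
            throw new AssertionError("Expected 60 but got " + result);
        }
        System.out.println("Unit test passed: totalPrice = " + result);
    }
}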
2. Test Strategy
Definition: The test strategy outlines the approach and methods used to achieve testing goals. It ensures
all aspects of the software are tested systematically.
Key Elements with Examples:
1. Types of Testing
o Example:
Use functional testing to verify login functionality.
Use performance testing to check if the system can handle 1000 users
simultaneously.
2. Test Environment
o Example: Use a staging server that mimics the production environment for system
testing.
3. Automation and Manual Testing
o Example: Automate repetitive tasks like regression testing using Selenium, and perform
manual testing for exploratory testing.
4. Entry and Exit Criteria
o Entry Criteria: Testing begins only after the development team provides a stable build.
o Exit Criteria: Testing ends when all critical defects are fixed and 95% of test cases pass.
3. Resource Requirements
Definition: Planning the resources (human, hardware, software, and budget) required for successful
testing.
Examples:
1. Human Resources
o Example:
A test manager oversees the process.
A test engineer writes and executes test cases.
An automation engineer develops scripts for automated testing.
2. Hardware Resources
o Example:
Testing requires multiple devices for compatibility testing:
Desktops, laptops, tablets, and smartphones.
3. Software Resources
o Example:
Testing tools like Selenium for automation, Postman for API testing, and JIRA for
bug tracking.
4. Budget Planning
o Example:
Allocate a budget for purchasing software licenses (e.g., TestRail for test
management).
Summary
Test Phases ensure the software is tested step-by-step, from individual units to the complete
system.
Test Strategy defines the methods and tools to achieve testing goals.
Resource Requirements ensure the right people, tools, and infrastructure are available to carry
out testing effectively.
By aligning these, testing becomes structured and ensures the delivery of high-quality software.
Explain about Test Schedule, Test Cases, Bug Reporting, Metrics, and Statistics
1. Test Schedule
The Test Schedule is a detailed plan that defines the timeline and sequence of testing activities to
ensure the completion of testing within the project deadline.
It includes milestones, deadlines, and the allocation of resources for each phase of testing.
Key Components:
Start and End Dates: Specifies when each phase (unit, integration, system, etc.) will begin and
end.
Milestones: Sets checkpoints for significant achievements, such as test case creation or the
completion of system testing.
Resource Allocation: Assigns testers to specific tasks or modules.
Dependencies: Identifies tasks that depend on the completion of others (e.g., integration testing
depends on the completion of unit testing).
Example (illustrative): Unit testing in Weeks 1–2, integration testing in Week 3 (dependent on unit testing completing), system testing in Week 4, and acceptance testing in Week 5, with a milestone review at the end of each phase.
2. Test Cases
A Test Case is a set of actions, inputs, and expected outcomes used to verify that a software application
behaves as intended.
Key Components:
Test Case ID: Unique identifier for the test case.
Description: Brief explanation of what is being tested.
Preconditions: Requirements that must be met before executing the test case.
Test Steps: Detailed steps to perform the test.
Expected Result: What should happen if the test passes.
Actual Result: The actual behavior observed during testing.
Status: Pass/Fail.
Example:
Test Case
Description Steps Expected Result Status
ID
Purpose:
Ensures all functionalities are tested.
Provides a clear record of what was tested.
3. Bug Reporting
Bug Reporting is the process of identifying, documenting, and communicating defects found during testing.
Key Components:
Bug ID: A unique identifier for the bug.
Description: Detailed explanation of the defect.
Severity and Priority:
o Severity: Impact of the defect on the system (e.g., critical, high, medium, low).
o Priority: Urgency with which the defect should be fixed (e.g., high, medium, low).
Example:
Bug ID: BUG001
Severity: Critical
Priority: High
Purpose:
Tracks defects efficiently.
Facilitates communication between testers and developers.
Summary
1. Test Schedule: A timeline for completing all testing phases with clear milestones and
dependencies.
2. Test Cases: Detailed scenarios that verify the software’s functionality.
3. Bug Reporting: Structured documentation and tracking of defects for resolution.
4. Metrics and Statistics: Quantitative data to evaluate the quality of software and testing
processes.
UNIT 4
Advanced Testing Concepts
1. Performance Testing
Performance testing evaluates the software's behavior under specific conditions to ensure speed,
scalability, and reliability.
Load Testing: Determines the system's ability to handle expected user loads.
Example: Testing an e-commerce platform during a flash sale to ensure it can handle 10,000
users.
Stress Testing: Examines the system under extreme loads to identify its breaking point.
Example: Simulating 50,000 users on a system designed for 20,000 users.
Volume Testing: Checks how the system handles large data volumes.
Example: Uploading millions of records to assess database performance.
Failover Testing: Ensures that backup systems function correctly in case of failure.
Example: Testing whether a secondary server activates seamlessly during a primary server
crash.
Recovery Testing: Evaluates the system's ability to recover after a failure.
Example: Checking if a banking app resumes correctly after a network outage.
2. Configuration Testing
This tests the software on various hardware and software configurations to ensure compatibility.
Example: Testing a web application on different operating systems (Windows, macOS, Linux) and
browsers (Chrome, Firefox, Safari).
3. Compatibility Testing
Compatibility testing ensures the software works across different devices, networks, and environments.
Example: Testing an application’s compatibility on iOS and Android devices, or its responsiveness on
different screen sizes.
4. Usability Testing
This focuses on evaluating the user-friendliness of the application to enhance the user experience.
Goals:
Simplify navigation.
Ensure intuitive design.
Example: Testing a food delivery app to ensure customers can easily order food in three steps.
Performance Testing
Performance testing is a type of software testing that evaluates the speed, responsiveness, and
stability of a system under a specific workload.
It ensures that the software performs well under expected and stress conditions.
Performance testing is critical for applications where user experience and reliability are
paramount, such as e-commerce websites, banking applications, or mobile apps.
Objectives of Performance Testing
1. Identify Performance Bottlenecks
To pinpoint areas where the system underperforms or fails to meet expected standards.
2. Ensure Scalability
To verify whether the system can handle increasing workloads as the user base grows.
3. Validate Reliability
To confirm that the system can perform consistently under normal and stress conditions.
4. Optimize System Performance
To fine-tune the system's components (e.g., database queries, server configurations) for
maximum efficiency.
5. Ensure User Satisfaction
To ensure a smooth user experience by meeting response time and performance expectations.
Types of Performance Testing
1. Load Testing
o Definition: Measures system performance under expected user loads.
o Goal: To ensure the system can handle anticipated traffic and transactions.
o Example: Testing an e-commerce platform to verify if it can handle 10,000 simultaneous
users during a sale.
2. Stress Testing
o Definition: Tests the system under extreme workloads to determine its breaking point.
o Goal: To identify system limits and how it recovers after failure.
o Example: Simulating 50,000 users on a system designed for 20,000 users to observe its
failure behavior.
3. Volume Testing
o Definition: Examines how the system handles large volumes of data.
o Goal: To check database performance and data handling capacity.
o Example: Uploading millions of records into a database to ensure its stability.
4. Scalability Testing
o Definition: Determines the system's ability to scale up or down as per workload
requirements.
o Goal: To check how additional resources (CPU, memory) impact system performance.
o Example: Testing a server by gradually increasing the number of users.
5. Failover Testing
o Definition: Verifies the system's ability to maintain functionality during a failure.
o Goal: To ensure reliability and recovery.
o Example: Testing whether a backup server activates when the primary server crashes.
Key Metrics in Performance Testing
1. Response Time
Time taken by the system to respond to a user request.
Example: Measuring the time it takes for a search query to return results (see the measurement sketch after this list).
2. Throughput
The number of transactions processed by the system in a given time.
Example: Testing how many orders an e-commerce system processes per minute.
3. CPU and Memory Usage
Resources consumed by the system under various workloads.
Example: Observing memory usage during bulk file uploads.
4. Error Rate
Percentage of failed transactions or errors during testing.
Example: Monitoring login failures during peak traffic.
5. Concurrency
Number of simultaneous users the system can support without performance degradation.
Example: Checking the number of active users an online game server can handle.
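A minimal Java sketch (referenced in the Response Time item above) showing how response time, throughput, and error rate can be measured against a simple HTTP endpoint; the URL and request count are illustrative assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimpleMetrics {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).GET().build();

        int totalRequests = 50;   // illustrative request count
        int errors = 0;
        long start = System.nanoTime();

        for (int i = 0; i < totalRequests; i++) {
            long t0 = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long responseTimeMs = (System.nanoTime() - t0) / 1_000_000;   // response time per request
            if (response.statusCode() >= 400) {
                errors++;                                                  // count failed transactions
            }
            System.out.println("Response time: " + responseTimeMs + " ms");
        }

        double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;
        System.out.println("Throughput: " + (totalRequests / elapsedSeconds) + " requests/second");
        System.out.println("Error rate: " + (100.0 * errors / totalRequests) + " %");
    }
}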
Steps in Performance Testing
1. Requirement Analysis
o Define performance goals and benchmarks.
Example: Setting a maximum response time of 2 seconds for an e-commerce checkout
process.
2. Test Environment Setup
o Create a testing environment that mirrors the production system.
Example: Setting up servers, databases, and network configurations similar to live
conditions.
3. Test Script Design
o Develop scripts to simulate user behavior.
Example: Writing a script to mimic users browsing, adding items to a cart, and checking
out.
4. Execution
o Run performance tests under different conditions (normal, peak, stress).
Example: Testing an application with 5,000, 10,000, and 15,000 users.
5. Monitoring
o Observe system performance metrics during testing.
Example: Using tools like JMeter or LoadRunner to monitor response times and error
rates.
6. Analysis and Reporting
o Analyze test results to identify bottlenecks and propose improvements.
Example: Reporting that a slow database query is causing high response times.
Tools for Performance Testing
1. JMeter
Open-source tool for load and stress testing.
Example: Simulating 1,000 users accessing a website (see the command-line note after this list).
2. LoadRunner
Comprehensive tool for performance and load testing.
Example: Testing a banking application under high transaction loads.
3. Gatling
A tool for load testing and analyzing performance in real-time.
Example: Testing API endpoints for an application.
4. AppDynamics/New Relic
Tools for application performance monitoring.
Example: Identifying server response issues in production environments.
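As a brief usage note for JMeter (item 1 above): a saved test plan is typically run in non-GUI mode from the command line, e.g. jmeter -n -t test_plan.jmx -l results.jtl, where -n runs JMeter without the GUI, -t specifies the test plan file, and -l specifies the results log file (file names here are illustrative).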
Key Practices in Agile Testing
1. Sprint-Based Testing
Testing is aligned with sprints, where test cases are developed and executed for user stories
delivered in each sprint.
Example: Testing a search functionality developed in Sprint 2 of a project.
2. Behavior-Driven Development (BDD)
In BDD, test scenarios are written in plain language, bridging the gap between technical and
non-technical team members.
Example: Writing test scenarios like:
o "Given a user is on the login page, when valid credentials are entered, the user is
redirected to the dashboard."
3. Exploratory Testing
Testers explore the application to identify unexpected issues not covered by automated or
scripted tests.
Example: Testing an e-commerce site's payment gateway by entering invalid card details.
4. Acceptance Test-Driven Development (ATDD)
Stakeholders collaborate to define acceptance criteria, which are then converted into test cases.
Example: Defining a test scenario that validates a successful purchase on an e-commerce
website.
5. Automated Regression Testing
Automated tests are run frequently to ensure that existing features work after new code changes.
Example: Running a suite of tests on a continuous integration (CI) pipeline like Jenkins.
How Agile Methodology Influences Testing Processes and Techniques
1. Frequent Feedback Loops
Agile fosters continuous feedback from testers, developers, and stakeholders, allowing quick
adjustments.
Example: Testing a prototype in early sprints and incorporating feedback into subsequent
iterations.
2. Shorter Development Cycles
Agile’s iterative nature requires testers to work faster and more efficiently, often leveraging
automation tools.
Example: Using Selenium to automate repetitive UI tests.
3. Focus on Collaboration
Testers are integral to Agile teams, participating in all stages of development, from planning to
deployment.
Example: Testers contributing to user story refinement and helping clarify acceptance criteria.
4. Increased Use of Automation
The need for speed in Agile projects means most repetitive and regression testing is automated.
Example: Automating end-to-end testing of a web application.
5. Shift-Left Testing
Testing moves earlier in the development lifecycle, with testers involved right from the
requirements phase.
Example: Reviewing user stories and writing test cases before coding starts.
6. Adaptability and Continuous Improvement
Agile testing processes are flexible, allowing adjustments based on retrospectives and sprint
reviews.
Example: Adapting a testing strategy after identifying gaps in the previous sprint.
UNIT 3
Test Objective Identification
Definition: Identifying the specific goals that testing aims to achieve.
Purpose: Ensures testing activities align with business and technical requirements.
Example: The objective for an e-commerce website might be to verify that the checkout process
works without errors under high traffic.
Requirement Identification
Definition: Analyzing and documenting requirements to ensure the test coverage aligns with
user needs and business objectives.
Example: Identifying requirements like "The system must support payments through credit cards
and digital wallets."
Testable Requirements
Definition: Breaking down requirements into clear, measurable, and testable statements.
Example: "The login page should authenticate users within 2 seconds after entering valid
credentials."
Test Procedures
Definition: Step-by-step instructions for executing test cases.
Example: "Open the login page, enter valid credentials, click 'Login,' and verify redirection to
the dashboard."
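A minimal Selenium sketch of this procedure; the URL and element locators are illustrative assumptions:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginProcedureTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login");                   // open the login page (illustrative URL)
        driver.findElement(By.id("username")).sendKeys("admin");   // enter valid credentials (illustrative locators)
        driver.findElement(By.id("password")).sendKeys("1234");
        driver.findElement(By.id("loginButton")).click();          // click 'Login'
        if (!driver.getCurrentUrl().contains("dashboard")) {       // verify redirection to the dashboard
            throw new AssertionError("User was not redirected to the dashboard");
        }
        driver.quit();
    }
}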
Test Case Organization and Tracking
Definition: Managing and monitoring test cases to ensure proper execution and maintenance.
Example: Using tools like JIRA or TestRail to organize test cases and track their execution
status.
Bug Reporting
Definition: Documenting issues identified during testing for resolution by the development team.
Example: Reporting that "The checkout button does not work in the mobile view."
4. Recall the significance of bug reporting in the test planning phase of software development.
o Bug reporting helps track and document defects during the test planning phase, ensuring
developers are aware of issues that need to be resolved. It improves collaboration and
enhances the quality of the final product.
5. What are the design factors to consider in test design for software testing?
o Functionality, performance, usability, security, compatibility, scalability, and reliability
are critical factors to consider in test design. These ensure comprehensive test coverage
and alignment with user requirements.