Manual Testing Interview Questionnaire Part 1
By Syed Tabish Ishtiaq | Email: [email protected] | +923343308650
What is Software?
Software is a set of instructions, data, or programs used to operate a computer.
What are the types of Software?
System software helps the user, hardware, and application software communicate with each
other. Examples: operating systems, BIOS, device drivers, etc.
Application software is ready-to-use software that helps end users perform tasks such as taking
notes, managing data, and doing research. Examples: Microsoft Word, calculators, browsers.
Web applications are application programs stored on remote servers and delivered over the
internet through a browser interface. Examples: online forms, shopping carts, video streaming,
social media, games, etc.
What is SDLC?
SDLC (Software Development Life Cycle) is a defined process used to develop software.
What are the stages of SDLC?
• Requirement Analysis
• Design
• Implementation
• Verification or Testing
• Maintenance
What happens in SDLC Phases?
1. Requirements Gathering Phase:
o Entry Criteria: Business case approved, project charter in place, and stakeholders
identified.
o Activities: Gather user requirements, conduct interviews and workshops, document
functional and non-functional requirements.
o Exit Criteria: Approved requirements document, signed-off user requirements.
o Deliverables: Requirements document, use cases, user stories.
2. System Analysis Phase:
o Entry Criteria: Completed requirements gathering phase, approved requirements
document.
o Activities: Analyze requirements, define system architecture, identify system
components and interfaces.
o Exit Criteria: System architecture document, system design specifications.
o Deliverables: System architecture document, system design specifications.
3. System Design Phase:
o Entry Criteria: Completed system analysis phase, approved system design
specifications.
o Activities: Develop detailed system design, create database schema, design user
interface, define system modules and functions.
o Exit Criteria: Detailed system design document, approved database schema and
user interface designs.
o Deliverables: Detailed system design document, database schema, user interface
designs.
4. Coding/Development Phase:
o Entry Criteria: Completed system design phase, approved detailed system design.
o Activities: Write code, develop software components, perform unit testing,
integrate modules.
o Exit Criteria: Developed and tested software modules, integrated system.
o Deliverables: Code files, software modules, integrated system.
5. Testing Phase:
o Entry Criteria: Completed coding/development phase, integrated system.
o Activities: Develop test cases, perform functional and non-functional testing,
execute test cases, log and track defects.
o Exit Criteria: Completed test cases, resolved defects, acceptable test coverage.
o Deliverables: Test cases, test results, defect reports.
6. Deployment/Release Phase:
o Entry Criteria: Completed testing phase, approved software.
o Activities: Prepare for deployment, create installation packages, perform user
acceptance testing, deploy software.
o Exit Criteria: Successfully deployed software, acceptance from users.
o Deliverables: Installed software, user acceptance sign-off.
7. Maintenance and Support Phase:
o Entry Criteria: Completed deployment/release phase.
o Activities: Provide ongoing maintenance and support, address user feedback and
issues, perform bug fixes and updates.
o Exit Criteria: Sustained system performance, satisfied user requirements.
o Deliverables: Bug fixes, system updates, user support.
What are the Agile principles?
• Customer Satisfaction
• Welcome Change
• Deliver Frequently
• Work Together
• Motivated Team
• Face-to-face
• Working Software
• Constant Pace
• Good Design
• Simplicity
• Self-Organization
• Reflect and Adjust
What is Scrum?
Scrum is an agile framework for managing and organizing complex projects, particularly in
software development, and it follows agile principles.
What are the Scrum artifacts?
1. Product Backlog: The Product Backlog is a prioritized list of user stories, features, and
requirements that need to be implemented in the product. It is managed and maintained by
the Product Owner.
2. Sprint Backlog: The Sprint Backlog is a subset of the Product Backlog and contains the list
of tasks and user stories that the Development Team commits to complete within a Sprint.
3. Increment: The Increment is the sum of all the completed and potentially shippable product
backlog items at the end of a Sprint. It represents a usable and potentially releasable version
of the product.
4. Sprint Goal: The Sprint Goal is a short statement that describes the objective or purpose of
the Sprint. It provides a clear focus and direction for the Development Team during the
Sprint.
How many Scrum meetings/events occur?
1. Sprint Planning: This meeting marks the beginning of the sprint and involves the Product
Owner and Development Team. They collaborate to determine which backlog items will
be worked on during the sprint and create a sprint goal.
2. Daily Scrum: Also known as the daily stand-up, this is a short daily meeting for the
Development Team to synchronize their activities. Each team member answers three
questions: What did I do yesterday? What will I do today? Are there any obstacles in my
way?
3. Sprint Review: At the end of the sprint, the Development Team presents the completed
work to the stakeholders and gathers feedback. The Product Owner assesses the work
completed against the sprint goal.
4. Sprint Retrospective: This meeting occurs after the Sprint Review and is an opportunity
for the Scrum Team to reflect on the sprint and identify areas for improvement. They
discuss what went well, what could be improved, and define action items for the next sprint.
What is a Sprint?
A Sprint is a time-boxed iteration during which the Scrum Team works to complete a set of
prioritized Sprint Backlog items.
What charts/reports are used in Scrum?
1. Burndown Chart: Tracks remaining work over time in a sprint or project, helping the team
monitor progress and stay on track.
2. Velocity Chart: Shows average work completed by the team in each sprint, aiding in future
work estimation and project timeline planning.
3. Cumulative Flow Diagram: Visualizes work flow through different stages, identifies
bottlenecks, and evaluates overall workflow efficiency.
4. Release Burndown Chart: Tracks progress of completing backlog items or features over
time, ensuring alignment with planned scope and release date.
5. Sprint Goal Chart: Illustrates the sprint goal and key objectives, keeping the team focused
and aligned throughout the iteration.
6. Team Task Board: Visualizes tasks or user stories within a sprint, promoting transparency
and collaboration among team members.
What is meant by user story and story point in Scrum?
1. User Stories: User stories are brief descriptions of a feature or functionality written in a
user-centric format. They capture the user's role, the desired feature, and the benefit it
provides. User stories promote collaboration and understanding between the development
team and stakeholders.
2. Story Points: Story points are a relative measure used to estimate the effort required to
complete a user story. They represent the complexity and size of the work. Story points are
assigned based on the team's collective judgment, using a scale like the Fibonacci sequence.
They help in planning and prioritizing tasks, and provide a basis for measuring the team's
productivity and velocity.
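As a quick illustration (the sprint numbers below are invented), velocity is simply the average
story points a team completes per sprint, which can then be used to forecast the remaining work:

    import math

    # Hypothetical sprint history: story points completed in each finished sprint.
    completed_points = [21, 18, 24, 19]

    # Velocity is the average number of story points completed per sprint.
    velocity = sum(completed_points) / len(completed_points)  # 20.5

    # Forecast how many sprints the remaining backlog will take at this velocity.
    remaining_points = 82
    sprints_needed = math.ceil(remaining_points / velocity)  # 4

    print(f"Velocity: {velocity} points/sprint, sprints needed: {sprints_needed}")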
What are white-box, black-box, and grey-box testing?
1. White-box Testing → White-box testing examines the internal structure, design, and code
of a software application. Testers have access to the system's internal details and create test
cases based on this knowledge. The goal is to validate the internal logic and flow of the
software.
2. Black-box Testing → Black-box testing focuses on the external behavior of a software
application. Testers have no knowledge of the internal implementation and treat the
software as a "black box." Test cases are created based on specified requirements to
validate that the software functions correctly without considering internal details.
3. Grey-box Testing → Grey-box testing combines elements of white-box and black-box
testing. Testers have partial knowledge of the internal structure and code of the software.
They leverage this knowledge to design more effective test cases. Grey-box testing strikes
a balance between internal understanding and requirement-based testing.
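As a small sketch of the difference in perspective, consider a hypothetical apply_discount
function (the function and its 10% rule are invented for this illustration): a black-box tester
derives checks from the stated requirement only, while a white-box tester reads the code and
targets each branch.

    def apply_discount(price, is_member):
        """Hypothetical function under test: members get 10% off."""
        if is_member:
            return round(price * 0.9, 2)
        return price

    # Black-box view: derive tests purely from the stated requirement
    # ("members get 10% off"), ignoring how the function is written.
    assert apply_discount(100.0, is_member=True) == 90.0
    assert apply_discount(100.0, is_member=False) == 100.0

    # White-box view: read the code and make sure every branch of the
    # `if is_member` decision is executed at least once.
    assert apply_discount(50.0, True) == 45.0    # true branch
    assert apply_discount(50.0, False) == 50.0   # false branch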
What are the levels of testing?
Unit Testing → Unit testing focuses on testing small, individual parts of the software to ensure
they work correctly. It helps identify bugs or errors early on and improves the overall quality of
the software.
Integration Testing → Integration testing verifies how different parts of the software interact with
each other. It ensures that integrated components function as intended and transfer data accurately.
Integration testing helps uncover and resolve issues that may arise from the integration of
components.
System Testing → System testing evaluates the entire software system to ensure it functions
properly. It checks if all components and subsystems work together seamlessly, meet the specified
requirements, and deliver the intended functionality. System testing covers various aspects,
including performance and reliability.
Acceptance Testing → Acceptance testing assesses whether the software meets the business
requirements and fulfills user expectations. It involves testing real-world scenarios and user inputs
to validate the software's functionality. Acceptance testing is typically conducted by end-users or
stakeholders to determine if the software is ready for production use.
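A minimal sketch of the unit/integration distinction, assuming a hypothetical price calculator
and tax service (all names and the 8% rate are invented):

    import unittest

    # Hypothetical components used only for illustration.
    def calculate_subtotal(prices):
        return sum(prices)

    class TaxService:
        def tax_for(self, amount):
            return round(amount * 0.08, 2)  # flat 8% tax, invented for the example

    def calculate_total(prices, tax_service):
        subtotal = calculate_subtotal(prices)
        return subtotal + tax_service.tax_for(subtotal)

    class UnitTests(unittest.TestCase):
        # Unit test: one component verified in isolation.
        def test_subtotal(self):
            self.assertEqual(calculate_subtotal([10.0, 5.0]), 15.0)

    class IntegrationTests(unittest.TestCase):
        # Integration test: components verified working together.
        def test_total_with_tax_service(self):
            self.assertEqual(calculate_total([10.0, 5.0], TaxService()), 16.2)

    if __name__ == "__main__":
        unittest.main()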
What are the types of testing?
Functional Testing → Functional testing is the process of checking whether software works as
intended and meets the specified requirements.
Non-Functional Testing → Non-functional testing evaluates how well software performs under
different conditions, focusing on aspects like speed, usability, security, reliability, and
compatibility.
Structural Testing → Structural testing, also known as white-box testing, examines the internal
structure and code of software to ensure its logical flow and the relationships between components
are correct.
Change-Related Testing → Change-related testing is performed when software is modified or
enhanced, aiming to test the affected areas and ensure that the changes do not introduce new issues
or disrupt existing functionality.
What are the different types of software testing?
Unit Testing → Unit testing is when individual parts or components of software are tested to ensure
they work correctly in isolation.
Exploratory Testing → Exploratory testing is an approach where testers explore the software
application without predefined test cases. They learn, test, and adapt their testing as they interact
with the system, uncovering defects and gaining a deeper understanding of the software.
Ad-hoc Testing → Ad-hoc testing is an informal and unplanned testing approach. Testers
perform tests based on their experience, intuition, and domain knowledge, without following any
specific test documentation or predefined steps.
Component Testing → Component testing focuses on testing the different parts or modules of
software to make sure they function properly and fit together seamlessly.
Smoke Testing → Smoke testing is a quick initial test that checks if the essential features of an
application work without going into extensive testing.
Sanity Testing → Sanity testing is a brief and focused test performed to quickly verify that recent
changes or fixes in the software have not caused major issues or bugs.
Regression Testing → Regression testing involves retesting specific areas of software to confirm
that previous functionality has not been impacted by changes or updates.
Integration Testing → Integration testing verifies how well different components or modules of
software work together and interact with each other.
API Testing → API testing is the evaluation of application programming interfaces (APIs) to
ensure they function correctly, are reliable and secure, and meet the specified requirements.
UI Testing → UI testing examines the user interface of software to ensure it is user-friendly,
visually appealing, and operates correctly.
System Testing → System testing assesses the entire software system to ensure that all components
function together as intended and meet the specified requirements.
White-box Testing → White-box testing is a method that looks into the internal structure and code
of software to validate its logic and functionality.
Black-box Testing → Black-box testing is a technique that assesses the functionality of software
without considering its internal code or structure. It focuses on inputs and outputs.
Acceptance Testing → Acceptance testing is performed to determine if the software meets the
user's requirements and is ready for acceptance or deployment.
Alpha Testing → Alpha testing involves selected users or testers at the development site checking
for defects and providing feedback before releasing the software to the public.
Beta Testing → Beta testing involves releasing the software to a limited group of external users or
the public to gather feedback, identify bugs, and evaluate overall performance.
Production Testing → Production testing is conducted on the live or production environment to
ensure the software functions correctly and meets performance and reliability expectations.
What are white-box testing techniques?
1. Statement Coverage Testing → This testing ensures that every line of code is executed at
least once during testing, covering all executable statements.
2. Branch Coverage Testing → This testing ensures that all possible branches or decision
points in the code, including both true and false branches, are tested.
3. Path Coverage Testing → This testing aims to cover every possible path through the code,
ensuring that all sequences of statements and branches are executed.
4. Condition Coverage Testing → This testing verifies that all conditions in the code,
including both true and false evaluations, are tested to ensure all possible outcomes are
evaluated.
5. Loop Coverage Testing → This testing focuses on testing loops in the code, including zero
iterations, one iteration, multiple iterations, and boundary cases.
6. Data Flow Testing → This testing examines how data is processed and propagated within
the code, testing variables, assignments, and data dependencies to identify any issues
related to data flow.
7. Mutation Testing → This testing involves making intentional modifications or mutations
in the code to assess the effectiveness of test cases in detecting and identifying these
mutations.
8. Boundary Value Testing → This testing checks the input and output boundaries of code
segments to ensure that the code behaves correctly at the edges of valid and invalid inputs.
9. Equivalence Partitioning → This testing involves dividing input data into equivalent
groups to reduce the number of test cases while ensuring that each group is adequately
covered.
What are the types of change-related testing?
1. Regression Testing → Regression testing validates that the existing software functionality
is unaffected by changes. It involves retesting previously tested features to ensure they still
work correctly.
2. Integration Testing → Integration testing verifies that different software components or
modules work together seamlessly after changes. It ensures proper interaction and
functionality integration.
3. Retesting → Retesting focuses on retesting specific areas or functionalities that have been
modified or fixed to confirm that the reported issues have been resolved and the affected
functionality works as expected.
4. Smoke Testing → Smoke testing is a quick initial test conducted after changes to ensure
that critical software functionalities are working correctly before proceeding with more
detailed testing.
5. Impact Analysis Testing → Impact analysis testing assesses the potential impact of
changes on different areas of the software. It identifies affected areas and guides the
selection of appropriate test cases for testing.
6. Configuration Testing → Configuration testing ensures that the software functions
correctly with different configurations, setups, and environments after changes. It tests
compatibility and performance under various configurations.
7. Patch Testing → Patch testing validates the correct application of software patches or
hotfixes. It ensures that the patches address reported issues without introducing new
problems.
8. Compatibility Testing → Compatibility testing verifies that changes to the software have
not affected its compatibility with different platforms, browsers, operating systems, or
hardware configurations.
9. Impact on Performance Testing → Impact on performance testing evaluates the effect of
changes on the software's performance. It identifies performance improvements or
degradation resulting from the changes.
10. Data Migration Testing → Data migration testing is performed when changes involve
transferring data from one system or format to another. It ensures accurate and reliable data
transfer without loss or corruption.
What is meant by software test design techniques?
• Software test design techniques → These are systematic methods for creating test cases
that ensure comprehensive coverage of the software. They help testers identify what to test—
inputs, conditions, and actions—to design well-organized test cases. These techniques
maximize coverage while minimizing test cases. Examples include equivalence
partitioning, boundary value analysis, and decision table testing. They improve
efficiency, prioritize critical areas, and uncover defects, ensuring high-quality software.
1. Static Design Techniques → Static design techniques evaluate software design without
executing the code. They find design flaws, inconsistencies, and issues early in
development. Examples include design reviews, walkthroughs, and code inspections.
2. Static Analysis → Static analysis analyzes source code or software artifacts without
executing the program. Tools scan the code for defects, vulnerabilities, violations, and
issues. It improves code quality, identifies errors, and enhances reliability and security.
3. Dynamic Test Design Techniques → Dynamic test design techniques create test cases
based on software behavior during execution. They focus on scenarios, inputs, and paths.
Examples include equivalence partitioning, boundary value analysis, state transition
testing, and decision table testing. These techniques ensure thorough testing and detect
defects during runtime.
What are the static test design techniques?
1. Informal Review → Informal review is a flexible and casual approach where a group
provides feedback and identifies potential issues in software artifacts without following a
strict process or predefined roles.
2. Walkthrough → A walkthrough is a collaborative approach where the author guides
participants through a software artifact to gather feedback, clarify concepts, and promote
understanding among stakeholders.
3. Technical Review → Technical review is a structured evaluation conducted by experts to
assess technical aspects of software artifacts, such as design or code, to identify defects,
risks, or improvements.
4. Inspection → Inspection is a rigorous technique where experts systematically examine
software artifacts to identify defects, inconsistencies, or violations of standards, aiming to
improve quality and correctness.
5. Static Analysis → Static analysis is a technique that analyzes software artifacts, like source
code or documentation, without running the program. It uses tools to identify defects,
vulnerabilities, violations, or other issues. It improves code quality, finds errors, and
strengthens software reliability and security.
What are the dynamic test design techniques?
1. Equivalence Partitioning: Equivalence partitioning is a test design technique that groups
input data into partitions, reducing the number of test cases needed. Each partition is tested
at least once, covering different input value ranges or categories.
Example of Equivalence Partitioning:-
Consider a system that requires a user to input their income for a tax calculation.
Instead of testing every possible income value, we can apply equivalence partitioning to
group the income values into partitions. Let's assume we divide the income range into three
partitions: Low, Medium, and High.
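A sketch of how those three partitions could be exercised in code; the tax brackets and rates
below are invented purely for illustration:

    def tax_rate(income):
        """Hypothetical tax rules: the brackets and rates are invented."""
        if income < 0:
            raise ValueError("income cannot be negative")
        if income <= 20_000:       # Low partition
            return 0.10
        if income <= 80_000:       # Medium partition
            return 0.20
        return 0.30                # High partition

    # One representative value per partition is enough: every value inside a
    # partition is expected to behave the same way.
    assert tax_rate(10_000) == 0.10   # Low
    assert tax_rate(50_000) == 0.20   # Medium
    assert tax_rate(150_000) == 0.30  # High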
2. Boundary Value Analysis: Boundary value analysis tests the boundaries of input values
by selecting test cases at or just beyond those boundaries. It helps identify defects that
often occur at the edges of valid and invalid input ranges.
Example of BVA:-
Consider a system that requires the user to input a number of items for an online shopping
cart. The system allows a minimum of 1 item and a maximum of 10 items in the cart.
Applying boundary value analysis, we would select test cases at the boundaries and just
beyond them:
1. Minimum Value (Boundary): Test the system with 1 item in the cart. This checks
if the system correctly handles the minimum valid input.
2. Just Below Minimum (Invalid): Test the system with 0 items in the cart. This
ensures that invalid inputs below the minimum value are rejected.
3. Maximum Value (Boundary): Test the system with 10 items in the cart. This checks
if the system correctly handles the maximum valid input.
4. Just Above Maximum (Invalid): Test the system with 11 items in the cart. This
ensures that invalid inputs above the maximum value are rejected.
By selecting test cases at the boundaries and just beyond them, boundary value analysis
helps identify defects that often occur at the edges of valid and invalid input ranges. It
focuses on critical areas where issues are more likely to arise, providing thorough test
coverage while minimizing the number of test cases required.
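Those four cases translate directly into checks against a hypothetical cart validator (a sketch
only, assuming the 1-to-10-item rule described above):

    def validate_cart_quantity(quantity):
        """Hypothetical validator: the cart accepts 1 to 10 items."""
        return 1 <= quantity <= 10

    # Boundary value analysis: the boundaries and the values just outside them.
    assert validate_cart_quantity(1) is True    # minimum (boundary)
    assert validate_cart_quantity(0) is False   # just below minimum (invalid)
    assert validate_cart_quantity(10) is True   # maximum (boundary)
    assert validate_cart_quantity(11) is False  # just above maximum (invalid)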
3. Decision Table Testing: Decision table testing systematically tests various combinations
of conditions and corresponding actions using a tabular format. Testers create test cases
to cover all possible combinations, ensuring comprehensive coverage and identifying
defects related to decision logic.
Example of Decision Table:-
Consider a system that calculates the shipping cost for an online shopping platform
based on the weight and destination of the package. The decision table for this
scenario might look as follows:
Condition 1: Weight | Condition 2: Destination | Action: Shipping Cost
Below 1kg           | Local                    | $5.00
Below 1kg           | International            | $15.00
1kg and above       | Local                    | $10.00
1kg and above       | International            | $25.00
Using decision table testing, we can create test cases that cover all possible
combinations of conditions and corresponding actions:
1. Test case 1: Weight below 1kg, Destination: Local. The expected action is
a shipping cost of $5.00.
2. Test case 2: Weight below 1kg, Destination: International. The expected
action is a shipping cost of $15.00.
3. Test case 3: Weight 1kg and above, Destination: Local. The expected action
is a shipping cost of $10.00.
4. Test case 4: Weight 1kg and above, Destination: International. The expected
action is a shipping cost of $25.00.
By creating test cases that cover all possible combinations, decision table testing
ensures comprehensive coverage of the system's decision logic. It helps identify
defects or issues related to decision-making within the system, ensuring accurate
and reliable results based on different combinations of conditions.
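The table also maps naturally onto table-driven tests; the sketch below assumes a hypothetical
shipping_cost function that implements the four rules:

    def shipping_cost(weight_kg, destination):
        """Hypothetical implementation of the four decision-table rules."""
        if weight_kg < 1:
            return 5.00 if destination == "Local" else 15.00
        return 10.00 if destination == "Local" else 25.00

    # Each row of the decision table becomes one test case.
    decision_table = [
        (0.5, "Local", 5.00),
        (0.5, "International", 15.00),
        (2.0, "Local", 10.00),
        (2.0, "International", 25.00),
    ]

    for weight, destination, expected in decision_table:
        assert shipping_cost(weight, destination) == expected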
4. State Transition Testing: State transition testing focuses on testing the behavior of
systems with different states and transitions. Test cases cover various state transitions,
ensuring correct system functionality as it moves between states.
Consider a system for a traffic signal that has three states: "Red," "Yellow," and
"Green." State transition testing focuses on testing the behavior of the system as it
moves between these states. Let's define the valid state transitions:
• From "Red," the signal transitions to "Green."
• From "Green," the signal transitions to "Yellow."
• From "Yellow," the signal transitions to either "Red" or "Green."
Based on these state transitions, we can design test cases to cover various scenarios:
1. Test case 1: Start with the traffic signal in the "Red" state. Transition the signal
to "Green" and verify if the system updates the state correctly.
2. Test case 2: Start with the traffic signal in the "Green" state. Transition the
signal to "Yellow" and verify if the system updates the state correctly.
3. Test case 3: Start with the traffic signal in the "Yellow" state. Transition the
signal to "Red" and verify if the system updates the state correctly.
4. Test case 4: Start with the traffic signal in the "Yellow" state. Transition the
signal to "Green" and verify if the system updates the state correctly.
By designing test cases that cover various state transitions, state transition testing
ensures that the system behaves correctly as it moves between different states. This
technique helps identify any issues related to state changes, ensuring the system's
functionality aligns with the expected behavior of a traffic signal.
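A sketch of the traffic signal as a small state machine, with the four transition test cases above
checked against its allowed-transitions table (the TrafficSignal class is invented for
illustration):

    class TrafficSignal:
        """Hypothetical state machine for the traffic-signal example."""
        TRANSITIONS = {
            "Red": {"Green"},
            "Green": {"Yellow"},
            "Yellow": {"Red", "Green"},
        }

        def __init__(self, state="Red"):
            self.state = state

        def transition_to(self, new_state):
            if new_state not in self.TRANSITIONS[self.state]:
                raise ValueError(f"invalid transition: {self.state} -> {new_state}")
            self.state = new_state

    # The four valid state transitions from the test cases above.
    for start, target in [("Red", "Green"), ("Green", "Yellow"),
                          ("Yellow", "Red"), ("Yellow", "Green")]:
        signal = TrafficSignal(start)
        signal.transition_to(target)
        assert signal.state == target

    # An invalid transition should be rejected.
    try:
        TrafficSignal("Red").transition_to("Yellow")
        assert False, "expected invalid transition to raise"
    except ValueError:
        pass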
5. Use Case Testing: Use case testing validates system functionality based on specified use
cases. Test cases are designed to verify if the system behaves as expected during
execution of different use cases, ensuring compliance with use case requirements.
Consider a system for an online shopping platform that includes various use cases. Let's
focus on the "Add to Cart" use case. The steps for this use case might include:
1. User logs in to the system.
2. User searches for a product.
3. User selects a product.
4. User adds the product to the cart.
5. System updates the cart with the added product.
To perform use case testing for the "Add to Cart" use case, we can design test cases
that cover different scenarios:
1. Test case 1: Start with a logged-in user. Search for a specific product, select it,
and verify that it is added to the cart correctly.
2. Test case 2: Start with a logged-in user. Search for a product that is out of stock,
select it, and verify that the system displays an appropriate message indicating
the unavailability of the product.
3. Test case 3: Start with a logged-out user. Attempt to add a product to the cart
and verify that the system prompts the user to log in.
4. Test case 4: Start with a logged-in user. Add multiple products to the cart,
ensuring that the system updates the cart with all selected products correctly.
By designing test cases that correspond to different use case scenarios, use case testing
ensures that the system behaves as expected during the execution of specific use cases.
This technique helps validate the system's compliance with the requirements outlined
in the use cases and ensures a satisfactory user experience for each use case.
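A compressed sketch of those scenarios as automated checks, assuming a hypothetical
ShoppingSession model (the class, its inventory, and its messages are all invented to mirror the
four test cases above):

    class ShoppingSession:
        """Hypothetical model of the 'Add to Cart' use case."""
        STOCK = {"laptop": 3, "monitor": 0}  # invented inventory

        def __init__(self, logged_in):
            self.logged_in = logged_in
            self.cart = []

        def add_to_cart(self, product):
            if not self.logged_in:
                return "please log in"
            if self.STOCK.get(product, 0) == 0:
                return "out of stock"
            self.cart.append(product)
            return "added"

    # Test case 1: logged-in user adds an available product.
    assert ShoppingSession(logged_in=True).add_to_cart("laptop") == "added"

    # Test case 2: logged-in user selects an out-of-stock product.
    assert ShoppingSession(logged_in=True).add_to_cart("monitor") == "out of stock"

    # Test case 3: logged-out user is prompted to log in.
    assert ShoppingSession(logged_in=False).add_to_cart("laptop") == "please log in"

    # Test case 4: multiple products all end up in the cart.
    session = ShoppingSession(logged_in=True)
    session.add_to_cart("laptop")
    session.add_to_cart("laptop")
    assert len(session.cart) == 2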
Structure-based Techniques:
1. Statement Coverage: Ensures that every statement in the source code is executed at least
once during testing.
Example of Statement Coverage:-
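A minimal sketch using an invented grade function: one test is enough to execute every
statement at least once, which is exactly what statement coverage measures (and also its
weakness).

    def grade(score):
        result = "fail"          # statement 1
        if score >= 50:          # statement 2
            result = "pass"      # statement 3
        return result            # statement 4

    # grade(75) executes statements 1-4, so this single test already achieves
    # 100% statement coverage, even though the score < 50 outcome was never
    # exercised as a distinct result.
    assert grade(75) == "pass"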
2. Decision Coverage: Ensures that every decision or branch in the source code is executed
at least once during testing.
Example for Decision Coverage:-
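A minimal sketch reusing the same invented grade function: decision coverage additionally
demands that the if condition be evaluated both ways, so a second test is needed.

    def grade(score):
        result = "fail"
        if score >= 50:      # decision: must be evaluated both True and False
            result = "pass"
        return result

    assert grade(75) == "pass"  # decision evaluates True
    assert grade(30) == "fail"  # decision evaluates False (needed for decision coverage)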
Error Guessing: Error guessing is an experience-based technique in which testers use their
experience and intuition to anticipate where defects are likely to occur and design test cases
to expose them.
Example for Error Guessing:-
Imagine a software application that allows users to register for an online shopping platform.
Testers with experience in similar registration processes can use Error Guessing to
anticipate potential errors or defects. They may guess that users might encounter issues
with:
1. Invalid email format: Testers can design test cases to intentionally enter email
addresses with incorrect formats, such as missing the "@" symbol or including
invalid characters.
2. Weak password validation: Testers can create test cases to test the application's
response to weak passwords, such as using easily guessable passwords or
passwords that do not meet the required complexity criteria.
3. Error handling: Testers can explore the system's response to unexpected scenarios,
such as entering special characters or excessively long inputs in registration fields.
4. Duplicate username: Testers can intentionally attempt to register with a username
that already exists in the system to verify if the application properly detects and
handles such cases.
By using their experience and intuition, testers can design test cases specifically targeting
these potential errors. Through Error Guessing, they can uncover defects or issues that
might not be easily identified using other formal testing techniques. This technique helps
improve the overall quality of the software by anticipating and addressing potential
problems based on testers' experience.
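As a sketch, the first guess above (invalid email formats) could be turned into concrete checks
against a hypothetical, deliberately simple validator:

    import re

    # Deliberately simple pattern, invented for the example.
    PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    def is_valid_email(address):
        """Hypothetical validator for the registration form."""
        return PATTERN.fullmatch(address) is not None

    # Inputs an experienced tester would "guess" might break registration.
    guessed_bad_inputs = [
        "user.example.com",   # missing the @ symbol
        "user@@example.com",  # doubled @
        "user@example",       # missing top-level domain
        "",                   # empty input
    ]

    for address in guessed_bad_inputs:
        assert not is_valid_email(address), f"accepted bad input: {address!r}"

    assert is_valid_email("user@example.com")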
Exploratory Testing: Testers dynamically design, execute, and evaluate tests based on their
knowledge and intuition, without predefined test cases. They explore the software, perform
ad-hoc tests, and adapt their approach based on immediate feedback. This technique helps quickly
find defects and understand software behavior.
Example for Exploratory Testing:-
Imagine a mobile banking application that allows users to perform various transactions,
such as checking account balance, transferring funds, and paying bills. Testers can apply
Exploratory Testing to gain insights into the application's behavior and identify potential
defects.
1. Testers launch the mobile banking application and start exploring its user interface
and features. They interact with different screens, buttons, and menus to familiarize
themselves with the application's functionality.
2. Testers perform ad-hoc tests by trying different combinations of actions and inputs.
For example, they may transfer funds from one account to another while
simultaneously checking the account balance to ensure that the transaction is
reflected correctly.
3. While performing the tests, testers pay attention to the application's response,
including any error messages, delays, or unexpected behaviors. They use their
knowledge and intuition to identify potential issues or defects, such as incorrect
calculations, missing validation checks, or UI inconsistencies.
4. Testers adapt their testing approach based on immediate feedback and insights
gained during testing. If they discover an issue, they may investigate further,
perform additional tests, or modify their test scenarios to uncover the root cause or
related defects.
5. Throughout the exploratory testing session, testers document their observations,
findings, and any defects encountered. They can provide detailed feedback to the
development team, helping them understand the issues and improve the software's
quality.
Exploratory Testing allows testers to dynamically explore and evaluate the software based
on their knowledge and intuition. By performing ad-hoc tests and adapting their approach,
they can quickly identify defects, gain a better understanding of the software's behavior,
and contribute valuable insights to improve the application's overall quality.
What is STLC?
STLC, or Software Testing Life Cycle, is a structured approach to conducting software testing
activities. It consists of a set of phases and activities that are designed to ensure the quality,
reliability, and functionality of the software being developed. The primary goal of STLC is to
identify defects, validate the software against predefined requirements, and ensure that it performs
as intended.
What are the activities in Test Planning?
• Define Testing Objectives: Clearly state testing goals aligned with project requirements
and stakeholder expectations.
• Identify Scope and Test Coverage: Determine which features, functionalities, and modules
to test and establish coverage criteria.
• Create a Test Strategy: Develop an overarching approach for testing, including levels,
types, techniques, and any constraints.
• Estimate Test Effort and Resources: Assess the effort needed for testing considering
software complexity, available resources, and time/budget constraints. Allocate resources
accordingly.
• Define Test Deliverables: Identify and document test deliverables, such as Test Plans, Test
Cases, Test Scripts, and Test Data, specifying their format, structure, and content.
• Create a Test Schedule: Establish a detailed timeline for testing, considering dependencies
and allowing sufficient time for test execution, defect management, and retesting.
• Identify Test Environments and Tools: Determine the required test environments,
including hardware, software, and network configurations, and identify relevant test tools
and frameworks.
• Define Test Entry and Exit Criteria: Establish criteria for starting testing (entry criteria)
and determining completion (exit criteria), including development completion,
environment readiness, and quality thresholds.
• Analyze and Mitigate Risks: Identify potential risks affecting testing, assess their severity
and likelihood, and define strategies to minimize their impact.
• Establish Communication and Collaboration: Set up effective channels to communicate
with stakeholders, development team members, and other relevant parties, ensuring
awareness of test planning activities, timelines, and expectations.
What are the reference documents for Test Planning?
1. Requirements Documents
2. Business or Functional Specifications
3. Use Cases or User Stories
4. Design Documents
5. Test Strategy
6. Test Case Templates
7. Test Data Requirements
8. Project Schedule
9. Defect Management Process
10. Test Environment Specifications
11. Test Exit Criteria