How Would You Approach the Test Planning?
2. Design the Test Strategy: Define Scope of Testing
The target clients and end users are people searching 24/7 for the optimal time, cost, route, and particular service type, as well as aircraft condition and comfort in traveling.
Key features:
Before the start of any test activity, the scope of testing should be known, with rough estimates and thoroughly refined user stories to be developed and tested, to make sure we are delivering the best possible product.
In contrast, the components of the system that will not be tested also need to be clearly defined as being "out of scope."
Get detailed customer requirements, with all necessary open questions raised and clarified later on
Time and effort needed to implement them
Rough Calculations of Project Budget
Get a ready-to-use product specification while remaining open to CRs (change requests)
Skills & talent of QA team
Identify Testing Type
Each testing type is aimed at identifying a specific type of product bug, but all testing types share one common goal: early detection of all defects before releasing the product.
Depending on the level:
Unit testing
Integration testing
System testing
API testing
Functional UI testing
Non-functional testing (Performance, Load, Volume, Usability)
Test Logistics
To select the right member for a specified task, you have to consider whether their skills qualify them for the task, and also estimate the project budget.
Person having the following skills is ideal for performing software testing:
Test requirements and specifications are refined and ready + QA engineers, depending on their role and duties, are hired and onboarded (Human Resources) + the test environment is deployed and replicates the Prod/Staging environment as closely as possible.
The Test Objective is the overall goal and achievement of the test execution. The objective of testing is to find as many software defects as possible and to ensure that the software under test is free of critical bugs before release.
1. List all the software features (functionality, performance, GUI, ...) that may need to be tested.
2. Define the final goal of the test based on the features above.
Test Criteria is a standard or rule on which a test procedure or test judgment can be based.
There are two types of test criteria, as follows:
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during testing,
the active test cycle will be suspended until the criteria are resolved.
Test Plan Example: If your team members report that 35% of test cases have failed, you should suspend testing until the development/AQA team fixes the failed cases.
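The suspension rule above can be sketched as a small helper. The function name, the report shape, and the 35% default threshold (taken from the example) are illustrative assumptions, not part of any real framework.

```javascript
// Sketch of a suspension-criteria check. The 35% failure threshold
// follows the example above; the report shape is an assumption.
function shouldSuspendTesting(report, failureThreshold = 0.35) {
  const { executed, failed } = report;
  if (executed === 0) return false; // nothing has been run yet
  return failed / executed >= failureThreshold;
}

// 40 failed out of 100 executed -> 40% >= 35%, so suspend:
console.log(shouldSuspendTesting({ executed: 100, failed: 40 })); // true
// 20 failed out of 100 executed -> 20% < 35%, keep testing:
console.log(shouldSuspendTesting({ executed: 100, failed: 20 })); // false
```

In practice the threshold would come from the test plan, not be hard-coded.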
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria are
the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 93% of all critical test cases must pass.
Some methods of defining exit criteria are by specifying a targeted run rate and pass rate.
Run rate is the ratio between the number of test cases executed and the total test cases in the test specification. For example, if the test specification has 120 TCs in total but the tester executed only 100 TCs, the run rate is 100/120 = 0.83 (83%).
Pass rate is the ratio between the number of test cases passed and the test cases executed. For example, if 80 of the 100 executed TCs passed, the pass rate is 80/100 = 0.8 (80%).
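The two rates can be computed directly from those counts. A minimal sketch with illustrative function names, using the same numbers as the examples above:

```javascript
// Run rate: executed test cases over total test cases in the spec.
function runRate(executed, total) {
  return executed / total;
}

// Pass rate: passed test cases over executed test cases.
function passRate(passed, executed) {
  return passed / executed;
}

// 100 of 120 TCs executed, 80 of those passed:
console.log(runRate(100, 120).toFixed(2));  // "0.83"
console.log(passRate(80, 100).toFixed(2));  // "0.80"
```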
5. Resource Planning
A testing environment is a setup of software and hardware on which the testing team executes test cases. The test environment consists of the PROD environment (real business) and the user environment, as well as physical environments such as the server, the front-end running environment, and the database server.
To finish this task, you need strong cooperation between the Test Team and the DevOps Team. Here are some recommended questions; of course, you can ask other questions if you need to.
What is the maximum number of user connections this website can handle at the same time?
What are hardware/software requirements to install this website?
Does the user's computer need any particular setting to browse the website?
Test Scripts
Test Data
Test Traceability Matrix
Error logs and execution logs.
Test Results/reports
Defect Report
Test procedures guidelines
Release notes
What different sorts of information might you ask the product team/developers for?
Team/situation:
1. How experienced is the engineering team? How familiar is the team with the problem domain?
2. Is the team comfortable with ambiguity?
3. Is the team comfortable interacting directly with end-users and customers?
4. Does the team possess a good mix of viewpoints, backgrounds, and communication styles?
5. What is the history of the team? Are they coming off a positive experience with high
momentum, or a negative experience with no momentum?
Goals:
1. How is development tied to the goals of the business, and how will we measure our impact?
2. How is development tied to the goals of our users/customers, and how will we measure that impact?
3. As we discover new information, have we refined the goal appropriately?
Problem solving questions:
1. Does the team have a shared understanding of the problem we are solving? Are we speaking
the same language? Does the team have a shared understanding of the problem’s root cause?
2. How much time and resources do we have to solve the problem? Are we running out of either? Do we share a similar sense of urgency?
3. If we are releasing a feature for an existing product, have we identified the feature as a basic must-have "table stakes" feature, a performance feature, or a differentiator?
4. Would some reframing of our goals and/or KPIs allow us to more effectively understand the problem from the perspective of the business? If this is impossible, is the team OK with the ambiguity?
5. Have we leveraged all of our available sources of data to understand the problem, current workarounds, customer behavior, and competitive options?
Prioritization:
1. Does the team have a shared understanding of our current priorities, and our near-term prioritization and sequencing of work? Are we in agreement that this is the best plan of action?
2. Is the team comfortable with the rough sizing (estimation) of prioritized items?
3. Are we currently on a good trajectory to deliver the best solution possible given our current constraints?
4. To what extent are edge cases and newfound dependencies influencing prioritization?
5. Do we have a shared understanding of the target fidelity for the next iteration?
6. Was the team involved in prioritizing work and developing the guidelines around how work was sequenced? If their involvement was minimal, are they ok with that?
Users/Customers
1. What is the current customer attitude towards the problem? Is this something that users have been expecting for years?
2. Are we investing our time such that we are maximizing value to our customers? How would an average customer respond if they were to participate in our last meeting or last discussion? Would they call our work valuable? Is our work being described and framed in terms of its utility and value to customers?
3. At what point will we expose our solution to customers using their data, in their context, and with their day-to-day organic workflows?
4. Once released to users, how will we gather feedback such that it is actionable and timely? Do we intend to act on this feedback? Will our customers expect us to act on their feedback immediately?
5. Are our usability testing methods matched with the phase of the solution? Are they generating statistically significant data appropriate for the phase goal? Are we using a good mix of qualitative and quantitative methods?
6. How will we measure what our customers do in addition to what they say?
7. What challenges might we face when gathering customer feedback? How can we address these challenges?
8. How will we make customers aware of this feature? Is this a feature with limited appeal to our customer base? How will we measure adoption as distinct from usability or feature/need fit? Is driving adoption of the feature in scope, and if so, what is the plan to drive adoption?
Dependencies
1. Will our solution disrupt our customers in the short term? How will this be managed, and what data will we need to feel confident in our management strategy?
2. Will our solution touch other parts of the product? Are we at risk of negatively impacting the user experience elsewhere?
Style of work
1. Do we have clear agendas for the next week’s worth of meetings? Do any meetings need to be
repurposed to meet current challenges? Can any meetings be canceled?
2. Are meetings managed for maximum utility? Who documents the decisions and action items
resulting from meetings? Are they conducted in conducive settings, with the right tools and
participants?
3. Do we have a clear idea of individual responsibilities? When there are overlaps, have we
discussed potential conflicts?
4. Does our team have the requisite information to make most decisions regarding the solution
autonomously? Or, are we losing cycles due to ambiguous goals?
5. Are we limiting the amount of extraneous and “noisy” data? Does the team feel protected from
any competing agendas and politics?
6. Are we calling things by the same name? Have we developed a common vocabulary to discuss our solution, the problem, and our assumptions?
7. Do we have momentum? Is that sense of momentum and progress shared by all members of the team? Can the team focus on doing instead of strategizing? Is everyone inspired? Is anyone unclear about what their week looks like? Is momentum building or waning? How can we recover from a recent drop in momentum?
8. Is anyone on the team blocked? What can be done to remove those blockers? Is our process suffering from any bottlenecks? What can be done to remove those bottlenecks?
9. Does the team feel focused and productive? Are we having fun?
10. Is the work suitably decomposed so as to limit work in progress? Does the team share a solid definition of done?
11. Has the team formalized its hand-offs, stages, and agreed-upon process?
12. Have factions formed within the team and, if so, are these factions healthy? Do different points of view drive value to the customer, or should we work to reduce personal bias? Can we harness these differences for good? Can any back-channels be made public?
13. Is the team collaborating and communicating in an effective manner? Are tools being used consistently?
14. Can I provide adequate information to other stakeholders regarding the status and focus of the effort? If a rough schedule is required, is that available? Are we maintaining a burndown chart? How does my work fit into the overall roadmap?
● Different test types / phases you would use for testing
Testing Phases:
Phase 1
Unit testing
Phase 2
API testing
Functional testing from UI
E2E
In this phase of software testing, a tester mainly works from the requirements and performs testing on key scenarios and workflows.
Phase 3
In this phase, Usability Testing, Exploratory Testing, and User Acceptance Testing come into play.
Phase 4
This phase is based on the non-functional requirements of the software. These include:
Performance
Security
Stability
Scalability
Data migration
Check Filter
Create Custom Filter with data set - 1
Cancel Filter creation -3
Close Filter page -3
Weather observations around the world
Wind Speed -2
Cloud cover-2
Surface Pressure-2
Bookmarks
Settings
Map options -1
Map appearance -1
Label type -2
Label appearance -2
Unit of measure -2
Restore default settings -3
Close Settings -4
Remove all -2
Close Tab -4
API testing:
From the site, tapping on an airport gives information about the airport's departures and arrivals.
For the airport in Umeå (UME), find the GET requests that retrieve information about flights that are arriving at the airport and departing from the airport.
Write briefly about the request sent and the structure of the payload, and describe the different tests that you would add for testing the API.
GET /api/airport/times/UME/arrivals
GET /api/flights/arrival?airport=UME&begin=<some_value>&end=<some_value>&limit=<number>
Payload:
{
  "airportId": "UUID",
  "arrivalAirportCandidatesCount": <num>,
  "callsign": "MayDay",
  "departureAirportCandidatesCount": <num>,
  "estArrivalAirport": "UME",
  "estArrivalAirportHorizontalDistance": <num>,
  "estArrivalAirportVerticalDistance": <num>,
  "firstSeen": <num>,
  "lastSeen": <num>
}
Example: /api/flights/arrival?airport=EBCI&begin=1610675897&end=1610812697&limit=10&offset=0
GET /api/airport/times/UME/departure
GET /api/flights/departure?airport=UME&begin=<some_value>&end=<some_value>&limit=<number>
Payload:
{
  "airportId": "UUID",
  "departureAirportCandidatesCount": <num>,
  "callsign": "MayDay",
  "arrivalAirportCandidatesCount": <num>,
  "estDepartureAirport": "UME",
  "estDepartureAirportHorizontalDistance": <num>,
  "estDepartureAirportVerticalDistance": <num>,
  "firstSeen": <num>,
  "lastSeen": <num>
}
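A basic positive API test is checking that every documented field is present in the response. Below is a minimal sketch, assuming the field list from the departure payload above; the helper name and the sample values are illustrative.

```javascript
// Fields expected in a departure payload, per the example above.
const REQUIRED_FIELDS = [
  "airportId",
  "departureAirportCandidatesCount",
  "callsign",
  "arrivalAirportCandidatesCount",
  "estDepartureAirport",
  "estDepartureAirportHorizontalDistance",
  "estDepartureAirportVerticalDistance",
  "firstSeen",
  "lastSeen",
];

// Returns the list of documented fields missing from a payload.
function missingFields(payload) {
  return REQUIRED_FIELDS.filter((f) => !(f in payload));
}

// Illustrative sample payload with every documented field present:
const sample = {
  airportId: "some-uuid",
  departureAirportCandidatesCount: 1,
  callsign: "MayDay",
  arrivalAirportCandidatesCount: 2,
  estDepartureAirport: "UME",
  estDepartureAirportHorizontalDistance: 100,
  estDepartureAirportVerticalDistance: 50,
  firstSeen: 1610675897,
  lastSeen: 1610812697,
};
console.log(missingFields(sample)); // []
```

In a real suite this check would run against the live response body, e.g. inside a cy.request() callback.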
Test cases:
[N] Incorrect begin time
[N] Incorrect end time
[N] Exceeded limit number
[N] Non-existing airport name
[N] Retrieve data as a non-authorized user
[N] Incorrect token
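The negative cases above can be mirrored by a client-side parameter check. This is a sketch assuming illustrative validation rules (airport codes of 3-4 uppercase letters, Unix-epoch integers with end >= begin, and a limit of at most 100), none of which are confirmed by the actual API.

```javascript
// Hypothetical validation of the arrival query parameters shown
// earlier; every rule here is an assumption for illustration.
function validateArrivalQuery({ airport, begin, end, limit }) {
  const errors = [];
  // Assumed: IATA/ICAO-style code, 3-4 uppercase letters.
  if (!/^[A-Z]{3,4}$/.test(airport || "")) errors.push("invalid airport code");
  // Assumed: non-negative Unix-epoch integers, end not before begin.
  if (!Number.isInteger(begin) || begin < 0) errors.push("incorrect begin time");
  if (!Number.isInteger(end) || end < begin) errors.push("incorrect end time");
  // Assumed: page size between 1 and 100.
  if (!Number.isInteger(limit) || limit < 1 || limit > 100) errors.push("limit out of range");
  return errors;
}

// A well-formed query produces no errors:
console.log(validateArrivalQuery({ airport: "UME", begin: 1610675897, end: 1610812697, limit: 10 })); // []
// An end time before the begin time is rejected:
console.log(validateArrivalQuery({ airport: "UME", begin: 200, end: 100, limit: 10 })); // ["incorrect end time"]
```

Each negative API test would then assert that the server rejects the same malformed input, typically with a 400-range status code.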
Automation test

describe('MyTestSuite', function () {
  it('Verify page is launched', function () {
    cy.visit('https://www.hajper.com/sv')
    cy.title().should('eq', 'HAJPER')
  })

  it('Verify that the buttons in the Regulation Header (as in the image) are visible in the header', function () {
    cy.get('.nav').contains('Spelpaus')
    cy.get('.nav').contains('Spelgranser')
    cy.get('.nav').contains('Sjalvtest')
    cy.get('.nav').contains('18+')
  })

  it('Navigate to route /casino/explore, click the love icon of any game, and check that a notification is shown at the top of the page', function () {
    // cy.visit assumes baseUrl is configured; cy.route only stubs network calls and cannot navigate
    cy.visit('/casino/explore')
    cy.get('.love-icon').first().click()
    // Assumed notification text: a game can be marked as loved only if you are logged in
    cy.contains('logged in').should('be.visible')
  })
})