UNIT 2
TEST PLAN
TEST PLANNING
A Test Plan is a detailed document that catalogs the test strategies, objectives, schedule,
estimations, deadlines, and resources required to complete the project. Think of it as a blueprint
for running the tests needed to ensure the software is working correctly, controlled by test
managers.
A well-crafted test plan is a dynamic document that changes according to progressions in
the project and stays current at all times.
It is the point of reference based on which testing activities are executed and coordinated
among a QA team.
The test plan is also shared with Business Analysts, Project Managers, Dev teams, and
anyone associated with the project. This mainly offers transparency into QA activities so that
all stakeholders know how the software will be tested.
A test plan serves several purposes:
• They help individuals outside the QA teams (developers, business managers, customer-facing
teams) understand exactly how the website or app will be tested.
• They offer a clear guide for QA engineers to conduct their testing activities.
• They detail aspects such as test scope, test estimation, strategy, etc.
• Collating all this information into a single document makes it easier to review by management
personnel or reuse for other projects.
Key components of a test plan include:
• Scope: Details the objectives of the particular project. It also details the user scenarios to be used in
tests. The scope can specify scenarios or issues the project will not cover, if necessary.
CCS366-UNIT 2/DMIEC
• Schedule: Details start dates and deadlines for testers to deliver results.
• Resource Allocation: Details which tester will work on which test.
• Tools: Details what tools will be used for testing, bug reporting, and other relevant activities.
• Defect Management: Details how bugs will be reported, to whom, and what each bug report
needs to be accompanied by. For example, should bugs be reported with screenshots, text logs,
or videos of their occurrence in the code?
• Risk Management: Details what risks may occur during software testing and what risks the
software itself may suffer if released without sufficient testing.
• Exit Parameters: Details when testing activities must stop. This part describes the expected
results from the QA operations, giving testers a benchmark to compare actual results.
A test plan is created in the following steps:
1. Product Analysis
2. Designing Test Strategy
3. Defining Objectives
4. Establish Test Criteria
5. Planning Resource Allocation
6. Planning Setup of Test Environment
7. Determine test schedule and estimation
8. Establish Test Deliverables
1. Product Analysis
Start with learning more about the product being tested, the client, and the end-users of similar
products. Ideally, this phase should focus on answering the following questions:
2. Designing Test Strategy
• Risks and Issues: Describes all possible risks that may occur during testing – tight deadlines,
poor management, inadequate or erroneous budget estimate – and the effect of these risks on the
product or business.
• Test Logistics: Mentions the names of testers (or their skills) and the tests to be run by them.
This section also includes the tools and the schedule laid out for testing.
3. Defining Objectives
This phase defines the goals and expected results of test execution. Since all testing intends to
identify as many defects as possible, the objectives must include:
• A list of all software features (functionality, GUI, performance standards) that must be tested.
• The ideal result or benchmark for every aspect of the software that needs testing. This is the
benchmark to which all actual results will be compared.
4. Establish Test Criteria
Test Criteria refers to standards or rules governing all activities in a testing project. The two
main test criteria are:
• Suspension Criteria: Defines the benchmarks for suspending all tests. For example, if QA team
members find that 50% of all test cases have failed, then all testing is suspended until the
developers resolve all of the bugs that have been identified so far.
• Exit Criteria: Defines the benchmarks that signify the successful completion of a test phase or
project. The exit criteria are the expected results of tests and must be met before moving on to
the next stage of development. For example, 80% of all test cases must be marked successful
before a feature or portion of the software can be considered suitable for public use.
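The suspension and exit criteria above can be sketched as simple checks over test-run results. This is a minimal Python illustration, not a real QA tool; the 50% and 80% thresholds mirror the examples in the text, and the function names are invented.

```python
# Hypothetical sketch: evaluating suspension and exit criteria from test results.
# Real projects define their own thresholds in the test plan.

SUSPENSION_THRESHOLD = 0.50   # suspend testing if >= 50% of test cases fail
EXIT_THRESHOLD = 0.80         # phase exits when >= 80% of test cases pass

def should_suspend(passed: int, failed: int) -> bool:
    """True when the failure rate reaches the suspension criterion."""
    total = passed + failed
    return total > 0 and failed / total >= SUSPENSION_THRESHOLD

def meets_exit_criteria(passed: int, failed: int) -> bool:
    """True when the pass rate reaches the exit criterion."""
    total = passed + failed
    return total > 0 and passed / total >= EXIT_THRESHOLD

print(should_suspend(passed=40, failed=60))       # True  -> halt and fix defects
print(meets_exit_criteria(passed=85, failed=15))  # True  -> phase may close
```

In practice these counts would come from a test-management tool rather than hand-typed integers, but the decision logic is the same.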
5. Planning Resource Allocation
• This phase creates a detailed breakdown of all resources required for project completion.
Resources include human effort, equipment, and all infrastructure needed for accurate
and comprehensive testing.
• This part of test planning decides the quantity of resources the project requires (number of
testers and equipment). This also helps test managers formulate a correctly calculated
schedule and estimation for the project.
6. Planning Setup of Test Environment
Ideally, test environments should be real devices so testers can monitor software behavior in real
user conditions. Whether testing is manual or automated, nothing beats real devices: environments
installed with real browsers and operating systems are non-negotiable.
7. Determine Test Schedule and Estimation
Create a schedule to complete the testing tasks in the designated time with a specific amount of
effort.
Creating the schedule, however, does require input from multiple perspectives:
• Employee availability, number of working days, project deadlines, and daily resource
availability.
• Risks associated with the project, which have been evaluated in an earlier stage.
8. Establish Test Deliverables
Test Deliverables refer to a list of documents, tools, and other equipment that must be created,
provided, and maintained to support testing activities in a project.
A different set of deliverables is required before, during, and after testing.
Deliverables required before testing include documentation on:
• Test Plan
• Test Design
Deliverables required after testing include:
• Test Results
• Release Notes
• Defect Report
Creating a comprehensive test plan is crucial for ensuring the quality and reliability of software.
A test plan outlines the testing approach, scope, objectives, resources, and schedules for a
software testing project. Here are some important concepts to consider when developing a test
plan:
1. Scope and Objectives: Clearly define the scope of the testing effort, including the
features, functions, and components that will be tested. Outline the objectives of testing,
such as identifying defects, validating functionality, and ensuring compliance with
requirements.
2. Test Strategy: Describe the overall approach to testing, including the types of testing
(e.g., unit, integration, system, acceptance) that will be performed. Explain the rationale
behind choosing specific testing techniques and methodologies.
3. Test Environments: Specify the hardware, software, and network configurations needed
to conduct testing effectively. This includes details about development and testing
environments, database versions, operating systems, browsers, etc.
4. Test Deliverables: List the documents and artifacts that will be produced as part of the
testing process, such as test cases, test scripts, test data, defect reports, and test logs.
5. Test Schedule: Outline the timeline for different testing phases, including start and end
dates for each phase, milestones, and dependencies. Consider factors like resource
availability and development progress.
6. Test Resources: Identify the personnel, tools, and infrastructure required for testing. This
includes testers, developers, test automation tools, test management tools, and any
specialized hardware or software.
7. Risk Assessment: Identify potential risks that might impact the testing process or the
software quality. Assess the impact and likelihood of each risk and propose mitigation
strategies.
8. Test Cases and Test Scripts: Define the test cases that will be executed during testing.
Each test case should include the test scenario, input data, expected outcomes, and steps
to reproduce the test. For automated testing, provide the test scripts and tools to be used.
9. Test Data: Describe the data needed for testing, including sample data, test databases,
and any specific data conditions that need to be simulated.
10. Defect Management: Define the process for reporting, tracking, prioritizing, and
resolving defects. Include guidelines for defect classification, severity, and priority.
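The defect-management guidelines above (classification, severity, priority, required attachments) can be pictured as a minimal defect record. This is an illustrative Python sketch not tied to any real bug tracker; all field and class names are invented.

```python
# Illustrative sketch: a minimal defect record carrying the severity/priority
# classification and attachments a defect-management process typically requires.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):   # impact of the defect on the system
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

class Priority(Enum):   # urgency of fixing the defect
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    severity: Severity
    priority: Priority
    attachments: list = field(default_factory=list)  # screenshots, logs, videos

bug = DefectReport("BUG-101", "Login fails on submit",
                   Severity.CRITICAL, Priority.HIGH,
                   attachments=["login_error.png"])
print(bug.severity.name, bug.priority.name)  # CRITICAL HIGH
```

Keeping severity (impact) and priority (urgency) as separate fields reflects the distinction the guidelines draw: a cosmetic defect on the home page may be low severity but high priority.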
11. Test Execution: Detail how the testing will be executed, including any manual or
automated procedures, testing sequences, and regression testing strategies.
12. Exit Criteria: Specify the conditions that must be met for each testing phase to be
considered complete. This might include criteria related to test coverage, defect
resolution, and overall system stability.
13. Test Sign-off and Approval: Define the process for obtaining approval to proceed from
one testing phase to another or for releasing the software to production.
14. Documentation: Address how documentation will be managed throughout the testing
process. This includes version control for test plans, test cases, and other related
documents.
15. Change Management: Describe how changes to the software or requirements will be
managed during testing. Address how these changes may impact the test plan and
ongoing testing efforts.
Remember that a test plan should be tailored to the specific project and organization's needs. It
should be a living document that evolves as the project progresses and new information becomes
available. Regularly review and update the test plan to ensure it remains relevant and aligned
with the project's goals.
HIGH-LEVEL EXPECTATIONS
High-level expectations in a software test plan refer to the overarching goals and outcomes that
the testing effort aims to achieve. These expectations set the tone for the testing process and
provide a clear direction for the testing team. Here are some examples of high-level expectations
that could be included in a software test plan:
7. Timely Delivery: The testing process should be conducted efficiently and effectively to
avoid delays in the overall project timeline.
8. Documentation: All test cases, test scripts, defects, and testing outcomes should be well-
documented to provide clear traceability and insights into the testing process.
9. Communication: Regular communication should be maintained between the testing
team, development team, and stakeholders to keep everyone informed about testing
progress and outcomes.
10. Risk Mitigation: The testing process should identify and address potential risks that
could impact the software's quality, stability, or delivery.
11. Continuous Improvement: The testing process should be iterative, and feedback from
testing cycles should be used to improve the testing strategy and quality assurance
practices.
12. Compliance: If applicable, the software should adhere to industry regulations, standards,
and best practices.
13. Stakeholder Satisfaction: The testing effort should contribute to overall stakeholder
satisfaction by ensuring that the software meets or exceeds their expectations.
These high-level expectations should align with the project's objectives, requirements, and the
organization's quality standards. They provide a roadmap for the testing team and help establish
the overall testing strategy that guides the more detailed aspects of the test plan, such as test
cases, schedules, resources, and risk management.
A complete test plan helps people who are not involved in the test group understand why
product validation is needed and how it is to be performed. However, if the test plan is not
complete, it might not be possible to check how the software operates when installed on different
operating systems or when used with other software. To avoid this problem, IEEE specifies some
components that should be covered in a test plan. These components are listed in the table below.
[Table: IEEE test plan components (Component | Purpose)]
A carefully developed test plan facilitates effective test execution, proper analysis of errors, and
preparation of error reports. To develop a test plan, a number of steps are followed, as listed below.
▪ Set objectives of test plan: Before developing a test plan, it is necessary to understand its
purpose. But, before determining the objectives of a test plan, it is necessary to determine
the objectives of the software. This is because the objectives of a test plan are highly
dependent on that of software. For example, if the objective of the software is to accomplish
all user requirements, then a test plan is generated to meet this objective.
▪ Develop a test matrix: A test matrix indicates the components of the software that are to
be tested. It also specifies the tests required to check these components. The test matrix also
serves as proof that a test exists for all components of the software that require
testing. In addition, the test matrix indicates the testing method used to test
the entire software.
▪ Develop test administrative component: A test plan must be prepared within a fixed time
so that software testing can begin as soon as possible. The purpose of administrative
component of a test plan is to specify the time schedule and resources (administrative people
involved while developing the test plan) required to execute the test plan. However, if the
implementation plan (plan that describes how the processes in the software are carried out)
of software changes, the test plan also changes. In this case, the schedule to execute the test
plan also gets affected.
▪ Write the test plan: The components of a test plan such as its objectives, test matrix, and
administrative component are documented. All these documents are then collected together
to form a complete test plan. These documents are organized either in an informal or formal
manner.
▪ Overview: Describes the objectives and functions of the software to be performed. It also
describes the objectives of the test plan, such as defining responsibilities, identifying the test
environment, and giving complete details of the sources from which the information is
gathered to develop the test plan.
▪ Test scope: Specifies features and combination of features, which are to be tested. These
features may include user manuals or system documents. It also specifies the features and
their combinations that are not to be tested.
▪ Test methodologies: Specifies the types of tests required for testing features and
combination of these features such as regression tests and stress tests. It also provides
description of sources of test data along with how test data is useful to ensure that testing is
adequate such as selection of boundary or null values. In addition, it describes the procedure
for identifying and recording test results.
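The methodology section's mention of selecting boundary or null values can be made concrete with a small sketch. This is an invented Python example, not part of any standard library of test-data generators; the integer range [1, 100] is a made-up field constraint.

```python
# Hedged sketch: generating boundary and null test data for an input field
# that is specified to accept integers in [1, 100].

def boundary_values(lo: int, hi: int) -> list:
    """Classic boundary-value selection: just outside, on, and just inside
    each edge of the valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# The null case (None) is appended to exercise missing-input handling.
test_data = boundary_values(1, 100) + [None]
print(test_data)  # [0, 1, 2, 99, 100, 101, None]
```

Values just outside the range (0 and 101) are included deliberately: they check that invalid inputs are rejected, which a set of only-valid values cannot demonstrate.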
▪ Test phases: Identifies different types of tests such as unit testing, integration testing and
provides a brief description of the process used to perform these tests. Moreover, it identifies
the testers that are responsible for performing testing and provides a detailed description of
the source and type of data to be used. It also describes the procedure of evaluating test
results and describes the work products, which are initiated or completed in this phase.
▪ Test environment: Identifies the hardware, software, and automated testing tools required
to carry out the tests.
▪ Schedule: Provides detailed schedule of testing activities and defines the responsibilities
to respective people. In addition, it indicates dependencies of testing activities and the time
frames for them.
▪ Approvals and distribution: Identifies the individuals who approve a test plan and its
results. It also identifies the people to whom the test plan document(s) is distributed.
Incomplete and incorrect test cases lead to incorrect and erroneous test outputs. To avoid this, the
test cases must be prepared in such a way that they check the software with all possible inputs.
This process is known as exhaustive testing, and a test case able to perform exhaustive
testing is known as an ideal test case. Generally, a test case is unable to perform exhaustive testing;
therefore, a test case that gives satisfactory results is selected. In order to select a test case, certain
questions should be addressed.
To answer these questions, a test selection criterion is used that specifies the
conditions to be met by a set of test cases designed for a given program. For example, if the
criterion is to exercise all the control statements of a program at least once, then a set of test cases
that meets this condition should be selected.
The process of generating test cases helps to identify the problems that exist in the software
requirements and design. For generating a test case, firstly the criterion to evaluate a set of test
cases is specified and then the set of test cases satisfying that criterion is generated. There are two
methods used to generate test cases, which are listed below.
▪ Code-based test case generation: This approach, also known as structure-based test case
generation, assesses the software code itself to generate test cases. It considers only
the actual software code to generate test cases and is not concerned with the user
requirements. Test cases developed using this approach are generally used for performing
unit testing. These test cases can easily test statements, branches, special values, and
symbols present in the unit being tested.
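A tiny Python sketch of the code-based idea: the branches in the unit's own code drive the test cases. The function and its behavior are invented purely for illustration.

```python
# Invented example of structure-based test design: classify() has two
# branches, so a code-based suite needs at least one test per branch.

def classify(n: int) -> str:
    if n < 0:
        return "negative"
    return "non-negative"

# One test case per branch of the code under test:
assert classify(-5) == "negative"       # exercises the if-branch
assert classify(3) == "non-negative"    # exercises the fall-through branch
```

The tests here are derived by reading the code, not the requirements, which is exactly why this approach suits unit testing but cannot tell whether the code does what users actually need.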
▪ Specification-based test case generation: This approach uses specifications, which
indicate the functions that are produced by the software to generate test cases. In other
words, it considers only the external view of the software to generate test cases. It is
generally used for integration testing and system testing to ensure that the software is
performing the required task. Since this approach considers only the external view of the
software, it does not test the design decisions and may not cover all statements of a program.
Moreover, as test cases are derived from specifications, the errors present in these
specifications may remain uncovered.
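By contrast, a specification-based sketch derives its cases purely from the stated external behavior. The specification sentence and function below are invented for illustration.

```python
# Invented example: suppose the specification says "absolute_value(x)
# returns x with its sign removed". The test cases below are derived from
# that statement alone, without inspecting how the function is implemented.

def absolute_value(x: float) -> float:
    return -x if x < 0 else x   # internals irrelevant to spec-based tests

# Cases drawn purely from the specification's external behavior:
for given, expected in [(5, 5), (-5, 5), (0, 0)]:
    assert absolute_value(given) == expected
```

Note the limitation the text describes: if the specification itself were wrong, these tests would pass anyway, leaving the specification error uncovered.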
Several tools known as test case generators are used for generating test cases. In addition to test
case generation, these tools specify the components of the software that are to be tested. An
example of a test case generator is 'astra quick test', which captures business processes in a
visual map and generates data-driven tests automatically.
A test plan is neither related to the details of testing units nor does it specify the test cases to be
used for testing units. Thus, test case specification is done in order to test each unit separately.
Depending on the testing method specified in a test plan, the features of the unit to be tested are
determined. The overall approach stated in the test plan is refined into two parts: specific test
methods and the evaluation criteria. Based on these test methods and the criteria, the test cases to
test the unit are specified.
For each unit being tested, these test case specifications describe the test cases, required inputs
for test cases, test conditions, and the expected outputs from the test cases. Generally, it is required
to specify the test cases before using them for testing. This is because the effectiveness of testing
depends to a great extent on the nature of test cases.
Test case specifications are written in the form of a document. This is because the quality of test
cases is evaluated by performing a test case review, which requires a formal document. The
review of test case document ensures that test cases satisfy the chosen criteria and conform to the
policy specified in the test plan. Another benefit of specifying test cases in a formal document is
that it helps testers to select an effective set of test cases.
Developing a test strategy, which efficiently meets the requirements of an organization, is critical
to the success of software development in that organization.
The choice of software testing strategy is highly dependent on the nature of the developed
software. For example, if the software is highly data intensive then a strategy that checks
structures and values properly to ensure that all inputs given to the software are correct and
complete should be developed. Similarly, if it is transaction intensive then the strategy should be
such that it is able to check the flow of all the transactions. The design and architecture of the
software are also useful in choosing testing strategy. A number of software testing strategies are
developed in the testing process. All these strategies provide the tester a template, which is used
for testing. Generally, all testing strategies have following characteristics.
1. Testing proceeds in an outward manner. It starts from testing the individual units,
progresses to integrating these units, and finally, moves to system testing.
2. Testing techniques used during different phases of software development are different.
3. Testing is conducted by the software developer and by an independent test group (ITG).
4. Testing and debugging should not be used synonymously; however, any testing strategy
must accommodate debugging.
There are different types of software testing strategies, which are selected by the testers depending
upon the nature and size of the software. The commonly used software testing strategies are listed
below.
▪ Analytic testing strategy: This uses formal and informal techniques to assess and prioritize
risks that arise during software testing. It takes a complete overview of requirements, design,
and implementation of objects to determine the motive of testing.
▪ Model-based testing strategy: This strategy tests the functionality of the software
according to the real world scenario (like software functioning in an organization). It
recognizes the domain of data and selects suitable test cases according to the probability of
errors in that domain.
▪ Methodical testing strategy: It tests the functions and status of software according to the
checklist, which is based on user requirements. This strategy is also used to test the
functionality, reliability, usability, and performance of the software.
▪ Process-oriented testing strategy: It tests the software according to already existing
standards such as the IEEE standards. In addition, it checks the functionality of the
software by using automated testing tools.
▪ Dynamic testing strategy: This tests the software based on collective decisions of the
testing team. Along with testing, this strategy provides information about the software,
such as the test cases used for testing the errors present in it.
▪ Philosophical testing strategy: It tests the software assuming that any component of the
software can stop functioning anytime. It takes help from software developers, users and
systems analysts to test the software.
A testing strategy should be developed with the intent to provide the most effective and efficient
way of testing the software. While developing a testing strategy, some questions arise such as:
when and what type of testing is to be done? What are the objectives of testing? Who is
responsible for performing testing? What outputs are produced as a result of testing? The inputs
that should be available while developing a testing strategy are listed below.
The output produced by the software testing strategy includes a detailed document, which
indicates the entire test plan including all test cases used during the testing phase. A testing
strategy also specifies a list of testing issues that need to be resolved.
An efficient software testing strategy includes two types of tests, namely, low-level tests and
high-level tests. Low-level tests ensure correct implementation of small part of the source code
and high-level tests ensure that major software functions are validated according to user
requirements. A testing strategy sets certain milestones for the software such as final date for
completion of testing and the date of delivering the software. These milestones are important
when there is limited time to meet the deadline.
In spite of these advantages, there are certain issues that need to be addressed for successful
implementation of software testing strategy. These issues are discussed here.
▪ In addition to detecting errors, a good testing strategy should also assess portability and
usability of the software.
▪ It should specify software requirements in a quantifiable manner, such as outputs
expected from the software, test effectiveness, and mean time to failure, which should be
clearly stated in the test plan.
▪ It should improve testing method continuously to make it more effective.
▪ Test plans that support rapid cycle testing should be developed. The feedback from rapid
cycle testing can be used to control the corresponding strategies.
▪ It should develop robust software, which is able to test itself using debugging techniques.
▪ It should conduct formal technical reviews to evaluate the test cases and test strategy. The
formal technical reviews can detect errors and inconsistencies present in the testing
process.
Characteristics of STLC
• The Software Testing Life Cycle (STLC) is a fundamental part of the Software Development
Life Cycle (SDLC), but consists of only the testing phases.
• STLC starts as soon as requirements are defined or the software requirement document is
shared by stakeholders.
• STLC yields a step-by-step process to ensure quality software.
Phases of STLC
1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing Life
Cycle (STLC). In this phase, the quality assurance team understands the requirements, i.e., what is
to be tested. If anything is missing or not understandable, the quality assurance team meets
with the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage include:
• Reviewing the software requirements document (SRD) and other related documents
• Interviewing stakeholders to gather additional information
• Identifying any ambiguities or inconsistencies in the requirements
• Identifying any missing or incomplete requirements
• Identifying any potential risks or issues that may impact the testing process
• Creating a requirement traceability matrix (RTM) to map requirements to test cases
At the end of this stage, the testing team should have a clear understanding of the software
requirements and should have identified any potential issues that may impact the testing process.
This will help to ensure that the testing process is focused on the most important areas of the
software and that the testing team is able to deliver high-quality results.
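The requirement traceability matrix produced in this phase is, at its core, a mapping from requirements to the test cases that cover them. The Python sketch below is a simplified illustration; the requirement and test-case IDs are hypothetical.

```python
# Minimal sketch of a requirement traceability matrix (RTM): each
# requirement ID maps to the test cases that trace back to it.

rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # no coverage yet -> a gap to flag early
}

def uncovered(matrix: dict) -> list:
    """Requirements that no test case traces to."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(rtm))  # ['REQ-003']
```

Flagging uncovered requirements at the requirement-analysis stage is precisely what lets the team focus testing on the most important areas before test design begins.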
2. Test Planning: Test Planning is the phase of the software testing life cycle where
all testing plans are defined. In this phase, the manager of the testing team calculates the estimated
effort and cost for the testing work. This phase starts once the requirement-gathering phase
is completed.
The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and techniques that will be used
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities
that will be performed, and a clear understanding of the testing objectives, scope, and
deliverables. This will help to ensure that the testing process is well-organized and that the testing
team is able to deliver high-quality results.
3. Test Case Development: The test case development phase starts once the test
planning phase is completed. In this phase, the testing team writes down the detailed test cases. The
testing team also prepares the required test data for the testing. Once the test cases are
prepared, they are reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage include:
• Identifying the test cases that will be developed
• Writing test cases that are clear, concise, and easy to understand
• Creating test data and test scenarios that will be used in the test cases
• Identifying the expected results for each test case
• Reviewing and validating the test cases
• Updating the requirement traceability matrix (RTM) to map requirements to test cases
TEST CASE:
A test case is a set of actions performed on a system to determine if it satisfies software
requirements and functions correctly. The purpose of a test case is to determine if different
features within a system are performing as expected and to confirm that the system satisfies all
related standards, guidelines and customer requirements. The process of writing a test case can
also help reveal errors or defects within the system.
Test cases are typically written by members of the quality assurance (QA) team or the
testing team and can be used as step-by-step instructions for each system test. Testing begins
once the development team has finished a system feature or set of features. A sequence or
collection of test cases is called a test suite.
Test cases define what must be done to test a system, including the steps executed in the
system, the input data values that are entered into the system and the results that are expected
throughout test case execution.
Test cases must be designed to fully reflect the software application features and functionality
under evaluation. QA engineers should write test cases so only one thing is tested at a time. The
language used to write a test case should be simple and easy to understand, active instead of
passive, and exact and consistent when naming elements. A well-formed test case typically
includes the following components:
• Test name. A title that describes the functionality or feature that the test is verifying.
• Test ID. Typically a numeric or alphanumeric identifier that QA engineers and testers use to
group test cases into test suites.
• Objective. Also called the description, this important component describes what the test
intends to verify in one to two sentences.
• References. Links to user stories, design specifications or requirements that the test is
expected to verify.
• Prerequisites. Any conditions that are necessary for the tester or QA engineer to perform the
test.
• Test setup. This component identifies what the test case needs to run correctly, such as app
version, operating system, date and time requirements and security specifications.
• Test steps. Detailed descriptions of the sequential actions that must be taken to complete the
test.
• Expected results. An outline of how the system should respond to each test step.
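The components listed above can be expressed as a simple record. This Python sketch is illustrative only; the field names follow the list in the text, while the sample login scenario and its values are invented.

```python
# Hedged sketch: the test case components above expressed as a dataclass.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str          # groups the case into a test suite
    name: str             # feature or functionality under verification
    objective: str        # what the test intends to verify
    prerequisites: list = field(default_factory=list)
    steps: list = field(default_factory=list)             # sequential actions
    expected_results: list = field(default_factory=list)  # one per step

login_case = TestCase(
    test_id="TC-AUTH-01",
    name="Valid login",
    objective="Verify that a registered user can sign in",
    prerequisites=["A registered user account exists"],
    steps=["Open the login page", "Enter valid credentials", "Click Sign in"],
    expected_results=["Login form shown", "Fields accept input",
                      "Dashboard loads"],
)
# Every test step should have a corresponding expected result:
assert len(login_case.steps) == len(login_case.expected_results)
```

Pairing each step with an expected result enforces the guideline that the document must state how the system should respond to every action, not just the final outcome.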
A well-written test case document is:
• Repeatable, meaning the document can be used to perform the test numerous times.
• Reusable, meaning the document can be reused to successfully perform the test again in the
future.
To achieve these goals, QA and testing engineers can use the following best practices:
• Prioritize which test cases to write based on project timelines and the risk factors of the
system or application.
• Create unique test cases and avoid irrelevant or duplicate test cases.
• Confirm that the test suite checks all specified requirements mentioned in the specification
document.
• Write test cases that are transparent and straightforward. The title of each test case should be
short.
• Test case steps should be broken into the smallest possible segments to avoid confusion
when executing.
• Test cases should be written in a way that allows others to easily understand them and
modify the document when necessary.
Functionality Test Case
This is a type of black box testing that can reveal whether an app's interface works with the rest
of the system and its users by identifying whether the functions that the software is expected to
perform succeed or fail. Functionality test cases are based on system specifications or user
stories, allowing tests to be performed without accessing the internal structures of the software.
These test cases are usually written by the QA team.
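As a minimal sketch, a black box functionality test exercises only inputs and observed outputs, never the internal structure. The `apply_discount` function below is a hypothetical system under test:

```python
# Black-box functional test: we exercise inputs and observe outputs only,
# without inspecting the function's internal structure.
# `apply_discount` is a hypothetical stand-in for the system under test.

def apply_discount(price: float, code: str) -> float:
    """System under test: applies a 10% discount for the code 'SAVE10'."""
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

def test_valid_discount_code():
    # Expected result: the discounted price is returned
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_invalid_discount_code():
    # Expected result: an unknown code leaves the price unchanged
    assert apply_discount(100.0, "BOGUS") == 100.0

test_valid_discount_code()
test_invalid_discount_code()
```

Note that each test function checks exactly one behavior, following the one-thing-at-a-time guideline above.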
Performance Test Case
These test cases help validate response times and confirm the overall effectiveness of the
system. Performance test cases include a strict set of success criteria and can be used to
understand how the system will operate in the real world. They are typically written by the
testing team and are often automated, because one system can demand hundreds of thousands
of performance tests.
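A performance test case can be sketched as a timed run with a strict pass/fail threshold. The `slow_search` operation and the 50 ms threshold below are illustrative assumptions:

```python
import time

def slow_search(items, target):
    """Hypothetical operation under test."""
    return target in items

def measure_response_time(fn, *args, runs=100):
    """Return the worst-case elapsed time in seconds over several runs."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        worst = max(worst, time.perf_counter() - start)
    return worst

# Strict success criterion: every run must finish within the threshold
THRESHOLD_SECONDS = 0.05
elapsed = measure_response_time(slow_search, list(range(10_000)), 9_999)
assert elapsed < THRESHOLD_SECONDS, f"response time {elapsed:.4f}s exceeded threshold"
```

Measuring the worst case rather than the average matches the strict success criteria the text describes.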
Unit Test Case
Unit testing involves analyzing individual units or components of the software to confirm that
each unit performs as expected. A unit is the smallest testable element of software. It often takes
a few inputs to produce a single output.
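For example, a unit test isolates one small function and checks its single output for given inputs. The `cart_total` unit and its tests below are illustrative:

```python
import unittest

def cart_total(prices):
    """The unit under test: a few inputs, one output."""
    return round(sum(prices), 2)

class CartTotalTest(unittest.TestCase):
    """Each test method checks exactly one thing."""

    def test_empty_cart_totals_zero(self):
        self.assertEqual(cart_total([]), 0)

    def test_totals_multiple_items(self):
        self.assertEqual(cart_total([19.99, 5.01]), 25.0)

# Run the unit tests programmatically and confirm they all pass
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```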
User Interface Test Case
This type of test case verifies that specific elements of the graphical user interface (GUI) look
and perform as expected. UI test cases can reveal errors in elements that the user interacts with,
such as grammar and spelling mistakes, broken links and cosmetic inconsistencies. UI tests often
require cross-browser runs to ensure an app performs consistently across different browsers.
These test cases are usually written by the testing team with some help from the design team.
Security Test Case
These test cases are used to confirm that the system restricts actions and permissions when
necessary to protect data. Security test cases often focus on authentication and encryption and
frequently use security-based techniques, such as penetration testing. The security team is
responsible for writing these test cases, if one exists in the organization.
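A minimal sketch of a security test case, using a hypothetical `delete_record` action and an in-memory user table, checks that an unauthorized role is refused and an authorized one is allowed:

```python
# Hypothetical access-control check: the system must restrict actions
# unless the caller holds the required role.

USERS = {"alice": {"role": "admin"}, "bob": {"role": "viewer"}}

def delete_record(username: str) -> bool:
    """Only admins may delete; everyone else is refused."""
    user = USERS.get(username)
    if user is None or user["role"] != "admin":
        raise PermissionError("action not permitted")
    return True

# Negative test: an unauthorized user must be rejected
try:
    delete_record("bob")
    blocked = False
except PermissionError:
    blocked = True
assert blocked, "security test failed: viewer was able to delete"

# Positive test: an authorized user succeeds
assert delete_record("alice") is True
```

The negative case is the important one here: security tests verify that forbidden actions actually fail.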
Integration Test Case
An integration test case is written to determine how the different software modules interact with
each other. Its main purpose is to confirm that the interfaces between different modules work
correctly. Integration test cases are typically written by the testing team, with input from the
development team.
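As an illustration, the integration test below verifies the interface between two hypothetical modules, an inventory module and an order service:

```python
# Integration test: verify the interface between two modules --
# a hypothetical inventory module and a hypothetical order service.

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        self.inventory.reserve(item, qty)   # the interface under test
        return {"item": item, "qty": qty, "status": "confirmed"}

inv = Inventory({"widget": 5})
orders = OrderService(inv)
order = orders.place_order("widget", 2)

assert order["status"] == "confirmed"
assert inv.stock["widget"] == 3   # both modules agree on the stock update
```

Each module may already pass its unit tests in isolation; the integration test checks that they agree when wired together.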
Database Test Case
This type of test case examines what is happening internally, helping testers understand where
the data is going in the system. Testing teams frequently use SQL queries to write database test
cases.
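Because such test cases are written with SQL queries, here is a minimal sketch against an in-memory SQLite database; the `orders` table and its values are hypothetical:

```python
import sqlite3

# An in-memory database stands in for the application's real database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99), (5.01)")
conn.commit()

# Database test case: SQL queries verify where the data ended up
row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert row_count == 2, "expected two inserted rows"

grand_total = conn.execute("SELECT SUM(total) FROM orders").fetchone()[0]
assert round(grand_total, 2) == 25.0, "stored totals do not add up"

conn.close()
```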
Usability Test Case
A usability test case reveals how users naturally approach and use an application. Instead of
providing step-by-step details, it gives the tester a high-level scenario or task to complete. These
test cases are typically written by the design and testing teams and should be performed before
user acceptance testing.
User Acceptance Test Case
These test cases focus on the user acceptance testing environment. They are broad enough to
cover the entire system, and their purpose is to verify whether the application is acceptable to the
user. User acceptance test cases are prepared by the testing team or product manager and then
used by the end user or client. These tests are often the last step before the system goes to
production.
Regression Testing
This test confirms recent code or program changes have not affected existing system
features. Regression testing involves selecting all or some of the executed test cases and running
them again to confirm the software's existing functionalities still perform appropriately.
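A regression run can be sketched as re-running a registered set of previously executed test cases after a change. The `@regression` registry below is an illustrative pattern, not a specific framework:

```python
# Regression sketch: after a code change, re-run previously passing tests
# to confirm existing behavior is intact. The registry is hypothetical.

REGRESSION_SUITE = []

def regression(fn):
    """Mark a test for inclusion in the regression suite."""
    REGRESSION_SUITE.append(fn)
    return fn

def add(a, b):          # an existing feature, recently refactored
    return a + b

@regression
def test_add_integers():
    assert add(2, 3) == 5

@regression
def test_add_negatives():
    assert add(-1, -1) == -2

def run_regression():
    """Re-run every registered test; any failure signals a regression."""
    for test in REGRESSION_SUITE:
        test()
    return len(REGRESSION_SUITE)

assert run_regression() == 2
```

In practice the suite is usually a selected subset of executed test cases, chosen by risk, rather than every test ever written.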
The Key Differences Between A Test Case And A Test Scenario Include:
• A test case provides a set of actions performed to verify that specific software features are
performing correctly. A test scenario is any feature that can be tested.
• A test case is beneficial in exhaustive testing -- a software testing approach that involves
testing every possible data combination. A test scenario is more agile and focuses on the
end-to-end functionality of the software.
• A test case looks at what to test and how to test it while a test scenario only identifies what to
test.
• A test case requires more resources and time for test execution than a test scenario.
• A test case includes information such as test steps, expected results and data while a test
scenario only includes the functionality to be tested.
You can configure automated tests or test sets to run without any user interaction by creating
a test schedule. The schedule can be a one-time run, or it can recur on specific days of the
week.
1. On the Test Schedules screen, you can perform various tasks with test schedules.
To access this screen, select Test Management > Test Schedules on the main QAComplete
toolbar.
2. Click the Recent Items button to display the items that have been changed lately.
Click an item in that list to go to the corresponding Edit forms and edit its properties.
Additional actions
Option Description
Date Created The date and time when the test schedule was created.
Date Updated The date when the test schedule was last updated.
Updated By The user who last updated the test schedule.
Date Last Launched The date when the test schedule was last launched.
Start Date The date when the test schedule becomes active.
End Date The date when the test schedule becomes inactive.
Start Time The time when the test run starts according to the test schedule.
Agent The automation agent used to run tests for this schedule.
Link Run to Release The release to which the scheduled test run is linked.
Link Run to Configuration The configuration to which the scheduled test run is linked.
A bug report typically includes components such as:
1. Title/Bug ID
2. Environment
3. Steps to reproduce a Bug
4. Expected Result
5. Actual Result
1. Click on the “Add to Cart” button on the Homepage (this takes the user to the Cart).
2. Check if the same product is added to the cart.
4. Expected Result
This component of Bug Report describes how the software is supposed to function in the given
scenario. The developer gets to know what the requirement is from the expected results. This
helps them gauge the extent to which the bug is disrupting the user experience.
Describe the ideal end-user scenario, and try to offer as much detail as possible. For the above
example, the expected result should be:
“The selected product should be visible in the cart.”
5. Actual Result
Detail what the bug is actually doing and how it is a distortion of the expected result.
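The bug report components above can be captured as structured data. The values below follow the "Add to Cart" example in the text and are illustrative:

```python
# A bug report as a structured record, mirroring the components above.
# All field values are illustrative.

bug_report = {
    "title": "Selected product missing from cart",
    "bug_id": "BUG-101",
    "environment": "Chrome 120, Windows 11, app v2.1",
    "steps_to_reproduce": [
        "Click 'Add to Cart' on the Homepage",
        "Open the Cart page",
    ],
    "expected_result": "The selected product should be visible in the cart.",
    "actual_result": "The cart is empty; the product was not added.",
}

# A report is actionable only if every component is filled in
required = ["title", "bug_id", "environment", "steps_to_reproduce",
            "expected_result", "actual_result"]
assert all(bug_report.get(key) for key in required)
```

Keeping the expected and actual results side by side makes the distortion the bug causes immediately visible to the developer.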
The range of debugging tools offered by BrowserStack's mobile app and web testing products
is as follows:
• Live: Pre-installed developer tools on all remote desktop browsers, and Chrome developer tools
on real mobile devices (exclusive to BrowserStack)
• Automate: Screenshots, Video Recording, Video-Log Sync, Text Logs, Network Logs, Selenium
Logs, Console Logs
• App Live: Real-time Device Logs from Logcat or Console
• App Automate: Screenshots, Video Recording, Video-Log Sync, Text Logs, Network Logs,
Appium Logs, Device Logs, App Profiling
7. Bug Severity
Every bug must be assigned a level of severity and corresponding priority. This reveals
the extent to which the bug affects the system, and in turn, how quickly it needs to be fixed.
Levels of Bug Severity:
• Low: The bug can be fixed at a later date; other, more serious bugs take priority.
• Medium: The bug can be fixed in the normal course of development and testing.
• High: The bug must be resolved as soon as possible, as it adversely affects the system and
renders it unusable until it is fixed.
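As a sketch, these severity levels can drive a simple triage policy. The day counts below are illustrative assumptions, not a standard:

```python
# Hypothetical triage helper mapping the severity levels above to a
# fix-by window. The day counts are illustrative, not a standard.

FIX_WINDOW_DAYS = {
    "high": 1,     # resolve at the earliest: the system is unusable
    "medium": 7,   # fix in the normal course of development and testing
    "low": 30,     # can wait; more serious bugs take priority
}

def fix_window(severity: str) -> int:
    """Return the illustrative number of days allowed to fix a bug."""
    try:
        return FIX_WINDOW_DAYS[severity.lower()]
    except KeyError:
        raise ValueError(f"unknown severity: {severity!r}")

assert fix_window("High") == 1
assert fix_window("low") > fix_window("medium") > fix_window("high")
```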