
Test Management

Week 8
Objective
• Remember, the objective of testing will vary
depending on context:

• Find the important bugs


• Help make release decisions
• Identify issues and workarounds to help reduce support
costs
• Identify safe usage scenarios
• Discover product behaviour in certain scenarios
• Assess compliance vs. regulations, standards, specifications
• Demonstrate compliance to reduce liability
• Reverse engineer an existing solution
• Etc.
Context & Mission

• The kinds of information you are looking for will change during
the lifecycle of a project:

• Early in testing, we might be more “sympathetic”, and test that the software can do the easy things (before we try the hard things)

• Later in testing, we might be more “aggressive”, searching for bugs that would prevent release
Context & Mission

• The kinds of information you are looking for will change during
the lifecycle of the software product itself, e.g.:

• Prototype: Basic functionality established?


• Initial release: Is it fit for production? Can it be operationalized?
• Subsequent releases: What is the impact of the change?
Introduction to Test Management
• Definition: Test management ensures that new or modified service products meet business requirements by planning, executing, and monitoring testing activities.

Key Elements:

• Test Organization: Defines roles and responsibilities within a structured testing team to optimize the efficiency and effectiveness of testing efforts.
• Test Planning: Establishes a detailed approach to testing, including identifying test items, defining scope, scheduling, and assigning tasks.
• Detailed Test Design: Specifies test cases and test procedures required for comprehensive testing of software functionalities.
• Test Monitoring & Assessment: Ensures the testing process aligns with project goals by tracking progress, verifying outcomes, and identifying improvement areas.
• Product Quality Assurance: Maintains high software quality by overseeing testing activities.
Test Organization
• What is it?
• A structured approach to managing software testing, ensuring
systematic coverage and efficiency.

• Key Responsibilities:
• Establishing the test framework and defining explicit roles for test
execution.
• Developing testing policies and applying standards to maintain
consistency.
• Setting up and maintaining a test environment suitable for project
requirements.

• Importance:
• Facilitates seamless collaboration between testing teams and
developers.
• Reduces testing inefficiencies and improves software quality.
Test Planning

• Definition: A strategic document that outlines the objectives, methodology, and schedule of testing activities.

• Key Components:
• Identifying critical software functionalities that require testing.
• Outlining the scope of testing to define what is included and excluded.
• Preparing a detailed test plan and timeline to ensure timely execution.
• Assigning testing responsibilities to various team members.

• Why is it necessary?
• Helps mitigate risks by proactively identifying potential issues.
• Ensures resources are allocated efficiently for optimal test coverage.
Detailed Test Design & Specifications

• Purpose:
• Develops structured and repeatable test cases to validate software
functionality.
• Components:
• Requirement traceability to ensure test cases cover all functional
requirements.
• Well-defined test cases that specify inputs, expected outputs, and
execution conditions.
• Test execution procedures to outline how testing should be performed
systematically.
• Outcome:
• Ensures thorough testing coverage and consistency across multiple test
cycles.
Test Monitoring & Assessment

• Definition: Continuous tracking of test progress to maintain alignment with project requirements and deadlines.

• Key Tasks:
• Reviewing and analyzing test reports for accuracy and completeness.
• Assessing the integrity of the software configuration and changes.
• Validating that software functionality meets business requirements
through structured testing.

• Why is it important?
• Detects testing bottlenecks and inefficiencies.
• Helps improve overall software reliability and defect tracking.
Product Quality Assurance

• Objective:
• To confirm that software meets predefined quality standards before
deployment.

• Responsibilities:
• Supervising test execution to ensure compliance with best practices.
• Participating in review meetings to refine the testing process.
• Approving release and deployment only when all quality criteria are met.
Test Organization

• Testing is a process, and to ensure high-quality software, a well-structured testing organization is required.

• A dedicated testing group ensures better testing practices and quality assurance.
Responsibilities of a Testing Group

• Maintenance and application of test policies


• Development and application of testing standards
• Participation in requirement, design, and code reviews
• Test planning
• Test execution
• Test measurement
• Test monitoring
• Defect tracking
• Acquisition of testing tools
• Test reporting
Role of a Tester

• Test specialists, test engineers, or simply testers are part of the testing group.
• A tester is NOT a developer or an analyst.
• They do not debug or repair code.
• Their primary responsibility is ensuring effective testing and
addressing quality issues.
Essential Skills for a Tester

• Analytical thinking and problem-solving skills


• Attention to detail
• Strong understanding of testing methodologies
• Knowledge of automation tools
• Ability to create detailed test cases and reports
• Communication and collaboration skills
Testing Group Hierarchy
Structure of Testing Group
• Test Manager:
• Defines the overall testing strategy and ensures alignment with project
goals.
• Manages resources, monitors progress, and interacts with stakeholders.

• Test Leader:
• Assigns tasks, supervises engineers, and ensures proper test execution.

• Test Engineers:
• Design, implement, and execute test cases to verify software
functionality.
• Junior Test Engineers:
• Support senior engineers, execute predefined test cases, and document results.
Test Planning
• A test plan is defined as a document that describes the scope, approach,
resources, and schedule of intended testing activities.

• The test plan is driven by the business goals of the product.

• In order to meet a set of goals, the test plan identifies the following:
• Test items
• Features to be tested
• Testing tasks
• Tools selection
• Time and effort estimate
• Who will do each task
• Any risks
• Milestones
Test Plan Components
IEEE Std 829–1983 describes the test plan components, which are listed on the next slide. These components must appear for every test item.
Test Plan Components

• Test Plan Identifier: Unique identifier for the test plan.
• Introduction: Overview of the project and testing goals.
• Test Items and Features: List of items and features to be
tested.
• Approach: Overall testing strategy and techniques.
• Pass/Fail Criteria: Criteria for determining test success or
failure.
• Test Deliverables: Documents and reports to be produced.
• Environmental Needs: Hardware, software, and tools required.
• Responsibilities: Roles and responsibilities of team members.
• Scheduling: Timeline for testing activities.
• Risks and Contingencies: Potential risks and mitigation
strategies.
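
To make the components tangible, here is a minimal sketch of a test plan captured as a data structure, assuming Python (the slides themselves prescribe no language). The field names follow the component list above; all sample values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Illustrative container for the IEEE 829-1983 test plan components."""
    identifier: str                     # Test Plan Identifier
    introduction: str                   # Project overview and testing goals
    test_items: list[str]               # Items and features to be tested
    approach: str                       # Overall strategy and techniques
    pass_fail_criteria: str             # How success/failure is decided
    deliverables: list[str]             # Documents and reports to produce
    environmental_needs: list[str]      # Hardware, software, tools required
    responsibilities: dict[str, str]    # Role -> team member
    schedule: dict[str, str]            # Activity -> deadline
    risks_and_contingencies: list[str] = field(default_factory=list)

plan = TestPlan(
    identifier="TP-2024-001",
    introduction="Regression testing for release 2.1",
    test_items=["login", "checkout"],
    approach="Risk-based, black-box",
    pass_fail_criteria="All critical test cases pass",
    deliverables=["test log", "test summary report"],
    environmental_needs=["staging server", "test data set"],
    responsibilities={"Test Leader": "A. Khan"},
    schedule={"execution": "2024-03-15"},
)
```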
Test Plan Hierarchy
• Test plans can be organized in several ways depending on the organizational policy.
There is often a hierarchy of plans that includes several levels of quality assurance
and test plans.

• At the top of the plan hierarchy is a master plan which gives an overview of all
verification and validation activities for the project, as well as details related to other
quality issues such as audits, standards, and configuration control.

• Below the master test plan, there is individual planning for each activity in verification and validation, as shown on the next slide.

• The test plan at each level of testing must account for information that is specific to
that level, e.g. risks and assumptions, resource requirements, schedule, testing
strategy, and required skills.
Test Plan Hierarchy
The test plans for each level are written in an order such that the plan prepared first is executed last, i.e. the acceptance test plan is the first plan to be formalized but is executed last.

The reason for its early preparation is that the things required for its completion are available first.
Master Test Plan

• Purpose:
• Serves as the high-level document governing all test
activities.

• Steps:
• Define scope, goals, and objectives of testing.
• Identify necessary testing resources, tools, and personnel.
• Develop a structured schedule to ensure efficient test
execution.
• Evaluate risks and define mitigation strategies.
Verification Test Plan

• Verification test planning includes the following tasks:

1. The item on which verification is to be performed.


2. The method to be used for verification: review, inspection, walkthrough.
3. The specific areas of the work product that will be verified.
4. The specific areas of the work product that will not be verified.
5. Risks associated.
6. Prioritizing the areas of work product to be verified.
7. Resources, schedule, facilities, tools, and responsibilities.
Validation Test Plan

• Validation: Confirms that the final product meets user needs and business objectives.

• Validation test planning includes the following tasks:

1. Testing techniques
2. Testing tools
3. Support software and documents
4. Configuration management
5. Risks associated, such as budget, resources, schedule, and training.
Unit Test Plan

• Purpose:
• Validates that individual software modules function
correctly.

• To prepare for unit test, the developer must do the following tasks:
1. Prepare the unit test plan in a document form.
2. Design the test cases.
3. Define the relationship between tests.
4. Prepare the test harness, i.e. stubs and drivers, for unit testing (see the sketch below).
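
As a hedged illustration of task 4, the sketch below shows a tiny harness in Python: a stub (via unittest.mock) stands in for a lower-level dependency, and the test class acts as the driver that invokes the unit. The price_with_tax function and the tax-service interface are hypothetical.

```python
import unittest
from unittest.mock import Mock

# Unit under test: depends on a lower-level tax service (hypothetical).
def price_with_tax(amount, tax_service):
    return round(amount * (1 + tax_service.rate_for("default")), 2)

class PriceWithTaxTest(unittest.TestCase):
    """Driver: exercises the unit in isolation."""
    def test_applies_tax_rate(self):
        stub = Mock()                      # Stub: replaces the real service
        stub.rate_for.return_value = 0.20  # Fixed, predictable behaviour
        self.assertEqual(price_with_tax(100.0, stub), 120.0)

if __name__ == "__main__":
    unittest.main()
```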
Integration Test Plan

• Goal:
• Ensures that individual software modules work correctly
when integrated.

• Tasks:
• Identifying and mapping interactions between different
components.
• Designing interface-specific test cases.
• Developing necessary stubs and drivers to simulate
missing components
Questions

• What is the role of a test plan in software testing, and what key components does it include?

• How does risk analysis influence test planning? Provide an example.

• Explain the difference between features to be tested and features not to be tested in a test plan.

• Why is it important to define pass/fail criteria in a test plan?
System & Acceptance Test Plans

• System Testing:
• Validates the complete software system’s performance,
security, and scalability.

• Acceptance Testing:
• Confirms that the software meets all user requirements
before deployment.
• Defines clear pass/fail acceptance criteria to measure
readiness.
Test Estimation
• Purpose:
• Estimate the effort, time, and resources required for testing.
• Techniques:
• Work Breakdown Structure (WBS): Breaks testing tasks into smaller
components.
• Expert Judgment: Uses the experience of senior testers.
• Historical Data: Uses data from previous projects.
• Delphi Technique: Consensus-based estimation.

• Factors Affecting Estimation:


• Complexity of the Application: More complex applications require
more testing.
• Team Experience: Experienced teams may require less time.
• Resource Availability: Availability of tools and environments affects
estimation.
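
A minimal sketch of WBS-style estimation, assuming Python; the task breakdown and the experience adjustment factor are invented for illustration.

```python
# Hypothetical WBS entries: task -> estimated effort in person-hours.
wbs = {
    "review requirements": 8,
    "design test cases": 24,
    "prepare test environment": 12,
    "execute test cases": 40,
    "report and retest defects": 16,
}

def estimate_effort(tasks, experience_factor=1.0):
    """Sum task estimates; factors above 1.0 inflate for less experienced teams."""
    return sum(tasks.values()) * experience_factor

print(estimate_effort(wbs))       # baseline: 100.0 hours
print(estimate_effort(wbs, 1.3))  # less experienced team: 130.0 hours
```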
Test Monitoring and Control

• Purpose:
• Track progress and ensure testing stays on track.

• Key Activities:
• Monitor Test Execution Progress: Track the execution of test cases.
• Track Defects: Monitor the status of defects.
• Compare Actual Progress with Plan: Ensure testing is on schedule.
• Take Corrective Actions: Address deviations from the plan.

• Metrics Used:
• Test Case Execution Rate: Percentage of test cases executed.
• Defect Detection Rate: Number of defects found.
• Test Coverage: Percentage of requirements covered by tests.
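
These three metrics reduce to simple ratios, so they are easy to compute from raw counts. A sketch assuming Python; all counts are illustrative.

```python
def execution_rate(executed, total):
    """Test Case Execution Rate: % of planned test cases executed."""
    return 100 * executed / total

def defect_detection_rate(defects_found, executed):
    """Defects found per executed test case."""
    return defects_found / executed

def test_coverage(covered_requirements, total_requirements):
    """Test Coverage: % of requirements exercised by at least one test."""
    return 100 * covered_requirements / total_requirements

print(execution_rate(180, 240))        # 75.0
print(defect_detection_rate(36, 180))  # 0.2
print(test_coverage(45, 50))           # 90.0
```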
Test Execution & Reporting

• Test Log: Documents all executed test cases and their results.

• Test Incident Report: Captures software defects and unexpected behaviors.

• Test Summary Report: Evaluates the overall effectiveness of the testing effort.
Test Execution and Defect
Management

• Test Execution:
• Execute test cases as per the test plan.
• Log results (Pass/Fail).

• Defect Management:
• Defect Lifecycle: New → Assigned → Open → Fixed →
Retest → Closed.
• Defect Reporting: Include severity, priority, steps to
reproduce, and screenshots.

• Tools: Jira, Bugzilla, Mantis.
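
The lifecycle above can be treated as a small state machine, which is roughly how trackers such as Jira enforce workflow. A hedged sketch in Python; real tools allow richer, configurable transitions.

```python
# Allowed transitions in the lifecycle shown above (simplified).
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Open"},  # reopened if the fix fails retest
    "Closed": set(),
}

def advance(current, nxt):
    """Move a defect to its next state, rejecting illegal jumps."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {nxt}")
    return nxt

state = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Closed"]:
    state = advance(state, step)
print(state)  # Closed
```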


Questions

• What are the key responsibilities of a test manager during the test execution phase?
• Why is defect tracking crucial in test management, and how does it improve software quality?
• Describe a situation where test execution might be suspended and explain what conditions must be met for testing to resume.
• How do test metrics help in monitoring the progress of testing activities? Provide two examples of common test metrics.
Test Closure
• Purpose:
• Evaluate the testing process and document lessons learned.

• Key Activities:
• Ensure all test cases are executed.
• Verify all defects are resolved.
• Prepare test summary report.
• Document lessons learned.

• Test Summary Report Components:


• Test Coverage: Percentage of requirements covered.
• Defect Statistics: Number of defects found and resolved.
• Testing Metrics: Metrics used to evaluate testing progress.
• Recommendations: Suggestions for future improvements.
Tools for Test Management

• Popular Tools:
• TestRail: Test case management.
• Jira: Defect tracking and project management.
• Zephyr: Test management plugin for Jira.
• qTest: End-to-end test management.

• Features to Look For:


• Test Case Management: Organize and manage test cases.
• Defect Tracking: Track and manage defects.
• Integration with CI/CD Tools: Integrate with continuous
integration/continuous deployment tools.
• Reporting and Analytics: Generate reports and analyze test results.
Detailed Test Design and Test
Specifications

• Traceability Matrix:
• Maps requirements to test cases.
• Ensures all requirements are covered.

• Test Design Specification:


• Identifier: Unique identifier for the test design.
• Features to be Tested: List of features to be tested.
• Approach Refinements: Detailed testing approach.
• Test Case Identification: List of test cases for each feature.
• Feature Pass/Fail Criteria: Criteria for determining test success or
failure.
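
Since the traceability matrix is just a mapping from requirements to covering test cases, uncovered requirements fall out of it mechanically. A sketch assuming Python, with invented requirement and test case IDs.

```python
# Requirement ID -> test case IDs that exercise it (illustrative data).
traceability = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],  # gap: no test covers this requirement
}

uncovered = [req for req, cases in traceability.items() if not cases]
coverage = 100 * (len(traceability) - len(uncovered)) / len(traceability)

print(f"Coverage: {coverage:.0f}%")  # Coverage: 67%
print("Uncovered:", uncovered)       # Uncovered: ['REQ-3']
```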
Test Case Specifications

• Components:
• Test Case Specification Identifier: Unique identifier for
the test case.
• Purpose: Purpose of the test case.
• Test Items Needed: References to related documents.
• Special Environmental Needs: Hardware, software, or
tools required.
• Input Specifications: Input values for the test case.
• Test Procedure: Steps to execute the test case.
• Output Specifications: Expected output values.
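
Here is one way such a specification might be captured as a structured record, assuming Python; the field names mirror the components above, and every value is invented for illustration.

```python
# An illustrative test case specification as a structured record.
test_case = {
    "identifier": "TC-LOGIN-004",
    "purpose": "Verify lockout after repeated failed logins",
    "test_items": ["SRS section 3.2 (authentication)"],
    "environmental_needs": ["staging server", "test account"],
    "input_specification": {"username": "demo", "bad_password_attempts": 5},
    "test_procedure": [
        "Open the login page",
        "Enter the username with a wrong password five times",
        "Attempt a sixth login with the correct password",
    ],
    "output_specification": "Account is locked; sixth attempt is rejected",
}
```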
Test Procedure Specifications

• Purpose:
• Define the sequence of steps to execute a test case.
• Components:
• Test Procedure Identifier: Unique identifier for the test
procedure.
• Purpose: Purpose of the test procedure.
• Steps to Execute the Test: Detailed steps to execute the
test.
• Expected Results: Expected outcomes of the test.
Test Result Specifications

• Test Log:
• Record of testing events.
• Test Incident Report:
• Report unexpected events during testing.
• Test Summary Report:
• Summary of all tests executed.
Questions?

1. What are test deliverables, and why are they necessary in the test management process?

2. Explain the importance of test summary reports and what key details they should include.

3. How does traceability between test cases and requirements improve the effectiveness of testing?

4. What challenges might arise in test reporting, and how can they be addressed?
Test Strategy

• Once you’ve established your testing mission, how will you achieve it?

• Things to consider:
• Project Environment
• Product Elements
• Quality Criteria

• James Bach’s Heuristic Test Strategy Model can be used to guide the evaluation of these factors:
• http://www.satisfice.com/tools/htsm.pdf
Exercise: Test Management
Objective: Manually manage a test plan using documents and spreadsheets.

• Step 1: Create a Test Plan (Word or Google Docs)
1. Write a Test Plan Document including:
   • Test objectives
   • Scope of testing
   • Features to be tested
   • Test environment details
   • Pass/Fail criteria

• Step 2: Design Test Cases (Excel or Google Sheets)
2. Create a spreadsheet with columns for (see the sketch after this list):
   • Test Case ID
   • Test Scenario
   • Test Steps
   • Expected Result
   • Actual Result
   • Status (Pass/Fail)
   • Comments

• Step 3: Execute Tests and Track Defects (Excel or Docs)
3. Mark test cases as Passed or Failed in the spreadsheet.
4. For failed test cases, create a Defect Log Document to track:
   • Defect ID
   • Test Case ID
   • Description
   • Severity (Critical, Major, Minor)
   • Assigned Developer
   • Fix Status

• Step 4: Create a Test Summary Report (Docs)
5. Summarize test execution progress:
   • Total test cases executed
   • Pass/fail ratio
   • Major defects found
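
The exercise is deliberately manual, but if you later want to bootstrap the Step 2 spreadsheet rather than typing the header by hand, a few lines of code can emit it as CSV. A sketch assuming Python; the filename and sample row are arbitrary.

```python
import csv

COLUMNS = ["Test Case ID", "Test Scenario", "Test Steps",
           "Expected Result", "Actual Result", "Status (Pass/Fail)", "Comments"]

with open("test_cases.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)  # header row matching Step 2
    writer.writerow(["TC-01", "Valid login", "Enter valid credentials; submit",
                     "Dashboard is shown", "", "", ""])
```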
Test Strategy
Risk Based Test Management

• Exhaustive testing is impossible
• Testing is ALWAYS a sampling exercise
• In order to maximize test effectiveness, intelligent sample
selection is essential
• RBTM provides a means of:
• selecting and prioritizing types of testing
• selecting and prioritizing what should be tested
• guiding test design decisions
• guiding execution sequence
• reporting results
Risk Based Test Management

• Risk is a product of technical probability and business consequence.

• We differentiate between:
• Project risks (risks that can affect the project)
• Quality risks (risks that can affect the quality of the software
we are testing)

• RBTM is concerned with Quality risks


Risk Based Test Management

• Risks are managed in a continuous cycle: Identification → Analysis → Control
• Identification: potential risks are identified; this is a continuous process.
• Analysis: identified risks are quantified in terms of probability and impact, which are used to calculate a “Risk Priority” (see the sketch below).
• Control: an appropriate control is identified and implemented.

Traditionally, these controls could be:
• Mitigate – do something to reduce the probability or impact
• Contingency – make a plan for the risk occurring
• Transfer – accept the risk and pass it on to another party (insurance is an example)
• Ignore – accept the risk and do nothing

Testing is a form of risk mitigation
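
Because risk priority is simply probability multiplied by impact, the Analysis step is easy to make concrete. A sketch assuming Python; the risk register and the 1–5 scales are invented, and real organizations use their own scales.

```python
# Hypothetical quality-risk register: (probability, impact) on a 1-5 scale.
risks = {
    "payment rounding errors": (2, 5),
    "slow search on large catalogues": (4, 3),
    "broken layout on mobile": (3, 2),
}

# Risk priority = technical probability x business consequence.
prioritized = sorted(risks.items(),
                     key=lambda item: item[1][0] * item[1][1],
                     reverse=True)

for name, (p, i) in prioritized:
    print(f"{p * i:>2}  {name}")  # test the highest-priority risks first
```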


Exercise

• In this exercise you will evaluate the context of a project, identify potential quality risks, and determine an appropriate test strategy
Appendix

Refer to this content for more information.

The slides before this point will be covered in the examination.


Process Models

• There are a number of process models that describe how testing should proceed
• We are going to look at a generic testing process, and the “V Model”

• “Essentially, all models are wrong, but some are useful” - George E. P. Box
• There are a number of rational objections to both of these
models
Process Models

• A generic testing process:

Test Planning → Test Design → Test Execution → Test Closure
(Monitoring & Control runs across all four activities)
Process Models

• The V Model (each development phase on the left arm pairs with a test level on the right arm):

User Requirements ↔ Acceptance Test
System Requirements ↔ System Test
Conceptual Design ↔ Integration Test
Detailed Design ↔ Unit Test
Implementation (the base of the V)
Process Models

• To what degree can testing be pre-planned?
• What about tests which are identified during execution?

• Why wait for a complete implementation before testing?
• What about bugs identified during testing?
Controversies: Testing vs. Checking

• Tests, or test cases, are often defined as:
• Prerequisites
• Inputs
• Expected results

• In this model, testing is seen as little more than checking that actual results match expected results: little more than a clerical task

• Much (but not all) automation can be seen in a similar light


Controversies: Testing vs. Checking

• Checking is the act of confirming an existing belief
• Checks are machine-decidable

• Testing is a process of exploration, discovery, investigation, and learning
• Testing requires sapience
• Testing might contain checking, but is not checking

• See “Testing vs. Checking”, Michael Bolton: http://www.developsense.com/blog/2009/08/testing-vs-checking/
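
To make the distinction concrete: a check is machine-decidable, such as an assertion comparing actual output to an expected value. A sketch assuming Python, with an invented function; deciding which comparisons matter, and interpreting a surprising result, is the testing.

```python
def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# A check: confirms an existing belief, and a machine can decide it.
assert apply_discount(200.0, 10) == 180.0
```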
Controversies: Repeatability

• Any change to software introduces the risk that previously working features are compromised, i.e. that the software “regresses”
• In order to mitigate this risk, regression tests are run: tests are re-executed after a change in order to detect change
• Therefore, “repeatability” is often cited as a desirable trait of tests
Controversies: Repeatability

• “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective” – Boris Beizer

• As exhaustive testing is impossible, even the largest test pack will miss bugs:
• Running the same tests over and over will continue to miss the same bugs
• This is a major downside to “repeatable” regression tests
(Diagram from James Bach)

• Only by varying the tests performed will the missed bugs be found
• Don’t confuse repeatability and reproducibility
Controversies: Scripting

• Traditional approaches to testing involve writing step-by-step scripts in advance of test execution
• Benefits that are often cited for doing so include:
• Preparing tests in advance makes testing easier to manage
• Scripts ensure that tests can be repeated precisely every time they are executed
• Less experienced testers can learn the software
• Less experienced testers can still be productive, so tests can be given to cheaper testers - or outsourced to testers in cheaper locations
• Tests can be reviewed by other stakeholders on the project
• Let’s look at these claims…
Controversies: Scripting

• Preparing tests in advance makes testing easier to manage
• Testing is learning: learning is an exploratory, iterative, process
• Imagine asking a detective to script his questions before interviewing a suspect – how much do you think the detective would learn?

• Scripts ensure that tests can be repeated precisely every time they are executed
• We’ve already discussed repeatability
• Can we really script with this level of precision? Can we really specify every condition?
Controversies: Scripting

• Less experienced testers can learn the software
• Following a script is an ineffective way to learn
• Ever used a GPS to navigate? How well did you learn the route?

• Less experienced testers can still be productive, so tests can be given to cheaper testers - or outsourced to testers in cheaper locations
• Scripts can significantly constrain a tester’s insight (inattentional blindness)

• Tests can be reviewed by other stakeholders on the project
• Detailed scripts tend to be less conducive to review than checklists or visual representations
Controversies: Automation

• Agile development has helped to bring test automation into the mainstream
• Practices such as Test Driven Development (TDD), Behaviour Driven Development (BDD), and Acceptance Test Driven Development (ATDD) are gaining widespread popularity
• Many projects and organizations (e.g. eBay, in the US) are setting goals to automate ALL testing
• But…
Controversies: Automation

• Not all tests or testing tasks can be automated


• Can you automate judgement? Insight?
• Automated tests can only “notice” the things they have been
programmed to notice (inattentional blindness in extremis)
• Automated tests, like scripts, must be prepared in advance: this
introduces inertia to reacting to change or new information
Controversies: Test Documentation

• Many organizations place a heavy emphasis on test documentation
• This can include:
• Test Strategy
• Test Plan
• Test Cases, Test Scripts
• Bug Reports
• Test Completion Reports
Controversies: Test Documentation

• Testing is not the documentation
• Documentation does not directly contribute to the generation of information, though it can play a part in its communication
• Time spent on activities that do not contribute to the generation of information represents an opportunity cost in terms of test coverage – and can reduce the overall value of the testing
• The Agile and Lean movements have contributed to understanding this trade-off

• Many lightweight approaches, such as mind-map based test plans, are beginning to emerge
• Sometimes documentation may be required to comply with regulatory requirements
Controversies: Metrics

• Software testing often makes use of a wide variety of metrics, e.g.:
• Test pass / fail ratios
• Bug arrival rates
• Bug detection effectiveness
• Bugs per KLOC (kilo lines of code)
• Etc.
Controversies: Metrics

• Most common measurements commit reification error (treating a construct as an objective, countable phenomenon)
• Thought exercise: 80% of the tests passed
• What does this mean?
• Which tests failed?
• How did they fail?
• What does “fail” mean anyway?
• Were the tests any good anyway?
• What tests haven’t been executed – or even thought of?

• “Not everything that counts can be counted, and not everything that can be counted counts” - Einstein
Summary

• Determining a suitable test strategy can be something of an art form
• Process models might offer some help but can introduce unnecessary constraints to your thinking
• There are multiple – and fundamental – disagreements amongst both academics and practitioners as to how to approach testing
