ISTQB Test Tools

The document discusses different tools used for testing software categorized by test activities such as management, static testing, test design and implementation, test execution and logging, performance measurement and dynamic analysis, and specialized testing needs. It notes benefits of tools like reduced manual work and increased consistency, but also risks like unrealistic expectations, underestimating costs and efforts, and over-reliance on automation. Effective use of tools requires considering relationships between tools, version control, and not viewing tools as replacements for testing activities.


Test Tools

classification according to test activities

Tools for management of testing and testware:
* Test management tools and application lifecycle management (ALM) tools
* Requirements management tools (traceability to test objects)
* Defect management tools
* Configuration management tools
* Continuous integration tools

Tools for static testing:
* Static analysis tools
NB: Some tools offer support that is typically more appropriate for developers.

Tools for test design and implementation:
* Model-Based testing tools
* Test data preparation tools

Tools for test execution and logging:
* Test execution tools (to run regression tests)
* Coverage tools (requirements coverage, code coverage)
* Test harnesses – automated test frameworks; a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and output (a minimal harness sketch follows this list)

Tools for performance measurement and dynamic analysis:
* Performance testing tools
* Dynamic analysis tools
NB: Some tools can be intrusive: they may affect the actual outcome of the test (response times, amount of code coverage). This is called the probe effect.

Tools for specialized testing needs:
* Other tools supporting more specific testing for non-functional characteristics
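The test harness bullet above describes running a program unit under varying conditions while monitoring its behavior and output. Below is a minimal sketch of that idea in Python; the discount() function standing in for the program unit, the condition table and the log format are all invented for illustration and do not come from the syllabus:

# Minimal test-harness sketch (hypothetical example): drive one program unit
# under varying input conditions and record its behavior and output.

def discount(price, is_member):
    # Stand-in for the real program unit under test.
    return round(price * (0.9 if is_member else 1.0), 2)


# Test data: the varying conditions the harness will exercise.
CONDITIONS = [
    (100.0, True),
    (100.0, False),
    (0.0, True),
    (-5.0, False),  # boundary / unusual condition
]


def run_harness(unit, conditions):
    log = []
    for args in conditions:
        try:
            outcome = ("ok", unit(*args))
        except Exception as exc:  # monitor failures as well as normal output
            outcome = ("error", repr(exc))
        log.append({"inputs": args, "outcome": outcome})
    return log


if __name__ == "__main__":
    for entry in run_harness(discount, CONDITIONS):
        print(entry)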

Special considerations

Test management tools often need to interface with other tools or spreadsheets for various reasons, including:

- to produce useful information in a format that fits the needs of the organization

- to maintain consistent traceability to requirements in a requirements management tool

- to link with test object version information in the configuration management tool

This is particularly important to consider when using an integrated tool (ALM), which includes a test management module as well as other modules (project schedule and budget information) that are used by different groups within an organization.

Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.

* Capturing test approach: Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur, and it requires ongoing maintenance as the system's user interface evolves over time.

* Data-driven test approach: This test approach separates out the test inputs and expected results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data (see the sketch after this list).

* Keyword-driven test approach: In this test approach, a generic script processes keywords describing the actions to be taken (also called action words), which then calls keyword scripts to process the associated test data (a second sketch further below illustrates this).
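Here is a minimal sketch of the data-driven approach described in the list above, written as a pytest script. The login() function, the LOGIN_CASES table and its field names are hypothetical; in practice the rows would be read from a spreadsheet or CSV file rather than inlined:

# Minimal data-driven sketch (hypothetical example): one generic test script,
# executed once per data row.
import pytest

# Rows of inputs and expected results -- normally loaded from a spreadsheet/CSV.
LOGIN_CASES = [
    {"username": "alice", "password": "secret", "expected": "ok"},
    {"username": "alice", "password": "wrong", "expected": "denied"},
    {"username": "", "password": "", "expected": "denied"},
]


def login(username, password):
    # Placeholder for the real test object (a call into the application under test).
    return "ok" if (username, password) == ("alice", "secret") else "denied"


@pytest.mark.parametrize("case", LOGIN_CASES)
def test_login_data_driven(case):
    # The script logic never changes; only the data varies per execution.
    assert login(case["username"], case["password"]) == case["expected"]

Because only the data table grows, new test cases can be added without touching the script logic.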

The above approaches require someone to have expertise in the scripting language (testers, developers or specialists in test automation).
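And here is a minimal sketch of the keyword-driven approach: a generic interpreter looks up each action word in a keyword table and calls the matching keyword script with its associated test data. The action words (OpenApp, EnterText, CheckField), the keyword functions and the shared state dictionary are all invented for illustration:

# Minimal keyword-driven sketch (hypothetical example).

def open_app(state, name):
    state["app"] = name


def enter_text(state, field, value):
    state.setdefault("fields", {})[field] = value


def check_field(state, field, expected):
    assert state["fields"][field] == expected, f"{field!r} != {expected!r}"


# Keyword table: action word -> keyword script (plain functions here).
KEYWORDS = {"OpenApp": open_app, "EnterText": enter_text, "CheckField": check_field}

# A test case expressed as (action word, test data) rows, e.g. exported from a spreadsheet.
TEST_CASE = [
    ("OpenApp", ("demo-shop",)),
    ("EnterText", ("search", "laptop")),
    ("CheckField", ("search", "laptop")),
]


def run(test_case):
    # The generic script: it only knows how to look up and dispatch keywords.
    state = {}
    for action, data in test_case:
        KEYWORDS[action](state, *data)


if __name__ == "__main__":
    run(TEST_CASE)
    print("keyword-driven test case passed")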

Model-Based testing tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications, which can then be saved in a test management tool and/or executed by a test execution tool.
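As a rough illustration of the model-based idea above, the sketch below encodes a tiny state model of a login dialog and derives one test case specification per transition. Real MBT tools work from richer models (such as activity diagrams) and far more sophisticated generation strategies; the model, states and strategy here are invented for illustration:

# Minimal model-based testing sketch (hypothetical example).

# Model: state -> {action: next state}
MODEL = {
    "LoggedOut": {"submit_valid": "LoggedIn", "submit_invalid": "Error"},
    "Error": {"retry": "LoggedOut"},
    "LoggedIn": {"logout": "LoggedOut"},
}


def derive_test_cases(model):
    # One straightforward strategy: emit one (given, when, then) specification per transition.
    cases = []
    for state, transitions in model.items():
        for action, target in transitions.items():
            cases.append({"given": state, "when": action, "then": target})
    return cases


if __name__ == "__main__":
    for i, case in enumerate(derive_test_cases(MODEL), start=1):
        # These specifications could be saved in a test management tool or handed
        # to a test execution tool, as described above.
        print(f"TC-{i:02d}: given {case['given']}, when {case['when']}, then {case['then']}")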

Test Tools
Benefits and Risks. Effective Use.

Benefits

- Reduction in repetitive manual work (running regression tests, environment set up/tear down tasks, re-entering the same test data, and checking against coding standards), thus saving time.

- Greater consistency and repeatability (test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements).

- More objective assessment (static measures, coverage).

- Easier access to information about testing (statistics and graphs about test progress, defect rates and performance).

Risks

- Expectations for the tool may be unrealistic (including functionality and ease of use).

- The time, cost and effort for the initial introduction of a tool may be under-estimated (including training and external expertise).

- The time and effort needed to achieve significant and continuing benefits from the tool may be under-estimated (including the need for changes in the test process and continuous improvement in the way the tool is used).

- The effort required to maintain the test work products generated by the tool may be under-estimated.

- The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better).

- Version control of test work products may be neglected.

- Relationships and interoperability issues between critical tools may be neglected, such as requirements management tools, configuration management tools, defect management tools and tools from multiple vendors.

- The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor.

- The vendor may provide a poor response for support, upgrades, and defect fixes.

- An open source project may be suspended.

- A new platform or technology may not be supported by the tool.

- There may be no clear ownership of the tool (for mentoring, updates, etc.).

Main Principles for Tool Selection

* Assessment of the maturity of the own organization, its strengths and weaknesses

* Identification of opportunities for an improved test process supported by tools

* Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology

* Understanding the build and continuous integration tools already in use within the organization, in order to ensure tool compatibility and integration

* Evaluation of the tool against clear requirements and objective criteria

* Consideration of whether or not the tool is available for a free trial period (and for how long)

* Evaluation of the vendor (including training, support and commercial aspects) or support for non-commercial (open source) tools

* Identification of internal requirements for coaching and mentoring in the use of the tool

* Evaluation of training needs, considering the testing (and test automation) skills of those who will be working directly with the tool(s)

* Consideration of pros and cons of various licensing models (commercial or open source)

* Estimation of a cost-benefit ratio based on a concrete business case (if required)

Pilot Project for Introducing a Tool

After selecting a tool based on a proof-of-concept evaluation, introduce the selected tool by starting with a pilot project, which has the following objectives:

- Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses

- Evaluating how the tool fits with existing processes and practices, and determining what would need to change

- Deciding on standard ways of using, managing, storing, and maintaining the tool and the test work products (deciding on naming conventions for files and tests, selecting coding standards, creating libraries and defining the modularity of test suites)

- Assessing whether the benefits will be achieved at reasonable cost

- Understanding the metrics that you wish to collect and report, and configuring the tool to ensure these metrics can be captured and reported

Success Factors for Tools

- Rolling out the tool to the rest of the organization incrementally

- Adapting and improving processes to fit with the use of the tool

- Providing training, coaching, and mentoring for tool users

- Defining guidelines for the use of the tool (internal standards for automation)

- Implementing a way to gather usage information from the actual use of the tool

- Monitoring tool use and benefits

- Providing support to the users of a given tool

- Gathering lessons learned from all users

- Ensuring that the tool is technically and organizationally integrated into the SDLC
https://github.com/fromlegaltotech
