Guidelines for Automated Testing
1.0 Introduction
This document details the guidelines necessary for carrying out automated testing effectively.
By using automated techniques, the tester has a very high degree of control over
which types of tests are being performed, and how the tests will be executed. Using
automated tests enforces consistent procedures that allow developers to evaluate
the effect of various application modifications as well as the effect of various user
actions.
For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value.
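As a minimal illustration of this data-driven approach, the Python sketch below reads input values and expected results from an external CSV file and runs the same check against each row. The file name, column names, and the discount function are hypothetical stand-ins, not part of any particular tool.

    import csv

    def calculate_discount(order_total):
        # Hypothetical stand-in for the application logic under test.
        return order_total * 0.10 if order_total >= 100 else 0.0

    def run_data_driven_test(data_file):
        # Each CSV row supplies one input value and its expected result.
        failures = []
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):
                actual = calculate_discount(float(row["order_total"]))
                expected = float(row["expected_discount"])
                if abs(actual - expected) > 0.01:
                    failures.append((row, actual))
        return failures

    if __name__ == "__main__":
        # discounts.csv holds the columns: order_total, expected_discount
        for row, actual in run_data_driven_test("discounts.csv"):
            print("FAIL:", row, "actual:", actual)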
6. Results Reporting
Most full-featured automated testing systems also produce convenient test reporting and
analysis. These reports provide a standardized measure of test status and results,
thus allowing more accurate interpretation of testing outcomes. Manual methods
require the user to self-document test procedures and test results.
3.0 Tasks
High Path Frequency - Automated testing can be used to verify the performance of
application paths that are used with a high degree of frequency when the software is
running in full production. Examples include creating customer records, invoicing, and other high-volume activities where software failures would occur frequently.
Repetitive Testing - If a testing procedure can be reused many times, it is also a prime
candidate for automation. For example, common outline files can be created to establish
a testing session, close a testing session and apply testing values. These automated
modules can be used again and again without having to rebuild the test scripts. This
modular approach saves time and money when compared to creating a new end-to-end
script for each and every test.
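As a sketch of this modular approach (in Python, with all module and function names purely illustrative), a common outline module might look like the following, with each test script importing it rather than rebuilding the session logic:

    # common_outline.py - a shared module reused by many test scripts
    def open_test_session(app_url):
        # Establish the testing session (launch or connect to the application).
        print("session opened against", app_url)
        return {"url": app_url, "open": True}

    def apply_test_values(session, values):
        # Feed a set of input values into the application under test.
        print("applying", values, "to", session["url"])

    def close_test_session(session):
        # Tear the session down so the next test starts clean.
        session["open"] = False
        print("session closed")

    # test_invoicing.py - one of many scripts reusing the outline
    from common_outline import (open_test_session, apply_test_values,
                                close_test_session)

    def test_create_invoice():
        session = open_test_session("https://app.example.com")
        try:
            apply_test_values(session, {"customer": "C001", "amount": 250})
        finally:
            close_test_session(session)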
A robust testing tool should have the capability to manage the testing process, provide
organization for testing components, and create meaningful end-user and management
reports. It should also allow users to include non-automated testing procedures within
automated test plans and test results. A robust tool will allow users to integrate existing
test results into an automated test plan. Finally, an automated testing tool should be able to link
business requirements to test results, allowing users to evaluate application readiness
based upon the application's ability to support the business requirements.
Testing tools should provide tightly integrated modules that support test component
reusability. Test components built for performing functional tests should also support
other types of testing including regression and load/stress testing. All products within the
testing product environment should be based upon a common, easy-to-understand
language. User training and experience gained in performing one testing task should be
transferable to other testing tasks. Also, the architecture of the testing tool environment
should be open to support interaction with other technologies such as defect or bug
tracking packages.
Internet/Intranet Testing -
A good tool will have the ability to support testing within the scope of a web browser.
The tests created for testing Internet or intranet-based applications should be portable
across browsers, and should automatically adjust for different load times and
performance levels.
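A minimal sketch of such a browser-portable test, assuming the third-party Selenium WebDriver bindings for Python; the URL and element ID are hypothetical. An explicit wait absorbs differing page-load times instead of a fixed delay:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def check_login_page(driver):
        driver.get("https://intranet.example.com/login")
        # Wait up to 30 seconds for the control, however slow the page loads.
        field = WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.ID, "username")))
        assert field.is_displayed()

    # The identical test body runs against each browser in turn.
    for make_driver in (webdriver.Firefox, webdriver.Chrome):
        driver = make_driver()
        try:
            check_login_page(driver)
        finally:
            driver.quit()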
Ease of Use -
A robust testing tool should support testing with a variety of user interfaces and create
simple-to-manage, easy-to-modify tests. Test component reusability should be a
cornerstone of the product architecture.
The selected testing solution should allow users to perform meaningful load and
performance tests to accurately measure system performance. It should also provide
test results in an easy-to-understand reporting format.
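As a rough illustration only (a dedicated load-testing tool provides far richer measurement and reporting), the standard-library Python sketch below fires a batch of concurrent requests at a hypothetical endpoint and summarizes the response times:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def timed_request(url):
        # Issue one request and return its elapsed time in seconds.
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start

    def load_test(url, users=25):
        # Simulate concurrent users with a thread pool and collect timings.
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = sorted(pool.map(timed_request, [url] * users))
        print("requests:", len(times),
              "min: %.3fs" % times[0],
              "median: %.3fs" % times[len(times) // 2],
              "max: %.3fs" % times[-1])

    if __name__ == "__main__":
        load_test("https://app.example.com/health", users=25)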
For those situations that require outside expertise, the testing tool vendor should be
able to provide extensive consulting, implementation, training, and assessment
services. The test tools should also support a structured testing methodology.
Note: Refer to the “Test Tools Evaluation and Selection Report” template for evaluating various tools on common parameters.
Begin the automated testing process by defining exactly what tasks your application
software should accomplish in terms of the actual business activities of the end-user.
The definition of these tasks, or business requirements, defines the high-level,
functional requirements of the software system in question. These business
requirements should be defined in such a way that it is abundantly clear whether the software system correctly performs the necessary business functions.
For example, a business requirement for a payroll application might be to calculate a
salary, or to print a salary check.
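For illustration, the sketch below expresses the "calculate a salary" requirement as an automated check using Python's standard unittest module. The salary calculation shown (time and a half over 40 hours) is a hypothetical stand-in for the real payroll rule; the point is that the test passes or fails in terms the business understands.

    import unittest

    def calculate_salary(hours, hourly_rate):
        # Hypothetical stand-in for the payroll calculation under test.
        overtime = max(0, hours - 40)
        return (min(hours, 40) * hourly_rate) + (overtime * hourly_rate * 1.5)

    class CalculateSalaryRequirement(unittest.TestCase):
        def test_standard_week(self):
            self.assertEqual(calculate_salary(40, 10.0), 400.0)

        def test_overtime_paid_at_time_and_a_half(self):
            self.assertEqual(calculate_salary(45, 10.0), 475.0)

    if __name__ == "__main__":
        unittest.main()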
This is the most crucial phase in the Automation cycle. Script planning essentially
establishes the test architecture. All the modules are identified at this stage. The various
modules are used for navigation, manipulating controls, data entry, data validation, error
identification, and writing out logs. Reusable modules consist of commands, logic, and
data. Generic modules used throughout the testing system, such as initialization and setup functionality, are generally grouped together in files named to reflect their primary function, such as "Init" or "setup". Others that are more application-specific, designed to
service controls on a customer window, for example, are also grouped together and
named similarly. All the identified functions/modules/libraries/scripts are named as per
the Standard Naming Conventions followed. The test data planning is also done in this
phase.
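The sketch below illustrates this grouping and naming idea with two hypothetical generic library files, one for logging and one for data validation; all file and function names are illustrative of a naming convention, not prescribed by any tool.

    # lib_logging.py - generic logging helpers used throughout the test system
    import logging

    def get_test_logger(name="automation"):
        logger = logging.getLogger(name)
        if not logger.handlers:
            handler = logging.FileHandler("test_run.log")
            handler.setFormatter(
                logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
            logger.addHandler(handler)
            logger.setLevel(logging.INFO)
        return logger

    # lib_validation.py - generic data-validation helper
    def verify_equals(logger, description, actual, expected):
        # Compare a value against an expectation and log the outcome.
        if actual == expected:
            logger.info("PASS: %s", description)
            return True
        logger.error("FAIL: %s (expected %r, got %r)",
                     description, expected, actual)
        return False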
3.4 Development of Libraries
All common functions are developed and tested to promote repeatability and
maintainability.
3.5 Development of Scripts
The actual scripts are developed and tested.
The scripts and libraries are integrated to form test suites that can run independently
with minimum user intervention.
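A minimal sketch of such a suite runner, assuming Python's standard unittest framework and a tests directory of test_*.py scripts; discovery keeps the suite current without manual maintenance, and the exit code lets a scheduler run it unattended:

    import unittest

    if __name__ == "__main__":
        # Discover every test_*.py script under the tests directory.
        suite = unittest.TestLoader().discover("tests", pattern="test_*.py")
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        # A non-zero exit code signals failure to an unattended scheduler.
        raise SystemExit(0 if result.wasSuccessful() else 1)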
3.6 Deployment/Testing of Suites
The various test suites are run on the different machine configurations. The defects are
reported and tracked with a defect-tracking tool. The suites are run again if regression
testing is needed.
[Figure: Automation process flow covering test case/script planning and naming conventions, design, and test suite deployment/testing, with version control, test repositories, and tool setup as supporting inputs.]
4.0 Additional Guidelines
1. Preparation Timeframe - The preparation time for automated test scripts has to be taken into account. In general, the preparation time for automated scripts can be two to three times longer than for manual testing. In reality, chances are that initially the tool will
actually increase the testing scope. It is therefore very important to manage
expectations. An automated testing tool does not replace manual testing, nor does it
replace the test engineer. Initially, the test effort will increase, but when automation is
done correctly it will decrease on subsequent releases.
2. Return on Investment - Because the preparation time for test automation is so long, it is often said that the benefit of test automation only begins to occur after approximately the third time the tests have been run.
3. When is the benefit to be gained? - Choose your objectives wisely, and think seriously about when and where the benefit is to be gained. If your application is changing significantly on a regular basis, forget about test automation - you will spend so much time updating your scripts that you will not reap many benefits. [However, if only disparate sections of the application are changing, or the changes are minor - or if there is a specific section that is not changing - you may still be able to successfully utilize automated tests.] Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release, i.e. nearly fully tested! If your application is very buggy, the likelihood is that you will not be able to run a complete suite of automated tests, due to the failing functions encountered.
4. The Degree of Change - The best use of test automation is for regression testing, whereby you use automated tests to ensure that pre-existing functions (e.g. functions from version 1.0, i.e. not new functions in this release) are unaffected by any changes introduced in version 1.1. Since proper test automation planning requires that the test scripts be designed so that they are not totally invalidated by a simple GUI change (such as renaming or moving a particular control), you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely re-written for version 1.1, and the effort involved may at worst be prohibitive, and at best not accounted for! However, if only disparate sections of the application are changing, or the changes are minor, you should be able to successfully utilize automated tests to regress these areas.
5. Test Integrity - How do you know (measure) whether a test passed or failed? Just because the tool returns a 'pass' does not necessarily mean that the test itself passed. For example, just because no error message appears does not mean that the next step in the script completed successfully. This needs to be taken into account when specifying test script pass/fail criteria (see the sketch following this list).
6. Test Independence - Test independence must be built in so that a failure in the first test case won't cause a domino effect and either prevent, or cause to fail, the rest of the test scripts in that test suite (again, see the sketch following this list). However, in practice this is very difficult to achieve.
7. Debugging or "testing" of the actual test scripts themselves - Time must be allowed for this, and to prove the integrity of the tests themselves.
8. Record & Playback - DO NOT RELY on record and playback as the SOLE means to generate a script. The idea is great: you execute the test manually while the test tool sits in the background and remembers what you do, and it then generates a script that you can run to re-execute the test. It is a great idea that, in practice, rarely works (and proves very little).
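The sketch below illustrates points 5 and 6 above using Python's standard unittest framework: each test builds its own fresh fixture in setUp (independence), and the assertion verifies the actual post-condition rather than settling for the absence of an error (integrity). The RecordStore class is a hypothetical stand-in for the application under test.

    import unittest

    class RecordStore:
        # Hypothetical stand-in for the application under test.
        def __init__(self):
            self._records = {}
        def create(self, key, value):
            self._records[key] = value
        def get(self, key):
            return self._records.get(key)

    class CustomerRecordTests(unittest.TestCase):
        def setUp(self):
            # Independence: every test starts from its own fresh store,
            # so one failure cannot domino into the tests that follow.
            self.store = RecordStore()

        def test_create_customer_record(self):
            self.store.create("C001", {"name": "Acme"})
            # Integrity: verify the record actually exists with the
            # expected contents, not merely that no error was raised.
            self.assertEqual(self.store.get("C001"), {"name": "Acme"})

    if __name__ == "__main__":
        unittest.main()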
Functional Testing - Record-and-playback tools with scripting support aid in automating the functional testing of online applications. Examples: WinRunner, Rational Robot, SilkTest and QA Run; tools like CA-Verify can be used in the mainframe environment.
Test Coverage Analyzer - Reports from the tool provide data on coverage per unit, such as Function, Program, and Application. Example: Rational PureCoverage.
5.0 Common Pitfalls
• The various tools used throughout the development lifecycle did not easily
integrate.
• Test tool training being given late in the process, which results in test engineers
having a lack of tool knowledge
• Testers resisting the tool. It is important to have a Tool Champion who can
advocate the features of the tool in the early stages to avoid resistance
• Reports produced by the tool being useless as the data required to produce the
report was never accumulated in the tool
• Various tool versions being in use resulting in scripts created in one tool not
running in another. One way to prevent this is to ensure that tool upgrades are
centralized and managed by a configuration management team.
• The new tool upgrade not being compatible with the existing system engineering environment. It is preferable to beta test a new tool version before rolling it out in the project
• The tool’s database not allowing for scalability. It is better to pick a tool that
allows for scalability using a robust database. Additionally, it is important to
back up the test database
6.0 Templates
• Test Plan
7.0 References
• Test Strategy Guidelines