Manual Testing: FAQ - 1, FAQ - 2, FAQ - 3
FAQ - 3 Questions
Why is it often hard for organizations to get serious about quality assurance?
Who is responsible for risk management?
Who should decide when software is ready to be released?
What can be done if requirements are changing continuously?
What if the application has functionality that wasn't in the requirements?
How can QA processes be implemented without reducing productivity?
What if an organization is growing so fast that fixed QA processes are
impossible?
Will automated testing tools make testing easier?
What's the best way to choose a test automation tool?
How can it be determined if a test environment is appropriate?
What's the best approach to software test estimation?
FAQ - 1 Answers
4) While all projects will benefit from testing, some projects may not require independent
test staff to succeed.
Which projects may not need independent test staff? The answer depends on the size
and context of the project, the risks, the development methodology, the skill and
experience of the developers, and other factors. For instance, if the project is a short-
term, small, low risk project, with highly experienced programmers utilizing thorough unit
testing or test-first development, then test engineers may not be required for the project
to succeed.
In some cases an IT organization may be too small or new to have a testing staff even if
the situation calls for it. In these circumstances it may be appropriate to instead use
contractors or outsourcing, or adjust the project management and development
approach (by switching to more senior developers and agile test-first development, for
example). Inexperienced managers sometimes gamble on the success of a project by
skipping thorough testing or having programmers do post-development functional
testing of their own work, a decidedly high risk gamble.
For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually
necessary. As in any business, the use of personnel with specialized skills enhances an
organization's ability to be successful in large, complex, or difficult tasks. It allows for
both a) deeper and stronger skills and b) the contribution of differing perspectives. For
example, programmers typically have the perspective of 'what are the technical issues
in making this functionality work?'. A test engineer typically has the perspective of 'what
might go wrong with this functionality, and how can we ensure it meets expectations?'.
A technical person who can be highly effective in approaching tasks from both of those
perspectives is rare, which is why, sooner or later, organizations bring in test specialists.
A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should
be balanced with productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate,
depending on the type of customers and projects. A lot will depend on team
leads or managers, feedback to developers, and ensuring adequate
communications among customers, managers, developers, and testers.
The most value for effort will often be in (a) requirements management
processes, with a goal of clear, complete, testable requirement specifications
embodied in requirements or design documentation, or in 'agile'-type
environments extensive continuous coordination with end-users, (b) design
inspections and code inspections, and (c) post-mortems/retrospectives.
Other possibilities include incremental self-managed team approaches such as
'Kaizen' methods of continuous process improvement, the Deming-Shewhart
Plan-Do-Check-Act cycle, and others.
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term 'IV & V' refers to Independent
Verification and Validation.
13) Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development
project might include end-users, customer acceptance testers, customer contract
officers, customer management, the development organization's
management/accountants/testers/salespeople, future software maintenance engineers,
stockholders, magazine columnists, etc. Each type of 'customer' will have their own
slant on 'quality' - the accounting department might define quality in terms of profits
while an end-user might define quality as user-friendly and bug-free.
14) 'Good code' is code that works, is reasonably bug free, and is readable and
maintainable. Some organizations have coding 'standards' that all developers are
supposed to adhere to, but everyone has different ideas about what's best, or what is
too many or too few rules. There are also various theories and metrics, such as
McCabe Complexity metrics. It should be kept in mind that excessive use of standards
and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', pair
programming, code analysis tools, etc. can be used to check for problems and enforce
standards.
For example, in C/C++ coding there are many typical ideas to consider in setting
rules/standards, though any given rule may or may not apply to a particular situation.
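As a hedged illustration only (these are assumed, commonly cited conventions, not any
particular organization's standard), a small C++ fragment can show the kind of rules such
standards often cover:

    // Illustrative only: the kind of conventions a C++ coding standard might cover.
    #include <string>
    #include <vector>

    // Prefer named constants to 'magic numbers'.
    constexpr int kMaxLoginAttempts = 3;

    // Descriptive names, const-correct parameters, one clear purpose per function.
    bool containsUser(const std::vector<std::string>& userNames, const std::string& name)
    {
        for (const std::string& existing : userNames) {  // prefer range-for over index loops
            if (existing == name) {
                return true;
            }
        }
        return false;  // single, predictable result for the 'not found' case
    }

Whatever the specific rules, peer reviews and code analysis tools of the kind mentioned
above are what make them enforceable in practice.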
15) 'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient
error-handling and status logging capability; and works correctly when implemented.
Good functional design is indicated by an application whose functionality can be traced
back to customer and end-user requirements. (See further discussion of functional and
internal design in 'What's the big deal about requirements?' in Part -2.) For programs
that have a user interface, it's often a good idea to assume that the end user will have
little computer knowledge and may not read a user manual or even the on-line help;
some common rules-of-thumb include:
the program should act in a way that least surprises the user
it should always be evident to the user what can be done next and how to exit
the program shouldn't let the users do something stupid without warning them.
17) The life cycle begins when an application is first conceived and ends when it is no
longer in use. It includes aspects such as initial concept, requirements analysis,
functional design, internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance, updates, retesting, phase-out,
and other aspects.
FAQ - 2 Answers
1) A good test engineer has a 'test to break' attitude, an ability to take the point of view
of the customer, a strong desire for quality, and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers, and an
ability to communicate with both technical (developers) and non-technical (customers,
management) people is useful. Previous software development experience can be
helpful as it provides a deeper understanding of the software development process,
gives the tester an appreciation for the developers' point of view, and reduces the
learning curve in automated test tool programming. Judgement skills are needed to
assess high-risk or critical areas of an application on which to focus testing efforts when
time is limited.
2) The same qualities a good tester has are useful for a QA engineer. Additionally, they
must be able to understand the entire software development process and how it can fit
into the business approach and goals of the organization. Communication skills and the
ability to understand various sides of issues are important. In organizations in the early
stages of implementing QA processes, patience and diplomacy are especially needed.
An ability to find problems as well as to see 'what's missing' is important for inspections
and reviews.
4) Generally, the larger the team/organization, the more useful it will be to stress
documentation, in order to manage and communicate more efficiently. (Note that
documentation may be electronic, not necessarily in printable form, and may be
embedded in code comments, may be embodied in well-written test cases, user stories,
etc.) QA practices may be documented to enhance their repeatability. Specifications,
designs, business rules, configurations, code changes, test plans, test cases, bug
reports, user manuals, etc. may be documented in some form. There would ideally be a
system for easily finding and obtaining information and determining what documentation
will have a particular piece of information. Change management for documentation can
be used where appropriate. For agile software projects, it should be kept in mind that
one of the agile values is "Working software over comprehensive documentation", which
does not mean 'no' documentation. Agile projects tend to stress the short term view of
project needs; documentation often becomes more important in a project's long-term
context.
5) Depending on the project, it may or may not be a 'big deal'. For agile projects,
requirements are expected to change and evolve, and detailed documented
requirements may not be needed. However some requirements, in the form of user
stories or something similar, are useful. For non-agile types of projects detailed
documented requirements are usually needed. (Note that requirements documentation
can be electronic, not necessarily in the form of printable documents, and may be
embedded in code comments, or may be embodied in well-written test cases, wikis,
user stories, etc.) Requirements are the details describing an application's externally-
perceived functionality and properties. Requirements are ideally clear, complete,
reasonably detailed, cohesive, attainable, and testable. A non-testable requirement
would be, for example, 'user-friendly' (too subjective). A more testable requirement
would be something like 'the user must enter their previously-assigned password to
access the application'. Determining and organizing requirements details in a useful and
efficient way can be a difficult effort; different methods and software tools are available
depending on the particular project. Many books are available that describe various
approaches to this task.
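To make the contrast concrete, the following is a minimal sketch (the names and the
stubbed-in login check are hypothetical, not any real application's API) of how the
testable password requirement above reduces to an automatic pass/fail check, whereas
'user-friendly' does not:

    // Sketch: the requirement "the user must enter their previously-assigned
    // password to access the application" expressed as an automatic check.
    // login() is a hypothetical stand-in for the application under test.
    #include <cassert>
    #include <string>

    bool login(const std::string& user, const std::string& password)
    {
        return user == "alice" && password == "correct-password";  // stub behavior
    }

    int main()
    {
        assert(login("alice", "correct-password"));      // assigned password grants access
        assert(!login("alice", "some-other-password"));  // any other password is rejected
        // A requirement like 'user-friendly' offers no expected value to assert on.
        return 0;
    }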
Care should be taken to involve ALL of a project's significant 'customers' in the
requirements process. 'Customers' could be in-house personnel or outside personnel,
and could include end-users, customer acceptance testers, customer contract officers,
customer management, future software maintenance engineers, salespeople, etc.
Anyone who could later derail the success of the project if their expectations aren't met
should be included if possible.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels
of detail. No matter what they are called, some type of documentation with detailed
requirements will be useful to testers in order to properly plan and execute tests.
Without such documentation, there will be no clear-cut way to determine if a software
application is performing correctly.
If testable requirements are not available or are only partially available, useful testing
can still be performed. In this situation test results may be more oriented to providing
information about the state of the software and risk levels, rather than providing pass/fail
results. A relevant testing approach in this situation may include an approach called
'exploratory testing'. Many software projects have a mix of documented testable
requirements, poorly documented requirements, undocumented requirements, and
changing requirements. In such projects a mix of scripted and exploratory testing
approaches may be useful.
'Agile' approaches use methods requiring close interaction and cooperation between
programmers and stakeholders/customers/end-users to iteratively develop
requirements, user stories, etc. In the XP 'test first' approach developers create
automated unit testing code before the application code, and these automated unit tests
essentially embody the requirements.
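As a rough sketch of that idea (the discount rule, the names, and the use of plain asserts
instead of a unit-test framework are all assumptions for illustration), the test is written
first and fails until the production code satisfies it, so the test doubles as an executable
requirement:

    // Test-first sketch: the checks in main() are written before
    // discountedPriceCents() exists and encode the (hypothetical) requirement
    // "orders of 10 or more items get a 5% discount". Prices are in whole cents
    // to keep the arithmetic exact.
    #include <cassert>

    long discountedPriceCents(long unitPriceCents, long quantity)
    {
        long total = unitPriceCents * quantity;
        if (quantity >= 10) {
            total = total * 95 / 100;  // implementation added only after the test existed
        }
        return total;
    }

    int main()
    {
        assert(discountedPriceCents(200, 5) == 1000);   // below the threshold: no discount
        assert(discountedPriceCents(200, 10) == 1900);  // at the threshold: 5% off 2000
        return 0;
    }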
7) A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test plan is
a useful way to think through the efforts needed to validate the acceptability of a
software product. The completed document will help people outside the test group
understand the 'why' and 'how' of product validation. It should be thorough enough to be
useful but not so overly detailed that no one outside the test group will read it. The
following are some of the items that might be included in a test plan, depending on the
particular project:
Title
Identification of software including version/release numbers
Revision history of document including authors, dates, approvals
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements, design documents,
other test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
Outline of data input equivalence classes, boundary value analysis, error
classes (see the boundary-value sketch after this list)
Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
Test environment validity analysis - differences between the test and
production systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version control
Problem tracking and resolution - tools and processes
Project test metrics to be used
Reporting requirements and testing deliverables
Software entrance and exit criteria
Initial sanity testing period and criteria
Test suspension and restart criteria
Personnel allocation
Personnel pre-training needs
Test site/location
Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
Relevant proprietary, classified, security, and licensing issues.
Open issues
Appendix - glossary, acronyms, etc.
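As a minimal sketch of the equivalence-class and boundary-value outline item referenced
above (the age field and its 18-65 limit are hypothetical), the boundary values and the
values just outside them typically form the highest-yield test inputs:

    // Boundary value analysis sketch for a hypothetical input field that the
    // requirements say must accept ages 18 through 65 inclusive.
    #include <cassert>

    bool isValidAge(int age)
    {
        return age >= 18 && age <= 65;  // assumed validation rule under test
    }

    int main()
    {
        assert(!isValidAge(17));  // just below the valid range (invalid equivalence class)
        assert(isValidAge(18));   // lower boundary of the valid class
        assert(isValidAge(65));   // upper boundary of the valid class
        assert(!isValidAge(66));  // just above the valid range (invalid equivalence class)
        return 0;
    }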
8) A test case describes an input, action, or event and an expected response, to
determine if a feature of a software application is working correctly. A test case may
contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results. The level of
detail may vary significantly depending on the organization and project context.
Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking through
the operation of the application. For this reason, it's useful to prepare test cases early in
the development cycle if possible.
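As a rough sketch only (the field names and example content are assumptions; real
templates vary widely by organization and project), a test case record of the kind
described above might look like this:

    // Rough sketch of a test case record containing the particulars mentioned above.
    #include <string>
    #include <vector>

    struct TestCase {
        std::string id;                  // test case identifier
        std::string name;                // test case name
        std::string objective;           // what the test is intended to verify
        std::string setup;               // test conditions / setup
        std::string inputData;           // input data requirements
        std::vector<std::string> steps;  // ordered steps to perform
        std::string expectedResult;      // expected outcome
    };

    // Hypothetical example instance.
    const TestCase loginLockout = {
        "TC-042",
        "Login lockout after repeated failures",
        "Verify the account is locked after three failed login attempts",
        "Test user 'alice' exists with a known password",
        "User name 'alice'; three incorrect passwords",
        { "Attempt to log in with a wrong password three times",
          "Attempt to log in with the correct password" },
        "The fourth attempt is rejected and a lockout message is displayed"
    };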
9) The bug needs to be communicated and assigned to developers that can fix it. After
the problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these processes. A
variety of commercial problem-tracking/management software tools are available. The
following are items to consider in the tracking process:
Complete information such that developers can understand the bug, get an
idea of its severity, and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
One-line bug description
Full bug description
Description of steps needed to reproduce the bug if not covered by a test case
or if the developer doesn't have easy access to the test case/test script/test tool
Names and/or descriptions of file/data/messages/etc. used in test
File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name
Test date
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
Retest results
Regression testing requirements
Tester responsible for regression tests
Regression testing results
11) The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on
critical bugs. Since this type of problem can severely affect schedules, and indicates
deeper problems in the software development process (such as insufficient unit testing
or insufficient integration testing, poor design, improper build or release procedures,
etc.) managers should be notified, and provided with some documentation as evidence
of the problem.
13) Use risk analysis, along with discussion with project stakeholders, to determine
where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. This requires judgement
skills, common sense, and experience. (If warranted, formal methods are also
available.) Considerations can include:
14) Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously in 'What if there isn't enough time for thorough
testing?' apply. The tester might then do ad hoc or exploratory testing, or write up a
limited test plan based on the risk analysis.
15) Client/server applications can be highly complex due to the multiple dependencies
among clients, data communications, hardware, and servers, especially in multi-tier
systems. Thus testing requirements can be extensive. When time is limited (as it usually
is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application
limitations and capabilities. There are commercial and open source tools to assist with
such testing.
16) Web sites are essentially client/server applications - with web servers and 'browser'
clients. Consideration should be given to the interactions between html pages, web
services, encrypted communications, Internet connections, firewalls, applications that
run in web pages (such as JavaScript, Flash, and other plug-in applications), the wide variety
of applications that could run on the server side, etc. Additionally, there are a wide
variety of servers and browsers, mobile platforms, various versions of each, small but
sometimes significant differences between them, variations in connection speeds,
rapidly changing technologies, and multiple standards and protocols. The end result is
that testing for web sites can become a major ongoing effort. Other considerations
might include:
What are the expected loads on the server, and what kind of performance is
required under such loads (such as web server response time, database query
response times). What kinds of tools will be needed for performance testing
(such as web load testing tools, other tools already in house that can be adapted,
load generation appliances, etc.)?
Who is the target audience? What kind and version of browsers will they be
using, and how extensively should testing be for these variations? What kind of
connection speeds will they be using? Are they intra-organization (thus with
likely high connection speeds and similar browsers) or Internet-wide (thus with a
wider variety of connection speeds and browser types)?
What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should Flash, applets, etc. load and run)?
Will downtime for server and content maintenance/upgrades be allowed? How
much?
What kinds of security (firewalls, encryption, passwords, functionality, etc.) will
be required and what is it expected to do? How can it be tested?
What internationalization/localization/language requirements are there, and how
are they to be verified?
How reliable are the site's Internet connections required to be? And how does
that affect backup system or redundant connection requirements and testing?
What processes will be required to manage updates to the web site's content,
and what are the requirements for maintaining, tracking, and controlling page
content, graphics, links, etc.?
Which HTML and related specifications will be adhered to? How strictly? What
variations will be allowed for targeted browsers?
Will there be any standards or requirements for page appearance and/or
graphics, 508 compliance, etc. throughout a site or parts of a site?
Will there be any development practices/standards utilized for web page
components and identifiers, which can significantly impact test automation?
How will internal and external links be validated and updated? How often?
Can testing be done on the production system, or will a separate test system
be required? How are browser caching, variations in browser option settings,
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
How extensive or customized are the server logging and reporting
requirements; are they considered an integral part of the system and do they
require testing?
How are Flash, applets, JavaScript, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
FAQ - 3 Answers
1) Part of the difficulty is that many organizations can determine who is skilled at fixing
problems, and then reward such people. However, determining who has a talent for
preventing problems in the first place, and figuring out how to incentivize such behavior,
is a significant challenge.
2) Risk management means the actions taken to avoid things going wrong on a
software development project, things that might negatively impact the scope, quality,
timeliness, or cost of a project. This is, of course, a shared responsibility among
everyone involved in a project. However, there needs to be a 'buck stops here' person
who can consider the relevant tradeoffs when decisions are required, and who can
ensure that everyone is handling their risk management responsibilities.
It is not unusual for the term 'risk management' to never come up at all in a software
organization or project. If it does come up, it's often assumed to be the responsibility of
QA or test personnel. Or there may be a 'risks' or 'issues' section of a project, QA, or
test plan, and it's assumed that this means that risk management has taken place.
The issues here are similar to those for the question "Who should decide when software
is ready to be released?" It's generally NOT a good idea for a test lead, test manager, or
QA manager to be the 'buck stops here' person for risk management. Typically QA/Test
personnel or managers are not managers of developers, analysts, designers and many
other project personnel, and so it would be difficult for them to ensure that everyone on
a project is handling their risk management responsibilities. Additionally, knowledge of
all the considerations that go into risk management, mitigation, and tradeoff decisions is
rarely the province of QA/Test personnel or managers. Based on these factors, the
project manager is usually the most appropriate 'buck stops here' risk management
person. QA/Test personnel can, however, provide input to the project manager. Such
input could include analysis of quality-related risks, risk monitoring, process adherence
reporting, defect reporting, and other information.
3) In many projects this depends on the release criteria for the software. Such criteria
are often in turn based on the decision to end testing, discussed in Part 2, "How can
it be known when to stop testing?" Unfortunately, for any but the simplest software
projects, it is nearly impossible to adequately specify useful criteria without a significant
amount of assumptions and subjectivity. For example, if the release criteria are based on
passing a certain set of tests, there is likely an assumption that the tests have
adequately addressed all appropriate software risks. For most software projects, this
would of course be impossible without enormous expense, so this assumption would be
a large leap of faith. Additionally, since most software projects involve a balance of
quality, timeliness, and cost, testing alone cannot address how to balance all three of
these competing factors when release decisions are needed.
A typical approach is for a lead tester or QA or Test manager to be the release decision
maker. This again involves significant assumptions - such as an assumption that the
test manager understands the spectrum of considerations that are important in
determining whether software quality is 'sufficient' for release, or the assumption that
quality does not have to be balanced with timeliness and cost. In many organizations,
'sufficient quality' is not well defined, is extremely subjective, may have never been
usefully discussed, or may vary from project to project or even from day to day.
For these reasons, it's generally not a good idea for a test lead, test manager, or QA
manager to decide when software is ready to be released. Their responsibility should be
to provide input to the appropriate person or group that makes a release decision. For
small organizations and projects that person could be a product manager, a project
manager, or similar manager. For larger organizations and projects, release decisions
might be made by a committee of personnel with sufficient collective knowledge of the
relevant considerations.
4) This is a common problem for organizations where there are expectations that
requirements can be pre-determined and remain stable. If these expectations are
reasonable, here are some approaches:
5) It may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it could indicate deeper problems in the software development
process. If the functionality isn't necessary to the purpose of the application, it should be
removed, as it may have unknown impacts or dependencies that were not taken into
account by the designer or the customer. (If the functionality is minor and low risk then
no action may be necessary.) If not removed, information will be needed to determine
risks and to determine any added testing needs or regression testing needs.
Management should be made aware of any significant added risks as a result of the
unexpected functionality.
This problem is a standard aspect of projects that include COTS (Commercial Off-The-
Shelf) software or modified COTS software. The COTS part of the project will typically
have a large amount of functionality that is not included in project requirements, or may
be simply undetermined. Depending on the situation, it may be appropriate to perform
in-depth analysis of the COTS software and work closely with the end user to determine
which pre-existing COTS functionality is important and which functionality may interact
with or be affected by the non-COTS aspects of the project. A significant regression
testing effort may be needed (again, depending on the situation), and automated
regression testing may be useful.
8) Possibly. For small projects, the time needed to learn and implement them may
not be worth it unless personnel are already familiar with the tools. For larger projects,
or on-going long-term projects they can be valuable.
For example, web test tools can be used to check that links are valid, HTML code usage
is correct, client-side and server-side programs work, and a web site's interactions are
secure.
Test automation is, of course, possible without COTS tools. Many successful
automation efforts utilize custom automation software that is targeted for specific
projects, specific software applications, or a specific organization's software
development environment. In test-driven agile software development environments,
automated tests are often built into the software during (or preceding) coding of the
application.
9) It's easy to get caught up in enthusiasm for the 'silver bullet' of test automation, where
the dream is that a single mouse click can initiate thorough unattended testing of an
entire software application, bugs will be automatically reported, and easy-to-understand
summary reports will be waiting in the manager's in-box in the morning.
Although that may in fact be possible in some situations, it is not the way things
generally play out.
In manual testing, the test engineer exercises software functionality to determine if the
software is behaving in an expected way. This means that the tester must be able to
judge what the expected outcome of a test should be, such as expected data outputs,
screen messages, changes in the appearance of a User Interface, XML files, database
changes, etc. In an automated test, the computer does not have human-like 'judgement'
capabilities to determine whether or not a test outcome was correct. This means there
must be a mechanism by which the computer can do an automatic comparison between
actual and expected results for every automated test scenario and unambiguously make
a pass or fail determination. This factor may require a significant change in the entire
approach to testing, since in manual testing a human is involved and can apply
judgement on the fly to outcomes that were not precisely anticipated.
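A minimal sketch of such a comparison mechanism follows (the application output, the
expected value, and the function name are hypothetical): the automated test carries its
own precisely stated expected result and reduces the check to an unambiguous pass/fail.

    // Sketch of an automated actual-vs-expected comparison. The point is that the
    // expected result must be stated precisely enough for the computer to decide
    // pass or fail without human judgement.
    #include <iostream>
    #include <string>

    // Hypothetical stand-in for running the application under test and capturing output.
    std::string runReportGenerator()
    {
        return "total=1250;status=OK";
    }

    int main()
    {
        const std::string expected = "total=1250;status=OK";
        const std::string actual = runReportGenerator();

        if (actual == expected) {
            std::cout << "PASS\n";
            return 0;
        }
        std::cout << "FAIL: expected '" << expected << "' but got '" << actual << "'\n";
        return 1;
    }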
For those new to test automation, it might be a good idea to do some reading or training
first. There are a variety of ways to go about doing this; some example approaches are:
Obtain some test tool trial versions or low cost or open source test tools and
experiment with them
Attend software testing conferences or training courses related to test
automation
As in anything else, proper planning and analysis are critical to success in choosing and
utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating
testing' is not useful; useful purposes might include: testing more thoroughly, testing in
ways that were not previously feasible via manual methods (such as load testing),
testing faster, or reducing excessively tedious manual testing. Automated testing rarely
enables savings in the cost of testing, although it may result in software lifecycle
savings (or increased sales) just as with any other quality-related initiative.
With the proper background and understanding of test automation, the following
considerations can be helpful in choosing a test tool (automated testing will not
necessarily resolve them, they are only considerations for automation potential):
Taking into account the testing needs determined by analysis of these considerations
and other appropriate factors, the types of desired test tools can be determined. For
each type of test tool (such as functional test tool, load test tool, etc.) the choices can be
further narrowed based on the characteristics of the software application. The relevant
characteristics will depend, of course, on the situation and the type of test tool and other
factors. Such characteristics could include the operating system, GUI components,
development languages, web server type, etc. Other factors affecting a choice could
include experience level and capabilities of test personnel, advantages/disadvantages
in developing a custom automated test tool, tool costs, tool quality and ease of use,
usefulness of the tool on other projects, etc.
Once a short list of potential test tools is selected, several can be utilized on a trial basis
for a final determination. Any expensive test tool should be thoroughly analyzed during
its trial period to ensure that it is appropriate and that its capabilities and limitations are
well understood. This may require significant time or training, but the alternative is to
take a major risk of a mistaken investment.
10) This is a difficult question in that it typically involves tradeoffs between 'better' test
environments and cost. The ultimate situation would be a collection of test environments
that mimic exactly all possible hardware, software, network, data, and usage
characteristics of the expected live environments in which the software will be used. For
many software applications, this would involve a nearly infinite number of variations,
and would clearly be impossible. And for new software applications, it may also be
impossible to predict all the variations in environments in which the application will run.
For very large, complex systems, duplication of a 'live' type of environment may be
prohibitively expensive.
In reality judgements must be made as to which characteristics of a software application
environment are important, and test environments can be selected on that basis after
taking into account time, budget, and logistical constraints. Such judgements are
preferably made by those who have the most appropriate technical knowledge and
experience, along with an understanding of risks and constraints.
For smaller or low risk projects, an informal approach is common, but for larger or
higher risk projects (in terms of money, property, or lives) a more formalized process
involving multiple personnel and significant effort and expense may be appropriate.
In some situations it may be possible to mitigate the need for maintenance of large
numbers of varied test environments. One approach might be to coordinate internal
testing with beta testing efforts. Another possible mitigation approach is to provide built-
in automated tests that run automatically upon installation of the application by end-
users. These tests might then automatically report back information, via the internet,
about the application environment and problems encountered. Another possibility is the
use of virtual environments instead of physical test environments, using such tools as
VMWare or VirtualBox.
11) There is no simple answer for this. The 'best approach' is highly dependent on the
particular organization and project and the experience of the personnel involved.
For example, given two software projects of similar complexity and size, the appropriate
test effort for one project might be very large if it was for life-critical medical equipment
software, but might be much smaller for the other project if it was for a low-cost
computer game. A test estimation approach that only considered size and complexity
might be appropriate for one project but not for the other.
Metrics-Based Approach:
A useful approach is to track past experience of an organization's various projects and
the associated test effort that worked well for projects. Once there is a set of data
covering characteristics for a reasonable number of projects, then this 'past experience'
information can be used for future test project planning. (Determining and collecting
useful project metrics over time can be an extremely difficult task.) For each particular
new project, the 'expected' required test time can be adjusted based on whatever
metrics or other information is available, such as function point count, number of
external system interfaces, unit testing done by developers, risk levels of the project,
etc. In the end, this is essentially 'judgement based on documented experience', and is
not easy to do successfully.
Iterative Approach:
In this approach for large test efforts, an initial rough testing estimate is made. Once
testing begins, a more refined estimate is made after a small percentage (e.g., 1%) of the
first estimate's work is done. At this point testers have obtained additional test project
knowledge and a better understanding of issues, general software quality, and risk. Test
plans and schedules can be refactored if necessary and a new estimate provided. Then
a yet-more-refined estimate is made after a somewhat larger percentage (e.g., 2%) of the
new work estimate is done. Repeat the cycle as necessary/appropriate.
Percentage-of-Development Approach:
Some organizations utilize a quick estimation method for testing based on the estimated
programming effort. For example, if a project is estimated to require 1000 hours of
programming effort, and the organization normally finds that a 40% ratio for testing is
appropriate, then an estimate of 400 hours for testing would be used. This approach
may or may not be useful depending on the project-to-project variations in risk,
personnel, types of applications, levels of complexity, etc.
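As a minimal sketch of the arithmetic (the 40% ratio is only the example figure used
above; a real ratio would come from an organization's own project history):

    // Percentage-of-development estimation using the example figures above:
    // 1000 hours of estimated programming effort at a 40% testing ratio.
    #include <iostream>

    int main()
    {
        const double developmentHours = 1000.0;  // estimated programming effort
        const double testingRatio = 0.40;        // organization's historical ratio (example)
        std::cout << "Estimated testing effort: "
                  << developmentHours * testingRatio << " hours\n";  // prints 400
        return 0;
    }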
Successful test estimation is a challenge for most organizations, since few can
accurately estimate software project development efforts, much less the testing effort of
a project. It is also difficult to attempt testing estimates without first having detailed
information about a project, including detailed requirements, the organization's
experience with similar projects in the past, and an understanding of what should be
included in a 'testing' estimation for a project (functional testing? unit testing? reviews?
inspections? load testing? security testing?)
For an interesting view of the problem of test estimation, see the comments on Martin
Fowler's web site indicating that, for many large systems, "testing and debugging is
impossible to schedule".