What is the difference between black-box and white-box testing?
Black-box and white-box are test design methods. Black-box test design treats the system as a "black-box", so it
doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on
testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-
box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal
knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box
and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and
"structural". Behavioral test design is slightly different from black-box test design because the use of internal
knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test
design method. One has to use a mixture of different methods so that testing isn't hindered by the limitations of a
particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about
boxes altogether.
It is important to understand that these methods are used during the test design phase, and their influence is hard
to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can
use any test design methods. Unit testing is usually associated with structural test design, but this is because
testers usually don't have well-defined requirements at the unit level to validate.
Note that the definitions of unit, component, and integration testing are recursive:
Unit: The smallest compilable component. A unit typically is the work of one programmer (at least in principle).
As defined, it does not include any called sub-components (for procedural languages) or communicating
components in general.
Unit Testing: In unit testing, called components (or communicating components) are replaced with stubs,
simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The
unit is tested in isolation.
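As a minimal sketch of this idea in Python (all names here are invented for illustration, not taken from any real system), a called component can be replaced with a stub so the unit runs in isolation:

    import unittest
    from unittest.mock import Mock

    # Hypothetical unit under test: it calls a separate tax-service component.
    def compute_total(price, tax_service):
        """Return price plus tax, delegating the tax lookup to a called component."""
        return price + tax_service.tax_for(price)

    class ComputeTotalUnitTest(unittest.TestCase):
        def test_total_with_stubbed_tax_service(self):
            # The called component is replaced with a stub, so the unit
            # is tested in isolation as described above.
            stub = Mock()
            stub.tax_for.return_value = 5.0
            self.assertEqual(compute_total(100.0, stub), 105.0)
            stub.tax_for.assert_called_once_with(100.0)

    if __name__ == "__main__":
        unittest.main()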
Component: A unit is a component; the integration of one or more components is also a component.
Note: The reason for "one or more" as contrasted to "two or more" is to allow for components that call themselves
recursively.
Component Testing: Same as unit testing except that all stubs and simulators are replaced with the real thing.
Thus, components A and B are integrated to create a new, larger component (A,B). Note that this does not conflict
with the idea of incremental integration; it just means that A is a big component and B, the component added, is a
small one.
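Continuing the sketch above (still with invented names), component testing of the integrated pair (A,B) simply swaps the stub for the real called component:

    import unittest

    # The real called component B, replacing the stub used in the unit test.
    class FlatTaxService:
        def tax_for(self, price):
            return price * 0.05  # a fixed 5% rate, purely for illustration

    # Component A, unchanged from the unit-test sketch.
    def compute_total(price, tax_service):
        return price + tax_service.tax_for(price)

    class ComputeTotalComponentTest(unittest.TestCase):
        def test_total_with_real_tax_service(self):
            # A and B are integrated into the larger component (A,B);
            # no stubs or simulators remain.
            self.assertAlmostEqual(compute_total(100.0, FlatTaxService()), 105.0)

    if __name__ == "__main__":
        unittest.main()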
Integration Tests (after Leung and White) for procedural languages. This is easily generalized for OO languages by
using the equivalent constructs for message passing. In the following, the word "call" is to be understood in the
most general sense of a data flow and is not restricted to just formal subroutine calls and returns.
One of the most common but unfortunate misuses of terminology is treating "load testing" and "stress testing" as
synonymous. The consequence of this ignorant semantic abuse is usually that the system is neither properly "load
tested" nor subjected to a meaningful stress test.
Stress testing is subjecting a system to an unreasonable load while denying it the resources
(e.g., RAM, disk, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a
system to the breaking point in order to find bugs that will make that break potentially harmful.
The system is not expected to process the overload without adequate resources, but to behave
(e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes
discovered under stress testing may or may not be repaired depending on the application, the
failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is
often deliberately distorted so as to force the system into resource depletion.
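A minimal stress-test sketch in Python, assuming a toy "system" whose tiny bounded queue stands in for scarce resources (all names and sizes are invented): the deliberately distorted load forces resource depletion, and the test checks that the overload is handled decently, with no accepted transaction lost or corrupted.

    import queue
    import threading

    work = queue.Queue(maxsize=10)   # deliberately scarce resource
    processed, rejected = [], 0

    def submit(item):
        global rejected
        try:
            work.put_nowait(item)    # fails fast when the resource is exhausted
        except queue.Full:
            rejected += 1            # decent behavior: reject, don't corrupt

    def worker():
        while True:
            item = work.get()
            if item is None:
                break
            processed.append(item)
            work.task_done()

    t = threading.Thread(target=worker)
    t.start()

    # Deliberately distorted load: far more transactions than can be buffered.
    for i in range(100_000):
        submit(i)
    work.join()
    work.put(None)
    t.join()

    # The system "fails in a decent manner" if every transaction was either
    # processed exactly once or cleanly rejected, in order, with none lost.
    assert len(processed) + rejected == 100_000
    assert processed == sorted(processed)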
Load testing is subjecting a system to a statistically representative (usually) load. The two main
reasons for using such loads are in support of software reliability testing and in performance
testing. The term "load testing" by itself is too vague and imprecise to warrant use. For example,
do you mean "representative load," "overload," "high load," etc.? In performance testing, load is
varied from a minimum (zero) to the maximum level the system can sustain without running out
of resources or having transactions suffer (application-specific) excessive delay.
A third use of the term is as a test whose objective is to determine the maximum sustainable
load the system can handle. In this usage, "load testing" is merely testing at the highest
transaction arrival rate in performance testing.
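A sketch of that usage in Python, with the system under test simulated by a sleep whose delay grows with load (the degradation curve and the 5 ms delay threshold are invented for illustration; a real performance test would drive the actual application):

    import time

    def handle_transaction(load):
        # Simulated response time that degrades as the load rises.
        time.sleep(0.001 + 0.00003 * load)

    MAX_ACCEPTABLE = 0.005   # application-specific "excessive delay" threshold

    # Vary the load from a minimum toward the maximum sustainable level.
    for load in (10, 50, 100, 200):
        start = time.perf_counter()
        for _ in range(load):
            handle_transaction(load)
        per_txn = (time.perf_counter() - start) / load
        print(f"load={load:4d}  avg={per_txn * 1000:.2f} ms/txn")
        if per_txn > MAX_ACCEPTABLE:
            print(f"maximum sustainable load reached near {load} transactions")
            break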
Is QA more a preventive activity, ensuring quality in the company and therefore the product,
rather than just testing the product for software bugs?
Software QA involves the entire software development PROCESS - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'.
Testing involves operation of a system or application under controlled conditions and evaluating
the results (e.g., 'if the user is in interface A of the application while using hardware B, and does
C, then D should happen'). The controlled conditions should include both normal and abnormal
conditions. Testing should intentionally attempt to make things go wrong to determine if things
happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes
they're the combined responsibility of one group or individual. Also common are project teams
that include a mix of testers and developers who work closely together, with overall QA
processes monitored by project managers. It will depend on what best fits an organization's size
and business structure.
8. What are some recent major computer system failures caused by software bugs?
In one case attributed to aftereffects of the Y2K bug, a company found that many of their newer trains would
not run due to their inability to recognize the date '31/12/2000'; the trains were started
by altering the control system's date settings.
In another case involving a large mortgage lender, the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't
work.
9. Why is it often hard for management to get serious about quality assurance?
Scheduling of software projects is difficult at best, often requiring a lot of guesswork. When
deadlines loom and the crunch comes, mistakes will be made, and people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
Software development tools (visual tools, class libraries, compilers, scripting tools, etc.)
often introduce their own bugs or are poorly documented, resulting in added bugs.
For organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.
For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists, walkthroughs,
and inspection meetings. Validation typically involves actual testing and takes place after
verifications are completed. The term 'IV&V' refers to Independent Verification and Validation.
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything. Attendees should prepare for this type of meeting by
reading through the document; most problems will be found during this preparation. The result of the
inspection meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost effective methods of ensuring quality. Employees
who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often
hard for management to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug prevention
is far more cost-effective than bug detection.
The following are common kinds of testing.
Unit testing: the most 'micro' scale of testing, to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code. Not always easily done unless the application has
a well-designed architecture with tight code; may require developing test driver modules
or test harnesses.
Functional testing: black-box testing geared to the functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing).
Sanity testing: typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.
Acceptance testing: final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
Load testing: testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades
or fails.
Stress testing: term often used interchangeably with 'load' and 'performance' testing. Also
used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
Performance testing: term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
Usability testing: testing for 'user-friendliness'. Clearly this is subjective and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
Security testing: testing how well the system protects against unauthorized internal or
external access, willful damage, etc.; may require sophisticated testing techniques.
Exploratory testing: often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
User acceptance testing: determining if software is satisfactory to an end-user or
customer.
Comparison testing: comparing software weaknesses and strengths to competing
products.
Alpha testing: testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users
or others, not by programmers or testers.
Beta testing: testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
Mutation testing: a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected (a minimal sketch follows this list).
Proper implementation requires large computational resources.
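A toy mutation-testing sketch in Python (the function, the planted 'bug', and the test data are all invented for illustration): a deliberate code change is made, and the original test cases are rerun to see whether they detect it.

    def original(a, b):
        return a + b

    def mutant(a, b):
        return a - b   # the deliberately introduced code change ('bug')

    # The original test data/cases.
    test_cases = [((2, 3), 5), ((0, 0), 0), ((1, -1), 0)]

    def kills(candidate):
        """True if at least one test case fails against the candidate."""
        return any(candidate(*args) != expected for args, expected in test_cases)

    assert not kills(original)   # the test data passes the real code...
    assert kills(mutant)         # ...and detects ('kills') the mutant

Note that the case ((0, 0), 0) alone would not kill this mutant; exposing such weaknesses in a test set is exactly the point of mutation testing.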
Among common problems in the software development process: inadequate testing, where no one
will know whether or not the program is any good until the customer complains or systems crash;
and 'featuritis', requests to pile on new features after development is underway, which is extremely
common.
Among common solutions:
Solid requirements: clear, complete, detailed, cohesive, attainable, testable
requirements that are agreed to by all players. Use prototypes to help nail down
requirements.
Stick to initial requirements as much as possible: be prepared to defend against excessive
changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in
related schedule changes. If possible, use rapid prototyping during the design phase
so that customers can see what to expect. This will provide them a higher comfort
level with their requirements decisions and minimize changes later on.
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the scheme
of things. A wide-angle view of the 'customers' of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer management, the
development organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will
have their own slant on 'quality' - the accounting department might define quality in terms of
profits while an end-user might define quality as user-friendly and bug-free.
Peer reviews, buddy checks, code analysis tools, etc. can be used to check for problems and enforce
standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these
may or may not apply to a particular situation:
20. What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'.
Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and
status logging capability; and works correctly when implemented. Good functional design is
indicated by an application whose functionality can be traced back to customer and end-user
requirements. For programs that have a user interface, it's often a good idea to assume that the
end user will have little computer knowledge and may not read a user manual or even the on-
line help; some common rules-of-thumb include:
At the highest CMM level, the focus is on 'continuous process improvement'; the
impact of new processes and technologies can be
predicted and effectively implemented when required.
ISO 9001:2000 (which replaces the previous standard of 1994) concerns quality systems that are
assessed by outside auditors, and it applies to many kinds of production and
manufacturing organizations, not just software. It covers documentation, design,
development, production, testing, installation, servicing, and other processes. The full set
of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements;
(b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary;
(c) Q9004-2000 - Quality Management Systems: Guidelines for Performance
Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization,
and certification is typically good for about 3 years, after which a complete reassessment
is required. Note that ISO certification does not necessarily indicate quality products; it
indicates only that documented processes are followed.
ANSI, the 'American National Standards Institute', is the primary industrial standards body in
the U.S.; it publishes some software-related standards in conjunction with the IEEE and
ASQ (American Society for Quality).
The life cycle begins when an application is first conceived and ends when it is no longer in use.
It includes aspects such as initial concept, requirements analysis, functional design, internal
design, documentation planning, test planning, coding, document preparation, integration,
testing, maintenance, updates, retesting, phase-out, and other aspects.
Whether automated testing tools are worthwhile depends on the project: for small projects, the
time needed to learn and implement them may not be worth it. For larger projects, or ongoing
long-term projects, they can be valuable.
A common type of automated tool is the 'record/playback' type. For example, a tester
could click through all combinations of menu choices, dialog box choices, buttons, etc.
in an application GUI and have them 'recorded' and the results logged by a tool. The
'recording' is typically in the form of text based on a scripting language that is
interpretable by the testing tool. If new buttons are added, or some underlying code in
the application is changed, etc., the application can then be retested by just 'playing
back' the 'recorded' actions, and comparing the logging results to check effects of the
changes. The problem with such tools is that if there are continual changes to the
system being tested, the 'recordings' may have to be changed so much that it
becomes very time-consuming to continuously update the scripts. Additionally,
interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that
there are record/playback tools for text-based interfaces also, and for all types of
platforms.
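A minimal sketch of the record/playback idea in Python (the actions, targets, and responses are invented; real tools drive an actual GUI and use their own scripting languages): recorded actions are kept as a text-based script, replayed against the application, and the logged results compared against the recording.

    RECORDING = [
        ("click", "File>New",  "new document created"),
        ("type",  "hello",     "text inserted: hello"),
        ("click", "File>Save", "document saved"),
    ]

    def application(action, target):
        # Stand-in for the application under test.
        responses = {
            ("click", "File>New"):  "new document created",
            ("type",  "hello"):     "text inserted: hello",
            ("click", "File>Save"): "document saved",
        }
        return responses.get((action, target), "unknown action")

    # 'Playing back' the recorded actions and comparing the logged results.
    for action, target, expected in RECORDING:
        actual = application(action, target)
        status = "OK  " if actual == expected else "FAIL"
        print(f"{status} {action} {target!r}: {actual}")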
Web test tools: to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site's interactions are secure.
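One of those tasks, checking that links are valid, can be sketched with only the Python standard library (the URL is a placeholder; a production link checker would also need timeouts, retries, and politeness toward the server):

    from html.parser import HTMLParser
    from urllib.error import HTTPError, URLError
    from urllib.parse import urljoin
    from urllib.request import Request, urlopen

    class LinkCollector(HTMLParser):
        """Collects the href targets of all <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        html = urlopen(Request(page_url)).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(html)
        for link in collector.links:
            url = urljoin(page_url, link)
            try:
                status = urlopen(Request(url, method="HEAD")).status
                print(f"{status} {url}")
            except (HTTPError, URLError) as err:
                print(f"BROKEN {url}: {err}")

    check_links("http://example.com/")   # placeholder URL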
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgment skills are needed
to assess high-risk areas of an application on which to focus testing efforts when time is limited.
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be
able to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand
various sides of issues are important. In organizations in the early stages of implementing QA
processes, patience and diplomacy are especially needed. An ability to find problems as well as
to see 'what's missing' is important for inspections and reviews.
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should
be documented such that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports, user
manuals, etc. should all be documented. There should ideally be a system for easily finding and
obtaining documents and determining what documentation will have a particular piece of
information. Change management for documentation should be used if possible.
One of the most reliable methods of ensuring problems, or failure, in a complex software project
is to have poorly documented requirements specifications. Requirements are the details
describing an application's externally-perceived functionality and properties. Requirements
should be clear, complete, reasonably detailed, cohesive, attainable, and testable.
In some organizations requirements may end up in high level project plans, functional
specification documents, in design documents, or in other documents at various levels of detail.
No matter what they are called, some type of documentation with detailed requirements will be
needed by testers in order to properly plan and execute tests. Without such documentation,
there will be no clear-cut way to determine if a software application is performing correctly.
29. What steps are needed to develop and run software tests?
obtain requirements, functional design, and internal design specifications and other
necessary documents
determine project-related personnel and their responsibilities, reporting requirements, and
required standards and processes (such as release processes, change processes, etc.)
maintain and update test plans, test cases, test environment, and testware through the life
cycle
A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it. The following are some of the items that might be included in a test plan,
depending on the particular project:
relevant related document list, such as requirements, design documents, other test
plans, etc.
outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
discussion of any specialized software or hardware tools that will be needed by testers to
help track the cause or source of bugs
The bug needs to be communicated and assigned to developers that can fix it. After the problem
is resolved, fixes should be re-tested, and determinations made regarding requirements for
regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking
system is in place, it should encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available. The following are items to consider in the
tracking process:
description of steps needed to reproduce the bug, if not covered by a test case or if the
developer doesn't have easy access to the test case/test script/test tool
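A sketch of the kind of record such a tracking process might keep for each bug, expressed as a Python data structure (the field names and workflow states are illustrative, not from any particular tool):

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        bug_id: str
        application: str            # application name/identifier and version
        one_line_description: str
        steps_to_reproduce: str     # needed if the developer lacks the test script
        severity: str               # e.g. critical / major / minor
        status: str = "open"        # open -> assigned -> fixed -> retested -> closed
        assigned_to: str = ""
        retest_results: list = field(default_factory=list)

    bug = BugReport(
        bug_id="BR-1",
        application="inventory-app 2.3",
        one_line_description="saving an empty order crashes the client",
        steps_to_reproduce="File>New Order, then File>Save with no items",
        severity="critical",
    )
    bug.assigned_to = "developer A"      # communicated and assigned for fixing
    bug.status = "assigned"
    print(bug)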
Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.) managers should be notified, and provided
with some documentation as evidence of the problem.
This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors
in deciding when to stop are:
37. What can be done if requirements are changing continuously?
Work with the project's stakeholders early on to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance, if
possible.
It's helpful if the code is well-commented and well-documented; this makes changes easier for the
developers.
The project's initial schedule should allow for some extra time commensurate with the
possibility of changes.
38. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then
do ad hoc testing, or write up a limited test plan based on the risk analysis.
39. What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as
a result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.
By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem
detection, panics and burn-out will decrease, and there will be improved focus and less wasted
effort. At the same time, attempts should be made to keep processes simple and efficient,
minimize paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings, and promote training as part of the QA process. However,
no one - especially talented technical types - likes rules or bureaucracy, and in the short run
things may slow down a bit. A typical scenario would be that more days of planning and
development will be needed, but less time will be required for late-night bug-fixing and calming
of irate customers.
41. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:
hire good people; 'ruthlessly prioritize' quality issues and maintain focus on the customer; and
make sure everyone in the organization is clear on what 'quality' means to the customer.
Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive.
When time is limited (as it usually is) the focus should be on integration and system testing.
Additionally, load/stress/performance testing may be useful in determining client/server
application limitations and capabilities. There are commercial tools to assist with such testing.
Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between HTML pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript,
plug-in applications), and applications that run on the server side (such as CGI scripts, database
interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a
wide variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include:
Well-engineered object-oriented design can make it easier to trace from code to internal design
to functional design to requirements. While there will be little effect on black-box testing (where
an understanding of the internal design of the application is unnecessary), white-box testing can
be oriented to the application's objects. If the application was well-designed this can simplify test
design.
45. What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck who described the approach in
his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of
Extreme Programming. Programmers are expected to write unit and functional test code first -
before the application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help develop
scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are
modified and rerun for each of the frequent development iterations. QA and test personnel are
also required to be an integral part of the project team. Detailed requirements documentation is
not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.
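A minimal sketch of that test-first style in Python (the price-rounding rule is invented for illustration): the unit test below is written before the function exists, fails, and the function is then implemented until it passes.

    import unittest

    # Written first, before the code it tests:
    class RoundPriceTest(unittest.TestCase):
        def test_rounds_to_nearest_cent(self):
            self.assertEqual(round_price(1.005), 1.01)
            self.assertEqual(round_price(2.994), 2.99)

    # Written second, to make the failing test pass:
    def round_price(value):
        # A tiny epsilon avoids float round-half-to-even surprises.
        return round(value + 1e-9, 2)

    if __name__ == "__main__":
        unittest.main()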
Introduction
This document takes you on a whirlwind tour of common software errors. It is an excellent
aid for software testing: it helps you identify errors systematically, increases the efficiency
of software testing, and improves testing productivity. For more information, please refer to
Testing Computer Software (Wiley).
Types of Errors
. Error Handling
. Calculation Errors
. Race Conditions
. Load Conditions
. Hardware
. Testing Errors
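As one concrete illustration of the 'Race Conditions' category, here is a classic unsynchronized read-modify-write bug in Python (whether updates are actually lost on a given run depends on the interpreter and timing, which is exactly what makes such bugs hard to test for):

    import threading

    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            counter += 1   # not atomic: load, add, and store can interleave

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; under the race, the total may come up short.
    print("counter =", counter)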