
1. What is black box/white box testing?

Black-box and white-box are test design methods. Black-box test design treats the system as a "black-box", so it
doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on
testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-
box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal
knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box
and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and
"structural". Behavioral test design is slightly different from black-box test design because the use of internal
knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test
design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a
particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about
boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard
to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can
use any test design methods. Unit testing is usually associated with structural test design, but this is because
testers usually don't have well-defined requirements at the unit level to validate.
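To make the contrast concrete, here is a small sketch. The `shipping_cost` function and its pricing rules are invented for illustration; the point is how the two design methods pick test data differently:

```python
# Hypothetical unit under test: shipping is $5, free at $100 or more,
# with an $8 surcharge on orders under $10.
def shipping_cost(order_total):
    if order_total >= 100:       # branch 1: free-shipping threshold
        return 0.0
    if order_total < 10:         # branch 2: small-order surcharge
        return 8.0
    return 5.0                   # branch 3: normal case

# Black-box test: derived purely from the stated requirement
# ("orders of $100 or more ship free"), with no look at the code.
assert shipping_cost(150) == 0.0

# White-box tests: after reading the code we see three branches, so we
# choose inputs that exercise each one, including the boundary values
# the structure exposes (100 and 10).
assert shipping_cost(100) == 0.0    # boundary of branch 1
assert shipping_cost(9.99) == 8.0   # branch 2
assert shipping_cost(10) == 5.0     # boundary into branch 3
print("all branch tests passed")
```

The black-box test would stay valid if the implementation were rewritten; the white-box tests would need revisiting, since they were chosen from the code's structure.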

2. What are unit, component and integration testing?

Note that the definitions of unit, component, integration, and integration testing are recursive:

Unit: The smallest compilable component. A unit typically is the work of one programmer (at least in principle).
As defined, it does not include any called sub-components (for procedural languages) or communicating
components in general.

Unit testing: In unit testing, called components (or communicating components) are replaced with stubs,
simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The
unit is tested in isolation.
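The stub/driver idea can be sketched as follows. The `InvoiceService` unit and its stubbed tax lookup are hypothetical names chosen for this example:

```python
# Called component in production: a tax-rate lookup that might hit a
# remote service. For unit testing it is replaced with a stub.
class StubTaxLookup:
    def rate_for(self, region):
        # Stub: returns a fixed, known value instead of doing real work.
        return 0.10

# The unit under test, written so its dependency can be injected.
class InvoiceService:
    def __init__(self, tax_lookup):
        self.tax_lookup = tax_lookup

    def total(self, net_amount, region):
        return net_amount * (1 + self.tax_lookup.rate_for(region))

# The "driver" is the calling test code that exercises the unit in isolation.
def test_total_applies_tax():
    service = InvoiceService(StubTaxLookup())
    assert abs(service.total(100.0, "anywhere") - 110.0) < 1e-9

test_total_applies_tax()
print("unit test passed in isolation")
```

Component testing, as defined below, would then repeat the same test with the real tax lookup substituted for the stub.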

Component: A unit is a component. The integration of one or more components is also a component.

Note: The reason for "one or more", as contrasted to "two or more", is to allow for components that call themselves
recursively.

Component testing: The same as unit testing, except that all stubs and simulators are replaced with the real thing.

Two components (actually one or more) are said to be integrated when:

a. They have been compiled, linked, and loaded together.


b. They have successfully passed the integration tests at the interface between them.

Thus, components A and B are integrated to create a new, larger component (A,B). Note that this does not conflict
with the idea of incremental integration - it just means that A is a big component and B, the component added, is a
small one.

Integration testing: Carrying out integration tests.

Integration tests (after Leung and White) are defined here for procedural languages; this is easily generalized for
OO languages by using the equivalent constructs for message passing. In the above, the word "call" is to be
understood in the most general sense of a data flow and is not restricted to just formal subroutine calls and returns.

3. What's the difference between load and stress testing?

Testing
One of the most common but unfortunate misuses of terminology is treating "load testing" and "stress testing" as
synonymous. The consequence of this semantic abuse is usually that the system is neither properly "load
tested" nor subjected to a meaningful stress test.

Stress testing is subjecting a system to an unreasonable load while denying it the resources
(e.g., RAM, disc, mips, interrupts, etc.) needed to process that load. The idea is to stress a
system to the breaking point in order to find bugs that will make that break potentially harmful.
The system is not expected to process the overload without adequate resources, but to behave
(e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes
discovered under stress testing may or may not be repaired depending on the application, the
failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is
often deliberately distorted so as to force the system into resource depletion.

Load testing is subjecting a system to a statistically representative (usually) load. The two main
reasons for using such loads are in support of software reliability testing and in performance
testing. The term "load testing" by itself is too vague and imprecise to warrant use. For example,
do you mean "representative load," "overload," "high load," etc.? In performance testing, load is
varied from a minimum (zero) to the maximum level the system can sustain without running out
of resources or having transactions suffer (application-specific) excessive delay.

A third use of the term is as a test whose objective is to determine the maximum sustainable
load the system can handle. In this usage, "load testing" is merely testing at the highest
transaction arrival rate in performance testing.
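The performance-testing usage - varying load from a minimum to the maximum the system can sustain and watching for excessive delay - can be sketched in miniature. The `handle_request` function and the 50 ms threshold are invented stand-ins; a real load test would drive a real system with a representative transaction mix:

```python
import time

# Hypothetical transaction whose cost grows with load, standing in for
# a system that slows down as concurrent demand rises.
def handle_request(queue_depth):
    time.sleep(0.001 * queue_depth)  # simulated per-request delay

# Vary load upward and record response time at each level, stopping
# when transactions suffer "excessive" delay (here defined as > 50 ms).
EXCESSIVE_DELAY = 0.050
for load in range(0, 101, 20):          # simulated concurrent requests
    start = time.perf_counter()
    handle_request(load)
    elapsed = time.perf_counter() - start
    print(f"load={load:3d} response={elapsed * 1000:6.1f} ms")
    if elapsed > EXCESSIVE_DELAY:
        print(f"maximum sustainable load is below {load}")
        break
```

A stress test, by contrast, would deliberately distort the load and starve the system of resources rather than sweep it over a representative range.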

4. What's the difference between QA and testing?

QA is more a preventive thing, ensuring quality in the company and therefore the product, rather
than just testing the product for software bugs.

TESTING means 'quality control'.

QUALITY CONTROL measures the quality of a product.
QUALITY ASSURANCE measures the quality of the processes used to create a
quality product.

6. What is Software Quality Assurance?

Software QA involves the entire software development PROCESS - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'.

7. What is Software Testing?

Testing involves operation of a system or application under controlled conditions and evaluating
the results (e.g., 'if the user is in interface A of the application while using hardware B, and does
C, then D should happen'). The controlled conditions should include both normal and abnormal
conditions. Testing should intentionally attempt to make things go wrong to determine if things
happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes
they're the combined responsibility of one group or individual. Also common are project teams
that include a mix of testers and developers who work closely together, with overall QA
processes monitored by project managers. It will depend on what best fits an organization's size
and business structure.

8. What are some recent major computer system failures caused by software bugs?

One reported failure involved aftereffects of the Y2K bug: a train operator found that many of its
newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were
started by altering the control system's date settings.

In another reported case involving a large mortgage lender, the vendor had reportedly delivered
an online mortgage processing system that did not meet specifications, was delivered late, and
didn't work.

9. Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.


10. Why does Software have bugs?

Software has bugs for many reasons, including:

miscommunication or no communication - as to specifics of what an application should
or shouldn't do (the application's requirements).

software complexity - the complexity of current software applications can be difficult to
comprehend for anyone without experience in modern-day software development.
Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases, and the sheer size of applications have
all contributed to the exponential growth in software/system complexity. And the use of
object-oriented techniques can complicate instead of simplify a project unless it is
well-engineered.

changing requirements - the customer may not understand the effects of changes, or
may understand and request them anyway - redesign, rescheduling of engineers,
effects on other projects, work already completed that may have to be redone or
thrown out, hardware requirements that may be affected, etc. If there are many minor
changes or any major changes, known and unknown dependencies among parts of
the project are likely to interact and cause problems, and the complexity of keeping
track of changes may result in errors. Enthusiasm of the engineering staff may be
affected. In some fast-changing business environments, continuously modified
requirements may be a fact of life. In this case, management must understand the
resulting risks, and QA and test engineers must adapt and plan for continuous
extensive testing to keep the inevitable bugs from running out of control.

time pressures - scheduling of software projects is difficult at best, often requiring a lot
of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

egos - people prefer to say things like:

'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'

instead of:

'that adds a lot of complexity and we could end up
making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I
take a close look at it'
'we can't figure out what that old spaghetti code
did in the first place'

If there are too many unrealistic 'no problem's', the result is bugs.

poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable code. In
fact, it's usually the opposite: they get points mostly for quickly turning out code, and
there's job security if nobody else can understand it ('if it was hard to write, it should be
hard to read').

software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.

11. How can new Software QA processes be introduced in an existing organization?

It depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary. Where the risk is lower,
management and organizational buy-in and QA implementation may be a slower,
step-at-a-time process. QA processes should be balanced with
productivity so as to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.

In all cases, requirements management should be emphasized, with
a goal of clear, complete, testable requirement specifications or expectations.

12. What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists, walkthroughs,
and inspection meetings. Validation typically involves actual testing and takes place after
verifications are completed. The term 'IV&V' refers to Independent Verification and Validation.

13. What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.

14. What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything. Attendees should prepare for this type of meeting by
reading through the document; most problems will be found during this preparation. The result of
the inspection meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees
who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often
hard for management to get serious about quality assurance?'. Their skill may have low visibility,
but they are extremely valuable to any software development organization, since bug prevention
is far more cost-effective than bug detection.

15. What kinds of testing should be considered?

Black-box testing - not based on any knowledge of internal design or code; tests are
based on requirements and functionality.

White-box testing - based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing - the most 'micro' scale of testing; tests particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code. Not always easily done unless the application has
a well-designed architecture with tight code; may require developing test driver modules
or test harnesses.

Incremental integration testing - continuous testing of an application as new functionality
is added; requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test
drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client
and server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.

Functional testing - black-box type testing geared to the functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing).

System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much retesting is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.

Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades
or fails.

Stress testing - a term often used interchangeably with 'load' and 'performance' testing. Also
used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.

Performance testing - a term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc.; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.

User acceptance testing - determining if software is satisfactory to an end-user or
customer.

Comparison testing - comparing software weaknesses and
strengths to competing
products.

Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users
or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.

Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
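Mutation testing can be sketched in miniature. Here the mutant is introduced by hand to show the idea; real mutation tools generate mutants automatically:

```python
# Original unit and a hand-made "mutant" with a deliberately seeded bug
# (>= changed to >), mimicking what a mutation-testing tool generates.
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18          # seeded bug: boundary condition changed

# The existing test data; mutation testing asks whether this data can
# tell the original apart from the mutant.
test_inputs = [5, 18, 40]

def kills_mutant(original, mutant, inputs):
    # The mutant is "killed" if any test input produces a different result.
    return any(original(x) != mutant(x) for x in inputs)

# Input 18 distinguishes the two, so the test data kills this mutant -
# evidence that the test set actually checks the boundary.
print(kills_mutant(is_adult, is_adult_mutant, test_inputs))  # True
```

Had the test data been only [5, 40], the mutant would survive, flagging that the boundary at 18 is untested.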

16. What are 5 common problems in the software development process?


poor requirements - if requirements are unclear, incomplete, too general, or not
testable, there will be problems.

unrealistic schedule - if too much work is crammed into too little time, problems are
inevitable.

inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.

featuritis - requests to pile on new features after development is underway; extremely
common.

miscommunication - if developers don't know what's needed, or customers have
erroneous expectations, problems are guaranteed.

17. What are 5 common solutions to software development problems?

solid requirements - clear, complete, detailed, cohesive, attainable, testable
requirements that are agreed to by all players. Use prototypes to help nail down
requirements.

realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the
project without burning out.

adequate testing - start testing early on, re-test after fixes or changes, and plan for
adequate time for testing and bug-fixing.

stick to initial requirements as much as possible - be prepared to defend against
changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in
related schedule changes. If possible, use rapid prototyping during the design phase
so that customers can see what to expect. This will provide them a higher comfort
level with their requirements decisions and minimize changes later on.

communication - make extensive use of group communication tools - e-mail, groupware,
networked bug-tracking tools and change management tools, intranet capabilities, etc.;
ensure that documentation is available and up-to-date - preferably electronic, not paper;
promote teamwork and cooperation; use prototypes early on so that customers' expectations
are clarified.

18. What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the scheme
of things. A wide-angle view of the 'customers' of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer management, the
development organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will
have their own slant on 'quality' - the accounting department might define quality in terms of
profits while an end-user might define quality as user-friendly and bug-free.

19. What is 'good code'?


'Good code' is code that works, is bug-free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There are
also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind
that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews',
'buddy checks', code analysis tools, etc. can be used to check for problems and enforce
standards. For C and C++ coding there are many typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation.
20. What is 'good design'?

'Design' could refer to many things, but often refers to 'functional design' or 'internal design'.
Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and
status-logging capability; and works correctly when implemented. Good functional design is
indicated by an application whose functionality can be traced back to customer and end-user
requirements. For programs that have a user interface, it's often a good idea to assume that the
end user will have little computer knowledge and may not read a user manual or even the on-
line help, and to design the interface accordingly.

21. What is SEI? CMM? ISO? IEEE? ANSI? Will it help?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the
U.S. Defense Department to help improve software development processes.

CMM = 'Capability Maturity Model', developed by the SEI. It's a model of five levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization, and
if reasonably applied can be helpful. Organizations can receive CMM ratings by
undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes are in place;
successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be
repeated.

Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to
oversee software processes, and training programs are used to ensure understanding
and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations
were assessed. Of those, 27% were rated at Level 1, 39% at 2,
23% at 3, 6% at 4, and 5% at 5. (For ratings during the period
1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were
U.S. federal contractors or agencies. For those rated at
Level 1, the most problematical key process area was in
Software Quality Assurance.

ISO = 'International Organisation for Standardization' - the ISO 9001:2000 standard
(which replaces the previous standard of 1994) concerns quality systems that are
assessed by outside auditors, and it applies to many kinds of production and
manufacturing organizations, not just software. It covers documentation, design,
development, production, testing, installation, servicing, and other processes. The full set
of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements;
(b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary;
(c) Q9004-2000 - Quality Management Systems: Guidelines for Performance
Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization,
and certification is typically good for about 3 years, after which a complete reassessment
is required. Note that ISO certification does not necessarily indicate quality products - it
indicates only that documented processes are followed.

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI
Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008),
'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and
others.

ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and
ASQ (American Society for Quality).

Other software development process assessment methods besides CMM and ISO 9000
include SPICE, Trillium, TickIT, and Bootstrap.

22. What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use.
It includes aspects such as initial concept, requirements analysis, functional design, internal
design, documentation planning, test planning, coding, document preparation, integration,
testing, maintenance, updates, retesting, phase-out, and other aspects.

23. Will automated testing tools make testing easier?

Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects, they can be valuable.

A common type of automated tool is the 'record/playback' type. For example, a tester
could click through all combinations of menu choices, dialog box choices, buttons, etc.
in an application GUI and have them 'recorded' and the results logged by a tool. The
'recording' is typically in the form of text based on a scripting language that is
interpretable by the testing tool. If new buttons are added, or some underlying code in
the application is changed, etc., the application can then be retested by just 'playing
back' the 'recorded' actions and comparing the logged results to check the effects of the
changes. The problem with such tools is that if there are continual changes to the
system being tested, the 'recordings' may have to be changed so much that it
becomes very time-consuming to continuously update the scripts. Additionally,
interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that
there are record/playback tools for text-based interfaces also, and for all types of
platforms.
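The playback-and-compare step such tools perform amounts to re-running recorded actions and diffing the logged results against a stored baseline. A minimal sketch of that comparison, with invented action names and log format:

```python
# A recorded script is just a sequence of actions; "playback" replays
# them against the application and logs the observed results.
recorded_actions = ["open_dialog", "click_ok", "close_dialog"]

# Baseline log captured when the recording was first made (invented format).
baseline_log = ["dialog shown", "ok pressed", "dialog closed"]

def playback(actions):
    # Stand-in for driving the real application; here each action just
    # produces a deterministic log line.
    responses = {
        "open_dialog": "dialog shown",
        "click_ok": "ok pressed",
        "close_dialog": "dialog closed",
    }
    return [responses[a] for a in actions]

# Compare the new run's log to the baseline; a mismatch indicates either
# a regression or a legitimate change requiring a script update.
new_log = playback(recorded_actions)
mismatches = [(i, b, n)
              for i, (b, n) in enumerate(zip(baseline_log, new_log))
              if b != n]
print("PASS" if not mismatches else f"FAIL: {mismatches}")
```

This also illustrates the maintenance problem described above: any intended change to the application's behavior invalidates the baseline, forcing a script or baseline update.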

code analyzers - monitor code complexity, adherence to standards, etc.

coverage analyzers - these tools check which parts of the code have been exercised
by a test, and may be oriented to code statement coverage, condition coverage,
path coverage, etc.

memory analyzers - such as bounds-checkers and leak detectors.

load/performance test tools - for testing client/server and web applications under
various load levels.

web test tools - to check that links are valid, HTML code usage is correct,
client-side and server-side programs work, and a web site's interactions are secure.

other tools - for test case management, documentation management, bug reporting,
and configuration management.
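The coverage-analyzer idea above can be sketched with Python's built-in tracing hook; real coverage tools are far more capable, and the `classify` function here is an invented example:

```python
import sys

# Unit under test with two branches.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed_lines = set()

def tracer(frame, event, arg):
    # Record every line executed inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno)
    return tracer

# Run one test under the tracer; it exercises only the non-negative path.
sys.settrace(tracer)
assert classify(5) == "non-negative"
sys.settrace(None)

# Only two of classify's three executable lines ran - the "negative"
# branch was never exercised, so statement coverage is incomplete.
print(f"{len(executed_lines)} lines of classify() executed")
```

A coverage report built this way immediately points at the untested branch, which is exactly the feedback such analyzers provide.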

24. What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, and an ability to communicate with both
technical (developers) and non-technical (customers, management) people is useful. Previous
software development experience can be helpful as it provides a deeper understanding of the
software development process, gives the tester an appreciation for the developers' point of view,
and reduces the learning curve in automated test tool programming. Judgment skills are needed
to assess high-risk areas of an application on which to focus testing efforts when time is limited.

25. What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be
able to understand the entire software development process and how it can fit into the business
approach and goals of the organization. Communication skills and the ability to understand
various sides of issues are important. In organizations in the early stages of implementing QA
processes, patience and diplomacy are especially needed. An ability to find problems as well as
to see 'what's missing' is important for inspections and reviews.

26. What makes a good QA or Test manager?

A good QA, test, or QA/Test (combined) manager should:

be familiar with the software development process


be able to maintain enthusiasm of their team and promote a positive atmosphere,
despite what is a somewhat 'negative' process (e.g., looking for or preventing
problems)
be able to promote teamwork to increase productivity
be able to promote cooperation between software, test, and QA engineers
have the diplomatic skills needed to promote improvements in QA processes
have the ability to withstand pressures and say 'no' to other managers when quality is
insufficient or QA processes are not being adhered to
have people judgment skills for hiring and keeping skilled personnel
be able to communicate with technical and non-technical people, engineers,
managers, and customers.

27. What's the role of documentation in QA?

Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should
be documented such that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports, user
manuals, etc. should all be documented. There should ideally be a system for easily finding and
obtaining documents and determining what documentation will have a particular piece of
information. Change management for documentation should be used if possible.

28. What's the big deal about 'requirements'?

One of the most reliable methods of ensuring problems, or failure, in a complex software project
is to have poorly documented requirements specifications. Requirements are the details
describing an application's externally perceived functionality and properties. Requirements
should be clear, complete, reasonably detailed, cohesive, attainable, and testable.

In some organizations requirements may end up in high-level project plans, functional
specification documents, design documents, or other documents at various levels of detail.
No matter what they are called, some type of documentation with detailed requirements will be
needed by testers in order to properly plan and execute tests. Without such documentation,
there will be no clear-cut way to determine if a software application is performing correctly.

29. What steps are needed to develop and run software tests?

The following are some of the steps to consider:

obtain requirements, functional design, and internal design specifications and other
necessary documents

determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)

maintain and update test plans, test cases, the test environment, and testware through the life
cycle

30. What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it. The following are some of the items that might be included in a test plan,
depending on the particular project:

relevant related document list, such as requirements, design documents, other test
plans, etc.

test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable

test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems

test environment validity analysis - differences between the test and production systems
and their impact on test validity

outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs

discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs

outside test organizations to be utilized and their purpose, responsibilities, deliverables,
contact persons, and coordination issues

31. What's a 'test case'?

A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.

Note that the process of developing test cases can help find problems in the requirements
or design of an application, since it requires completely thinking through the operation of
the application. For this reason, it's useful to prepare test cases early in the development
cycle if possible.
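
The particulars listed above map naturally onto a structured record. The following is a minimal sketch in Python; the field names and the login scenario are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case record; fields mirror the particulars listed above."""
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str

# Hypothetical example for a login feature
tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Reject empty password",
    objective="Verify login fails when the password field is blank",
    setup="Application running; user 'alice' exists",
    input_data={"username": "alice", "password": ""},
    steps=["Open login page", "Enter username only", "Click Submit"],
    expected_result="Error message 'Password required'; user not logged in",
)
print(tc.identifier, "-", tc.name)
```

Keeping cases in a structured form like this makes them easy to review, count, and later feed into an automated test runner.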

32. What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem
is resolved, fixes should be re-tested, and determinations made regarding requirements for
regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking
system is in place, it should encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available. The following are items to consider in the
tracking process:

• severity, and reproduce it if necessary.

• developer doesn't have easy access to the test case/test script/test tool

• helpful in finding the cause of the problem

A reporting or tracking process should enable notification of appropriate personnel at various
stages. For instance, testers need to know when retesting is needed, developers need to know
when bugs are found and how to get the needed information, and reporting/summary capabilities
are needed for managers.
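
The lifecycle described above (report, assign, fix, retest, close or reopen) can be sketched as a small state machine. This is an illustrative model only; the states and field names are assumptions, not any particular tool's schema:

```python
from enum import Enum

class Status(Enum):
    REPORTED = 1
    ASSIGNED = 2
    FIXED = 3
    RETESTED = 4
    CLOSED = 5

class BugReport:
    """Tiny sketch of a problem-tracking record."""
    # Legal lifecycle transitions: fixes must be retested before closing,
    # so regressions are caught rather than silently shipped.
    TRANSITIONS = {
        Status.REPORTED: {Status.ASSIGNED},
        Status.ASSIGNED: {Status.FIXED},
        Status.FIXED:    {Status.RETESTED},
        Status.RETESTED: {Status.CLOSED, Status.ASSIGNED},  # reopen if retest fails
        Status.CLOSED:   set(),
    }

    def __init__(self, bug_id, summary, severity):
        self.bug_id, self.summary, self.severity = bug_id, summary, severity
        self.status = Status.REPORTED
        self.history = [Status.REPORTED]

    def move_to(self, new_status):
        if new_status not in self.TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

bug = BugReport("BUG-42", "Crash on empty input", severity="high")
for s in (Status.ASSIGNED, Status.FIXED, Status.RETESTED, Status.CLOSED):
    bug.move_to(s)
print(bug.status)  # Status.CLOSED
```

Encoding the allowed transitions explicitly is what lets a tracking system notify the right people at each stage, since every status change passes through one choke point.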

33. What is 'configuration management'?

Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.

34. What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever
bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this
type of problem can severely affect schedules, and indicates deeper problems in the software
development process (such as insufficient unit testing or insufficient integration testing, poor
design, improper build or release procedures, etc.), managers should be notified and provided
with some documentation as evidence of the problem.

35. How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in
such an interdependent environment, that complete testing can never be done. Common factors
in deciding when to stop are:

36. What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.


Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgement skills, common
sense, and experience. (If warranted, formal methods are also available.) Considerations can
include:

• Which functionality is most important to the project's intended purpose?


• Which functionality is most visible to the user?
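
One informal way to apply such considerations is to score each feature and sort by risk. The feature names, factors, and weighting below are hypothetical; the point is only that a crude, explicit score helps the team agree on where limited test time goes:

```python
# Hypothetical feature list; 'impact' and 'visibility' scored 1-5 by the team.
features = [
    {"name": "checkout payment", "impact": 5, "visibility": 4, "change_freq": 3},
    {"name": "profile avatar",   "impact": 1, "visibility": 3, "change_freq": 1},
    {"name": "search",           "impact": 4, "visibility": 5, "change_freq": 4},
]

def risk_score(f):
    # High-impact, high-visibility, frequently changed areas bubble to the
    # top of the test-focus list.
    return f["impact"] * f["visibility"] + f["change_freq"]

for f in sorted(features, key=risk_score, reverse=True):
    print(f"{risk_score(f):3d}  {f['name']}")
```

Even a rough ranking like this is better than spreading test effort evenly, because the cost of a missed bug is rarely uniform across features.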

37. What can be done if requirements are changing continuously?

A common problem and a major headache.

• change so that alternate test plans and strategies can be worked out in advance, if possible.

• changes do not require redoing the application from scratch.

• developers.

• requirements and minimize changes.

• possibility of changes.

38. What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed and the same considerations as described
previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then
do ad hoc testing, or write up a limited test plan based on the risk analysis.

39. What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden
functionality, and it would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as
a result of the unexpected functionality. If the functionality only affects areas such as minor
improvements in the user interface, for example, it may not be a significant risk.

40. How can Software QA processes be implemented without stifling productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on
processes, and adjusting and experimenting as an organization grows and matures, productivity
will be improved instead of stifled. Problem prevention will lessen the need for problem
detection, panics and burn-out will decrease, and there will be improved focus and less wasted
effort. At the same time, attempts should be made to keep processes simple and efficient,
minimize paperwork, promote computer-based processes and automated tracking and reporting,
minimize time required in meetings, and promote training as part of the QA process. However,
no one - especially talented technical types - likes rules or bureaucracy, and in the short run
things may slow down a bit. A typical scenario would be that more days of planning and
development will be needed, but less time will be required for late-night bug-fixing and calming
of irate customers.

41. What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is
no easy solution in this situation, other than:

• customer

42. How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive.
When time is limited (as it usually is) the focus should be on integration and system testing.
Additionally, load/stress/performance testing may be useful in determining client/server
application limitations and capabilities. There are commercial tools to assist with such testing.

43. How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between HTML pages, TCP/IP communications,
Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript,
plug-in applications), and applications that run on the server side (such as CGI scripts, database
interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a
wide variety of servers and browsers, various versions of each, small but sometimes significant
differences between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can become a major
ongoing effort. Other considerations might include:
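
Parts of this ongoing effort can be automated with small scripts. One common check is scanning a page for links and embedded resources so each can then be fetched and verified. A minimal sketch using only the Python standard library (the sample page is illustrative; a real run would fetch pages with urllib.request):

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collects href/src attributes so each target can later be fetched and verified."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for attr, value in attrs:
            if attr in ("href", "src"):
                self.links.append((tag, value))

# Illustrative page content; in practice this would come from a live fetch.
page = """<html><body>
<a href="/home">Home</a>
<img src="/logo.png">
<a href="">broken</a>
</body></html>"""

checker = LinkChecker()
checker.feed(page)
broken = [(tag, value) for tag, value in checker.links if not value]
print("links found:", len(checker.links), "empty:", len(broken))
```

Run across a whole site on each release, a checker like this catches dead links and missing resources far faster than manual clicking, leaving human testers free for the browser- and version-specific issues noted above.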

44. How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal design
to functional design to requirements. While there will be little effect on black-box testing (where
an understanding of the internal design of the application is unnecessary), white-box testing can
be oriented to the application's objects. If the application was well-designed this can simplify test
design.
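
"Oriented to the application's objects" can mean writing tests against each class, exercising its methods and known internal branches. A minimal sketch (the Account class and its guard condition are invented for illustration):

```python
import unittest

class Account:
    """Example application object; the test below is oriented to this class."""
    def __init__(self):
        self._balance = 0              # internal state

    def deposit(self, amount):
        if amount <= 0:                # guard branch a white-box test targets
            raise ValueError("amount must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

class AccountWhiteBoxTest(unittest.TestCase):
    # White-box: the tests know the class's internals, so they check the
    # private balance field and deliberately hit the amount <= 0 branch.
    def test_deposit_updates_internal_balance(self):
        acct = Account()
        acct.deposit(50)
        self.assertEqual(acct._balance, 50)

    def test_guard_branch_rejects_nonpositive(self):
        acct = Account()
        with self.assertRaises(ValueError):
            acct.deposit(0)

result = unittest.main(exit=False, argv=["wb"], verbosity=0).result
print("white-box tests passed:", result.wasSuccessful())
```

Because the test mirrors one class, a well-factored object design yields one small, focused test per object, which is the simplification the answer above refers to.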

45. What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone
projects with unstable requirements. It was created by Kent Beck, who described the approach in
his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of
Extreme Programming. Programmers are expected to write unit and functional test code first -
before the application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help develop
scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are
modified and rerun for each of the frequent development iterations. QA and test personnel are
also required to be an integral part of the project team. Detailed requirements documentation is
not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.
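
The test-first practice described above looks like this in miniature: the test states the required behavior before any implementation exists, then the simplest code that passes is written. The discount rule here is a hypothetical example, not from any real project:

```python
import unittest

# Step 1 (written first): the test encodes the requirement -
# orders over 100 get a 10% discount, others pay full price.
class DiscountTest(unittest.TestCase):
    def test_ten_percent_over_100(self):
        self.assertEqual(price_with_discount(200), 180.0)

    def test_no_discount_at_or_below_100(self):
        self.assertEqual(price_with_discount(100), 100.0)

# Step 2: the simplest implementation that makes the test pass.
# (The name is resolved at call time, so defining it after the test is fine.)
def price_with_discount(total):
    return total * 0.9 if total > 100 else float(total)

result = unittest.main(exit=False, argv=["xp"], verbosity=0).result
print("tests passed:", result.wasSuccessful())
```

With the test under source control alongside the code, every later iteration reruns it automatically, which is what makes XP's frequent re-scheduling and re-prioritizing tolerable.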

46. Common Software Errors

Introduction

This document takes you through a whirlwind tour of common software errors. It is an excellent
aid for software testing. It helps you to identify errors systematically, increases the efficiency
of software testing, and improves testing productivity. For more information, please refer to
Testing Computer Software (Wiley).

Type of Errors

• User Interface Errors

• Error Handling

• Boundary-related Errors

• Calculation Errors

• Initial and Later States

• Control Flow Errors

• Errors in Handling or Interpreting Data

• Race Conditions

• Load Conditions

• Hardware

• Source, Version and ID Control

• Testing Errors
