1. a. Explain the significance of the Psychology of Testing and the Economics of Testing. 8 Marks
b. List the Software Testing Principles and explain any two. 8 Marks
2. a. Explain the Error Checklist for Inspections with the different types of errors. 8 Marks
b. Write short notes on the following: 8 Marks
1. Code inspections using checklists
2. Group walkthroughs.
3. Explain (a) White Box Testing (b) Equivalence Partitioning 8+8 Marks
4. a) Explain Module Testing and Test Case Design. 8 Marks
b) Compare Top-down Testing and Bottom-up Testing. 8 Marks
5. a) Write short notes on (a) Functional Testing and (b) System Testing. 8 Marks
b) What is debugging? Explain the Debugging Principles. 8 Marks
6. a) Explain the inductive debugging process with a neat diagram. 8 Marks
b) Explain the deductive debugging process with a neat diagram. 8 Marks
7. a) Define Extreme Programming and list the practices of Extreme Programming. 8 Marks
b) Explain (a) Extreme Unit Testing and (b) Acceptance Testing. 8 Marks
8. a) Explain the challenges associated with testing Internet-based applications. 8 Marks
b) Write short notes on (1) Business Layer Testing and (2) Performance Testing. 8 Marks
Scheme of evaluation
Q1. a. Explain the significance of the Psychology of Testing and the Economics of Testing
Answers: Psychology of Testing
One of the primary causes of poor program testing is the fact that most programmers begin with
a false definition of the term. They might say:
• “Testing is the process of demonstrating that errors are not present.” or
• “The purpose of testing is to show that a program performs its intended functions correctly.” or
• “Testing is the process of establishing confidence that a program does what it is supposed to do.”
These definitions are upside-down. When you test a program, you want to add some value to it.
Adding value through testing means raising the quality or reliability of the program. Raising the
reliability of the program means finding and removing errors. Therefore, don’t test a program to
show that it works; rather, you should start with the assumption that the program contains errors
(a valid assumption for almost any program) and then test the program
to find as many of the errors as possible.
Thus, a more appropriate definition is this:
Testing is the process of executing a program
with the intent of finding errors.
Although this may sound like a game of subtle semantics, it’s really an important distinction.
Understanding the true definition of software testing can make a profound difference in the
success of your efforts.
Economics of Testing
Given this definition of program testing, an appropriate next step is the determination of whether
it is possible to test a program to find all of its errors. We will show you that the answer is
negative, even for trivial programs. In general, it is impractical, often impossible, to find
all the errors in a program. This fundamental problem will, in turn, have implications for the
economics of testing, assumptions that the tester will have to make about the program, and the
manner in which test cases are designed.
To combat the challenges associated with testing economics, you should establish some
strategies before beginning. Two of the most prevalent strategies are black-box testing and white-box testing.
Black-Box Testing
One important testing strategy is black-box, data-driven, or input/output-driven testing. To use this method, view the program as a black box. Your goal is to be completely unconcerned about the internal behavior and structure of the program. Instead, concentrate on finding circumstances in which the program does not behave according to its specifications.
White-Box Testing
Another testing strategy, white-box or logic-driven testing, permits you to examine the internal
structure of the program. This strategy derives test data from an examination of the program’s
logic (and often, unfortunately, at the neglect of the specification).
Q1. b. List the Software Testing Principles and explain any two
Principle 1: A necessary part of a test case is a definition of the expected output or result.
Principle 2: A programmer should avoid attempting to test his or her own program.
Principle 3: A programming organization should not test its own programs.
Principle 4: Thoroughly inspect the results of each test.
Principle 5: Test cases must be written for input conditions that are invalid and unexpected, as
well as for those that are valid and expected.
Principle 6: Examining a program to see if it does not do what it is supposed to do is only half
the battle; the other half is seeing whether the program does what it is not supposed to do.
Principle 7: Avoid throwaway test cases unless the program is truly a throwaway program.
Principle 8: Do not plan a testing effort under the tacit assumption that no errors will be found.
Principle 9: The probability of the existence of more errors in a section of a program is
proportional to the number of errors already found in that section.
Principle 10: Testing is an extremely creative and intellectually challenging task.
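Principles 1 and 5 can be sketched with a small, hypothetical example. The isLeapYear() routine and the test values below are illustrative assumptions, not from the text: each test case records its expected output in advance (Principle 1), and invalid, unexpected input is exercised alongside valid input (Principle 5).

```java
// Hypothetical routine used to illustrate Principles 1 and 5.
public class PrincipleDemo {

    public static boolean isLeapYear(int year) {
        if (year < 1) {
            // Invalid input is rejected rather than silently accepted.
            throw new IllegalArgumentException("year must be positive");
        }
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        // Principle 1: expected outputs are defined before the test runs.
        System.out.println(isLeapYear(2000));  // expected: true
        System.out.println(isLeapYear(1900));  // expected: false
        // Principle 5: an invalid, unexpected input; the expected
        // result is a rejection, which is also checked.
        try {
            isLeapYear(-4);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```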
Q2. a. Explain the Error Checklist for Inspections with the different types of errors
Data Reference Errors
Data-Declaration Errors
Computation Errors
Comparison Errors
Control-Flow Errors
Interface Errors
Input/Output Errors
b. Write short notes on the following
1. Code inspections using checklists
A code inspection is a set of procedures and error-detection techniques for group code reading.
Most discussions of code inspections focus on the procedures, forms to be filled out, and so on;
here, after a short summary of the general procedure, we will focus on the actual error-detection techniques. An inspection team usually consists of four people. One of the four people plays the
role of moderator. The moderator is expected to be a competent programmer, but he or she is not
the author of the program and need not be acquainted with the details of the program.
The duties of the moderator include
• Distributing materials for, and scheduling, the inspection session
• Leading the session
• Recording all errors found
• Ensuring that the errors are subsequently corrected
The moderator is like a quality-control engineer. The second team member is the programmer.
The remaining team members usually are the program’s designer (if different from the
programmer) and a test specialist.
An Error Checklist for Inspections
Data Reference Errors
Data-Declaration Errors
Computation Errors
Comparison Errors
Control-Flow Errors
Interface Errors
Input/Output Errors
2. Group walkthroughs.
The code walkthrough, like the inspection, is a set of procedures and error-detection techniques
for group code reading. It shares much in common with the inspection process, but the
procedures are slightly different, and a different error-detection technique is employed. Like the
inspection, the walkthrough is an uninterrupted meeting of one to two hours in duration. The
walkthrough team consists of three to five people. One of these people plays a role similar to that
of the moderator in the inspection process, another person plays the role of a secretary (a person
who records all errors found), and a third person plays the role of a tester. Suggestions as to who
the three to five people should be vary. Of course, the programmer is one of those
people. Suggestions for the other participants include (1) a highly
experienced programmer, (2) a programming-language expert, (3) a new programmer (to give a
fresh, unbiased outlook), (4) the person who will eventually maintain the program, (5) someone
from a different project, and (6) someone from the same programming team as the
programmer.
Q3. Explain (a) White Box Testing (b) Equivalence Partitioning
Answers: (a) White Box Testing
White-box testing is concerned with the degree to which test cases exercise or cover the logic
(source code) of the program. If you back completely away from path testing, it may seem that a
worthy goal would be to execute every statement in the program at
least once. Unfortunately, this is a weak criterion for a reasonable
white-box test. This concept is illustrated in Figure 4.1. Assume that
Figure 4.1 represents a small program to be tested. The equivalent
Java code snippet follows:
public void foo(int a, int b, int x) {
    if (a > 1 && b == 0) {
        x = x / a;
    }
    if (a == 2 || x > 1) {
        x = x + 1;
    }
}
You could execute every statement by writing a single test case that
traverses path ace. That is, by setting A=2, B=0, and X=3 at point a, every
statement would be executed once (actually, X could be assigned any
value).
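The single-test-case point can be sketched as runnable Java. The method is adapted from the snippet above; giving it a return value is an assumption added here so the result of the traversal can be observed from a driver:

```java
// Statement-coverage sketch. foo() is adapted from the answer above;
// returning x is an added assumption so the effect is observable.
public class StatementCoverageDemo {

    public static int foo(int a, int b, int x) {
        if (a > 1 && b == 0) {
            x = x / a;   // executed on path ace
        }
        if (a == 2 || x > 1) {
            x = x + 1;   // executed on path ace
        }
        return x;
    }

    public static void main(String[] args) {
        // A = 2, B = 0, X = 3 drives path ace: both if-bodies execute,
        // so this single test case executes every statement.
        System.out.println(foo(2, 0, 3));  // 3/2 = 1, then 1+1 = 2
    }
}
```

Note that this one call covers every statement yet says nothing about, for example, the behavior when a > 1 and b != 0, which is exactly why executing every statement once is a weak criterion.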
(b) Equivalence Partitioning
A good test case is one that has a reasonable probability of finding an error; moreover, an exhaustive-input test of a program is impossible. Hence, in testing a program, you are
limited to trying a small subset of all possible inputs. Of course, then, you want to select the right
subset, the subset with the highest probability of finding the most errors. One way of locating
this subset is to realize that a well-selected test case also should have two other properties:
1. It reduces, by more than a count of one, the number of other test cases that must be developed to achieve some predefined goal of “reasonable” testing.
2. It covers a large set of other possible test cases. That is, it tells us something about the
presence or absence of errors over and above this specific set of input values.
Identifying the Equivalence Classes
The equivalence classes are identified by taking each input condition (usually a sentence or phrase in the specification) and partitioning it into two or more groups.
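As a sketch, assume a hypothetical input condition, “the item count is an integer from 1 to 999” (an illustrative assumption, not from the text). Partitioning it yields one valid equivalence class and two invalid ones, each represented by a single test value:

```java
// Equivalence-partitioning sketch for the hypothetical condition
// "the item count is an integer from 1 to 999":
//   valid class:   1 <= count <= 999
//   invalid class: count < 1
//   invalid class: count > 999
public class EquivalencePartitioningDemo {

    // Hypothetical routine under test.
    public static boolean isValidCount(int count) {
        return count >= 1 && count <= 999;
    }

    public static void main(String[] args) {
        // One representative test case per equivalence class.
        System.out.println(isValidCount(500));   // valid class
        System.out.println(isValidCount(0));     // invalid class (< 1)
        System.out.println(isValidCount(1000));  // invalid class (> 999)
    }
}
```

Three test cases stand in for the entire input space, which is the point of the technique.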
Q4. a) Explain Module Testing and Test Case Design
Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or
procedures in a program. That is, rather than initially testing the program as a whole, testing is
first focused on the smaller building blocks of the program. The motivations for doing this are
threefold. First, module testing is a way of managing the combined elements of testing, since
attention is focused initially on smaller units of the program. Second, module testing eases the
task of debugging (the process of pinpointing and correcting a discovered error), since, when an
error is found, it is known to exist in a particular module. Finally, module testing introduces
parallelism into the program testing process by presenting us with the opportunity to test multiple
modules simultaneously.
Test Case Design
You need two types of information when designing test cases for a module test: a specification
for the module and the module’s source code. The specification typically defines the module’s
input and output parameters and its function. Module testing is largely white-box oriented. One
reason is that as you test larger entities such as entire programs (which will be the case for
subsequent testing processes), white-box testing becomes less feasible. A second reason is that
the subsequent testing processes are oriented toward finding different types of errors (for
example, errors not necessarily associated with the program’s logic, such as the program’s
failing to meet its users’ requirements). Hence, the test-case-design procedure for a module test
is the following: Analyze the module’s logic using one or more of the white-box methods, and
then supplement these test cases by applying black-box methods to the module’s specification.
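The two-step procedure can be sketched for a hypothetical module whose specification reads “return the larger of two integers” (an illustrative assumption):

```java
// Module-test sketch: white-box cases from the logic, supplemented by
// black-box cases from the specification.
public class ModuleTestDemo {

    // Hypothetical module under test.
    public static int max(int a, int b) {
        return (a > b) ? a : b;
    }

    public static void main(String[] args) {
        // White-box: one test case per branch of the conditional.
        System.out.println(max(2, 1));   // a > b branch
        System.out.println(max(1, 2));   // a <= b branch
        // Black-box supplement from the specification: equal and
        // negative inputs, which the logic analysis alone might miss.
        System.out.println(max(3, 3));
        System.out.println(max(-5, -7));
    }
}
```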
b) Compare the Top-down Testing and Bottom-up testing
Q5. a) Write short notes on (a) Functional Testing and (b) System Testing
a) Function testing is a process of attempting
to find discrepancies between the program and the external specification.
An external specification is a precise description of the program’s behavior from the point of
view of the end user. Except when used on small programs, function testing is normally
a black-box activity. That is, you rely on the earlier module-testing process to achieve the desired
white-box logic-coverage criteria. To perform a function test, the specification is analyzed to
derive a set of test cases. The equivalence-partitioning, boundary-value analysis, cause-effect
graphing, and error-guessing methods described earlier are especially pertinent to function testing.
b) System testing is the most misunderstood and most difficult testing process. System testing is not a process of testing the functions of the complete system or program, because this would be redundant with the process of function testing. Rather, system testing has a particular purpose: to
compare the system or program to its original objectives. Given this purpose, two implications
are as follows:
1. System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives.
2. System testing, by definition, is impossible if there is no set of written, measurable objectives
for the product. In looking for discrepancies between the program and its objectives,
focus on translation errors made during the process of designing the external specification. This
makes the system test a vital test process, because in terms of the product, the number of errors
made, and the severity of those errors, this step in the development cycle usually is the most
error prone.
b) What is debugging? Explain Debugging Principles
In brief, debugging is what you do after you have executed a successful test case. Remember that
a successful test case is one that shows that a program does not do what it was designed to do.
Debugging is a two-step process that begins when you find an error as a result of a successful
test case. Step 1 is the determination of the exact nature and location of the suspected
error within the program. Step 2 consists of fixing the error.
Debugging Principles
Error-Locating Principles
Think
If You Reach an Impasse, Sleep on It
If You Reach an Impasse, Describe the Problem to Someone Else
Use Debugging Tools Only as a Second Resort
Avoid Experimentation—Use It Only as a Last Resort
Error-Repairing Techniques
Where There Is One Bug, There Is Likely to Be Another
Fix the Error, Not Just a Symptom of It
The Probability of the Fix Being Correct Is Not 100 Percent
The Probability of the Fix Being Correct Drops as the Size of the Program Increases
Beware of the Possibility That an Error Correction Creates a New Error
The Process of Error Repair Should Put You Temporarily Back into the Design Phase
Change the Source Code, Not the Object Code
Q6. a) Explain the inductive debugging process with a neat diagram
It should be obvious that careful thought will find most errors without the debugger even
going near the computer. One particular thought process is induction, where you move
from the particulars of a situation to the whole. That is, start with the clues (the symptoms of the error, possibly the results of one or more test cases) and look for relationships among them.
b) Explain the deductive debugging process with a neat diagram
The process of deduction proceeds from some general theories or premises, using the processes of elimination and refinement, to arrive at a conclusion (the location of the error).
Q7. a) Define Extreme Programming and list the practices of Extreme Programming
In the 1990s a new software development methodology termed Extreme Programming (XP)
was born. A project manager named Kent Beck is credited with conceiving the
lightweight, agile development process, first testing it while working on a project at Daimler-
Chrysler in 1996. Although several other agile software development processes have since
been created, XP is by far the most popular. In fact, numerous open-source tools exist to
support it, which verifies XP’s popularity among developers and project managers.
XP was likely developed to support the adoption of programming languages such as Java,
Visual Basic, and C#. These object-based languages allow developers to create large,
complex applications much more quickly than with traditional languages such as C, C++,
FORTRAN, or COBOL.
b) Explain (a) Extreme Unit Testing and (b) Acceptance Testing
Extreme unit testing is the primary testing approach used in Extreme Testing and has two
simple rules: All code modules must have unit tests before coding begins, and all code modules
must pass unit tests before being released into production. At first glance this may not seem so
extreme. However, the big difference between unit testing, as previously described, and XT is
that the unit tests must be defined and created before coding the module. Initially, you may
wonder why you should, or how you can, create test drivers for code you haven’t even written.
You may also think that you do not have time to create the tests because the application must
meet a deadline. These are valid concerns, but they are easily addressed. The following list
identifies some benefits associated with writing unit tests before you start coding the application.
• You gain confidence that your code will meet its specification.
• You express the end result of your code before you start coding.
• You better understand the application’s specification and requirements.
• You may initially implement simple designs and confidently refactor the code later to improve
performance without worrying about breaking the specification.
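A minimal test-first sketch, assuming a hypothetical Counter module: the unit test is written from the specification before the module exists, and the module must pass it before release:

```java
// Test-first sketch. Counter is a hypothetical module; in XP the test
// below would be written (and would fail) before Counter is coded.
public class ExtremeUnitTestDemo {

    // The unit test, defined first from the module's specification:
    // "increment() raises the counter's value by one".
    public static boolean testIncrement() {
        Counter c = new Counter();
        c.increment();
        c.increment();
        return c.value() == 2;
    }

    // The module is coded only after the test above exists, and must
    // pass it before being released into production.
    static class Counter {
        private int value = 0;
        void increment() { value++; }
        int value() { return value; }
    }

    public static void main(String[] args) {
        System.out.println(testIncrement() ? "pass" : "fail");
    }
}
```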
Acceptance testing represents the second, and an equally important, type of XT that occurs in
the XP methodology. The purpose of acceptance testing is to determine whether the application
meets other requirements such as functionality and usability. You and the customer
create the acceptance tests during the design/planning phases. Unlike the other forms of testing
discussed thus far, customers, not you or your programming partners, conduct the acceptance
tests. In this manner, customers provide the unbiased verification that the
application meets their needs. Customers create the acceptance tests from user stories. The ratio
of user stories to acceptance tests is usually one to many. That is, more than one acceptance test
may be needed for each user story.
Q8. a) Explain the challenges associated with testing Internet-based applications
You will face many challenges when designing and testing Internet-based applications due to the large number of elements you cannot control and the number of
interdependent components. Adequately testing your application requires that you make some
assumptions about the environment that your customers use and how they use the site. An
Internet-based application has many failure points that you
should consider when designing a testing approach. The following list provides some examples
of the challenges associated with testing Internet-based applications:
• Large and varied user base. The users of your Website possess
different skill sets, employ a variety of browsers, and use different operating systems or devices.
You can also expect your customers to access your Website using a wide range of connection
speeds. Not everyone has T1 or broadband Internet access.
• Business environment. If you operate an e-commerce site, then you must consider issues such
as calculating taxes, determining shipping costs, completing financial transactions, and tracking
customer profiles.
• Locales. Users may reside in other countries, in which case you will have internationalization
issues such as language translation, time zone considerations, and currency conversion.
• Testing environments. To properly test your application, you will need to duplicate the
production environment. This means you should use Web servers, application servers, and
database servers that are identical to the production equipment. For the most accurate testing
results, the network infrastructure will have to be duplicated as well. This includes routers,
switches, and firewalls.
• Security. Because your site is open to the world, you must protect it from hackers. They can
bring your Website to a grinding halt with denial-of-service (DoS) attacks or rip off your
customers’ credit card information.
Q8. b) Write short notes on (1) Business Layer Testing (2) Performance Testing
1) Business Layer Testing
Business layer testing focuses on finding errors in the business logic of your Internet
application. You will find this layer very similar to testing stand-alone applications in that you
can employ both white- and black-box techniques. You will want to create test plans and
procedures that detect errors in the application’s performance requirements, data acquisition, and
transaction processing.
2) Performance Testing
A poorly performing Internet application creates doubt about its robustness in your user’s mind
and often turns the person away. Lengthy page loads and slow transactions are typical examples.
To help achieve adequate performance levels, you need to ensure that operational specifications
are written during the requirements-gathering phase. Without written specifications or goals, you
do not know whether your application performs acceptably.
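A minimal sketch of checking an operation against a written operational goal. The 200 ms threshold and the simulated workload are assumptions for illustration; a real test would time the application's actual transaction:

```java
// Performance-check sketch: time an operation and compare the elapsed
// time against a goal taken from the (hypothetical) specification.
public class PerformanceCheckDemo {

    // Stand-in workload for a transaction whose response time is specified.
    public static long simulatedTransaction() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long goalMillis = 200;  // assumed operational specification
        long start = System.nanoTime();
        simulatedTransaction();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed: " + elapsedMillis + " ms, goal: "
                + goalMillis + " ms, "
                + (elapsedMillis <= goalMillis ? "met" : "missed"));
    }
}
```

With a written goal, the test's verdict is objective; without one, "fast enough" is only an opinion.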