It6004 Software Testing MSM
Aim
Finding defects which may get created by the programmer while developing the
software.
Gaining confidence in and providing information about the level of quality.
To prevent defects.
To make sure that the end result meets the business and user requirements.
To ensure that it satisfies the BRS that is Business Requirement Specification and
SRS that is System Requirement Specifications.
To gain the confidence of the customers by providing them a quality product.
Objective
Francis Xavier Engineering College, Tirunelveli –3
Department of Information Technology
Detailed Lesson Plan
Name of the Subject & Code: SOFTWARE TESTING & IT6004
Text Book
1. Srinivasan Desikan and Gopalaswamy Ramesh, “Software Testing – Principles and
Practices”, Pearson Education, 2006.
2. Ron Patton, “Software Testing”, Second Edition, Sams Publishing, Pearson Education,
2007.
Sl. No | Unit | Topic / Portions to be Covered | Hours Req/Planned | Cum Hrs | Books Referred
UNIT I – INTRODUCTION
25 | 3 | Ad-hoc testing, Alpha, Beta Tests, Testing OO systems, usability and accessibility testing | 1 | 25 | T2
26 | 3 | Configuration testing, Compatibility testing | 1 | 26 | T2
27 | 3 | Testing the documentation – Website testing | 1 | 27 | T2
UNIT IV – TEST MANAGEMENT
28 | 4 | People and organizational issues in testing, organization structures for testing teams | 1 | 28 | T2
29 | 4 | Testing services, Test Planning | 1 | 29 | T1
30 | 4 | Test Plan Components | 1 | 30 | T1
31 | 4 | Test Plan Attachments | 1 | 31 | T1
32 | 4 | Locating Test Items, test management | 1 | 32 | T1
33 | 4 | Test process, Reporting Test Results | 1 | 33 | T1
34 | 4 | The role of three groups in Test Planning and Policy Development | 1 | 34 | T1
35 | 4 | Introducing the test specialist | 1 | 35 | T1
36 | 4 | Skills needed by a test specialist, Building a Testing Group | 1 | 36 | T1
UNIT V – TEST AUTOMATION
37 | 5 | Software test automation | 1 | 37 | T2
38 | 5 | Skills needed for automation | 1 | 38 | T2
39 | 5 | Scope of automation | 1 | 39 | T2
40 | 5 | Design and architecture for automation | 1 | 40 | T2
41 | 5 | Requirements for a test tool | 1 | 41 | T2
42 | 5 | Challenges in automation | 1 | 42 | T2
43 | 5 | Test metrics and measurements | 1 | 43 | T2
44 | 5 | Project, progress metrics | 1 | 44 | T2
45 | 5 | Productivity metrics | 1 | 45 | T2
UNIT: 1
INTRODUCTION
PART- A
Verification vs. Validation
Verification: the process of evaluating a software system to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation: the process of evaluating a software system during or at the end of the development cycle to determine whether it satisfies the specified requirements.
Testing: a group of procedures carried out for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.
Testing vs. Debugging
Testing: a dual-purpose process, used to reveal defects and to evaluate quality attributes.
Debugging (fault localization): the process of locating the fault or defect, repairing the code, and retesting the code.
Correctness
Reliability
Usability
Integrity
Portability
Maintainability
Interoperability
6) Compare Error, Faults (Defects) and failures.(Nov 2015,2014,May 14,17, Nov 16)
A test case in a practical sense is a test-related item which contains the following information:
A set of test inputs.
Execution conditions.
Expected outputs.
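These three items can be modeled as a simple record; a minimal Python sketch (the class and field names are invented for illustration, not taken from the text):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    inputs: dict        # a set of test inputs
    conditions: list    # execution conditions
    expected: object    # expected outputs

# Example instance (values are illustrative)
tc = TestCase(inputs={"length": 5, "char": "C"},
              conditions=["search string already loaded"],
              expected="position 3")
```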
9) List the sources of Defects or Origins of defects. Or list the classification of
defect. (May/June 2009, May 17)
Education
Communication
Oversight
Transcription
Process
Quality relates to the degree to which a system, system component, or process meets
specified requirements
Software testing: the process used to reveal defects and to establish that the software has attained a specified degree of quality.
PART- B
The testing process involves two sub-processes, namely verification and validation.
The technical aspects of testing relate to the techniques, methods, measurements, and tools used to ensure that the software under test is as defect-free and reliable as possible.
Validation: the process of evaluating a software system during or at the end of the development cycle in order to determine whether it satisfies the specified requirements.
Verification: the process of evaluating a software system to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Software testing: testing is generally described as a group of procedures carried out to evaluate some aspects of a piece of software.
Purpose of the testing process: testing is a dual-purpose process, namely to reveal defects and to evaluate quality attributes of the software.
[Fig: Example processes embedded in the software development process — the Software Development process contains the Requirements Analysis, Product Specification, Design, and Testing processes; the Testing process in turn contains the Verification and Validation processes.]
Debugging: It is the process of locating the fault or defect, repairing the code and
retesting the code
2) List the different Software Testing Principles and explain.
(May/Jun 16,17,13,Apr/May 2015,Nov/Dec 2014,16)
Testing principles are important to test specialists because they provide the
foundation for developing testing knowledge and acquiring testing skills.
They also provide guidance for defining testing activities as performed in the practice of a test specialist. A principle can be defined as:
a general or fundamental law, doctrine, or assumption;
a rule or code of conduct;
the laws or facts of nature underlying the working of an artificial device.
The principles stated below relate only to execution-based testing.
Principle 1: Testing is the process of exercising a software component using a selected set of test cases, with the intent of:
Revealing defects, and
Evaluating quality.
Software engineers have made great progress in developing methods to prevent and eliminate defects. However, defects do occur, and they have a negative impact on software quality. This principle supports testing as an execution-based activity to detect defects.
The term defect, as used in this and in subsequent principles, represents any deviation in the software that has a negative impact on its functionality, performance, reliability, security, or any other of its specified quality attributes.
Principle 2: When the test objective is to detect defects, then a good test case is one that has a high probability of revealing an as-yet-undetected defect.
The goal for the test is to prove/disprove the hypothesis, that is, to determine whether the specific defect is present/absent.
A tester can justify the expenditure of resources through careful test design, so that principle 2 is supported.
Principle 3: Test results should be inspected meticulously.
Testers need to carefully inspect and interpret test results. Several erroneous and costly scenarios may occur if care is not taken.
Example: A failure may be overlooked, and the test may be granted a pass status when in reality the software has failed the test. Testing may then continue based on erroneous test results. The defect may be revealed at some later stage of testing, but in that case it may be more costly and difficult to locate and repair.
Principle-4: A test case must contain the expected output or result.
The test case is of no value unless there is an explicit statement of the expected
outputs or results.
Example:
A specific variable value must be observed, or a certain panel button must light up.
Principle-5: Test cases should be developed for both valid and invalid input
conditions.
The tester must not assume that the software under test will always be provided
with valid inputs.
Inputs may be incorrect for several reasons.
Example:
Software users may have misunderstandings, or lack information about the nature
of the inputs. They often make typographical errors even when compute / correct
information are available. Device may also provide invalid inputs due to erroneous
conditions and malfunctions.
Principle 6: The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.
Example:
If there are two components A and B and testers have found 20 defects in A and 3
defects in B, then the probability of the existence of additional defects in A is higher
than B.
Principle-7: Testing should be carried out by a group that is independent of the
development group.
Testers must realize that:
1. Developers have a great deal of pride in their work, and
2. On a practical level it may be difficult for them to conceptualize where defects could be found.
Principle-8: Tests must be repeatable and reusable
This principle calls for experiments in the testing domain to require recording of
the exact condition of the test, any special events that occurred, equipment used,
and a careful accounting of the results.
This information is invaluable to the developers when the code is returned for debugging, so that they can duplicate test conditions.
Principle-9: Testing should be planned.
A test plan should be developed for each level of testing, and the objectives for each level should be described in the associated plan. The objectives should be stated as quantitatively as possible, so that each plan has precisely specified objectives.
Principle-10: Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code has been
written.
Test planning activities should begin as early as the requirements analysis phase of the software life cycle, and continue throughout the life cycle in parallel with development activities.
Principle 11: Testing is a creative and challenging task.
Difficulties and challenges for the tester include the following:
A tester needs to have comprehensive knowledge of the software engineering discipline.
A tester needs to have knowledge, from both experience and education, of how software is specified, designed, and developed.
A tester needs to be able to manage many details.
A tester needs to have knowledge of fault types and where faults of a certain type might occur in code constructs.
A tester needs to reason like a scientist and propose hypotheses that relate to the presence of specific types of defects.
A tester needs to design and record test procedures for running the tests.
A tester needs to plan for testing and allocate the proper resources.
Origins of Defects
Defects have detrimental effects on software users, and software engineers work very hard to produce high-quality software with a low number of defects.
But even under the best of development circumstances errors are made, resulting in defects being injected into the software during the phases of the software life cycle.
Defect Sources
Lack of education
Poor communication
Oversight
Transcription
Immature process
Impact on software artifacts: errors, faults, defects, failures
[Figure: Defect Repository — the defect repository supports test planning and test case development, controlling and monitoring, defect prevention, quality evaluation, test measurement, and test process improvement.]
4) Explain in detail about Defect Classes, Defect Repository and Test Design. (Nov/Dec 15,16, Apr/May 2015)
Defects can be classified in many ways. It is important for an organization to adopt a single classification scheme and apply it to all projects. No matter which classification scheme is selected, some defects will fit into more than one class or category. Because of this problem, developers, testers, and SQA staff should try to be as consistent as possible when recording defect data.
Defect classes are classified into four types, namely:
requirement/specification defect class
design defect class
coding defect class
testing defect class
Requirement/Specification Defect Class:
Functional description defects – the overall description of what the product does, and how it should behave, is incorrect, ambiguous, and/or incomplete.
Feature defects – defects in the distinguishing characteristics of a software component or system.
Feature interaction defects – these are due to an incorrect description of how the features should interact.
Interface description defects – these occur in the description of how the target software is to interface with external software, hardware, and users.
[Figure: Defect classes and the defect repository — the requirement/specification defect classes (functional description, features, feature interaction, interface description) feed into the defect repository, where each defect is recorded with its severity and occurrences.]
Design Defect Class:
Control logic and sequence defects – these occur when the logic flow in the pseudo code is not correct.
Data defects – these are associated with incorrect design of data structures.
Module interface description defects – these include incorrect, missing, and/or inconsistent descriptions of parameter types.
Functional description defects – these include incorrect, missing, and/or unclear descriptions of design elements. These defects are best detected during a design review.
External interface description defects – these are derived from incorrect design descriptions for interfaces with COTS components, external software systems, databases, and hardware devices.
Coding Defects
Algorithmic and processing defects – adding levels of programming detail to the design, code-related algorithmic and processing defects now include unchecked overflow and underflow conditions, comparing inappropriate data types, converting one data type to another, incorrect ordering of arithmetic operators, misuse or omission of parentheses, precision loss, and incorrect use of signs.
Control logic and sequence defects – on the coding level these would include incorrect expression of case statements and incorrect iteration of loops.
Typographical defects – these are syntax errors.
Initialization defects – these occur when initialization statements are omitted or are incorrect. This may occur because of misunderstandings or lack of communication between programmers and/or between programmers and designers, or carelessness in the programming environment.
Data flow defects – these occur when the code does not follow the reasonable operational sequences that data should flow through.
Data defects – these are indicated by incorrect implementation of data structures.
Module interface defects – as in the case of module design elements, interface defects in the code may be due to using incorrect or inconsistent parameter types or an incorrect number of parameters.
Code documentation defects – when the documentation does not reflect what the program actually does, or is incomplete or ambiguous, this is called a code documentation defect.
External hardware/software interface defects – these defects arise from problems related to system calls, links to databases, input/output sequences, memory usage, interrupts and exception handling, data exchange with hardware, protocols, formats, interfaces with build files, and timing sequences.
Testing Defects
Defects are not confined to code and its related artifacts. Test plans, test cases, test harnesses, and test procedures can also contain defects. Defects in test plans are best detected using review techniques.
Test case design and test procedure defects – these would encompass incorrect, incomplete, missing, or inappropriate test cases and test procedures.
Test harness defects – a test harness, also known as an automated test framework, is mostly used by developers. A test harness provides stubs and drivers, which are small programs that interact with the software under test and stand in for missing components.
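The stub/driver idea can be sketched as follows; this is a minimal illustration, not from the textbook, and all function names (discount_service_stub, compute_price, driver) are invented for the example:

```python
def discount_service_stub(customer_id):
    # Stub: stands in for a missing/unfinished discount-lookup component
    return 0.10  # canned response

def compute_price(base, customer_id, discount_lookup):
    # The (hypothetical) software under test
    return base * (1 - discount_lookup(customer_id))

def driver():
    # Driver: feeds inputs to the unit under test and checks its output
    result = compute_price(100.0, "C42", discount_service_stub)
    assert abs(result - 90.0) < 1e-9
    return result

driver()
```

When the real discount service is ready, it replaces the stub without changing the driver.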
Software Quality
Quality relates to the degree to which a system, system component, or process
meets specified requirements.
Quality relates to the degree to which a system, system component, or process
meets customer or user needs, or expectations. In order to determine whether a
system, system component, or process is of high quality we use what are called
quality attributes. These are characteristics that reflect quality. For software
artifacts we can measure the degree to which they possess a given quality attribute
with quality metrics.
Quality metrics
A metric is a quantitative measure of the degree to which a system, system
component, or process possesses a given attribute.
There are product and process metrics. A very commonly used example of a
software product metric is software size, usually measured in lines of code (LOC).
Two examples of commonly used process metrics are costs and time required for a
given task. Quality metrics are a special kind of metric.
A quality metric is a quantitative measurement of the degree to which an item
possesses a given quality attribute.
Many different quality attributes have been described for software.
Correctness is the degree to which the system performs its intended function.
Reliability is the degree to which the software is expected to perform its required
functions under stated conditions for a stated period of time.
Usability relates to the degree of effort needed to learn, operate, prepare input, and
interpret output of the software
Integrity relates to the system’s ability to withstand both intentional and
accidental attacks
Portability relates to the ability of the software to be transferred from one
environment to another
Maintainability is the effort needed to make changes in the software
Interoperability is the effort needed to link or couple one system to another.
Another quality attribute that should be mentioned here is testability. This attribute
is of more interest to developers/testers than to clients. It can be expressed in the
following two ways:
The amount of effort needed to test the software to ensure that it performs as specified,
The ability of the software to reveal defects under testing conditions (some
software is designed in such a way that defects are well hidden during ordinary
testing conditions).
Testers must work with analysts, designers, and developers throughout the software life cycle to ensure that testability issues are addressed.
Software Quality Assurance Group
The software quality assurance (SQA) group in an organization has ties to quality
issues. The group serves as the customers’ representative and advocate. Their
responsibility is to look after the customers’ interests.
The software quality assurance (SQA) group is a team of people with the
necessary training and skills to ensure that all necessary actions are taken during the
development process so that the resulting software conforms to established technical
requirements.
Reviews
In contrast to dynamic execution-based testing techniques that can be used to
detect defects and evaluate software quality, reviews are a type of static testing technique
that can be used to evaluate the quality of a software artifact such as a requirements
document, a test plan, a design document, a code component. Reviews are also a tool that
can be applied to revealing defects in these types of documents.
Definition: A review is a group meeting whose purpose is to evaluate a software artifact
or a set of software artifacts.
UNIT II
Test case Design Strategies – Using Black Box Approach to Test Case Design – Random
Testing –Requirements based testing – Boundary Value Analysis – Equivalence Class
Partitioning – State based testing – Cause-effect graphing – Compatibility testing – user
documentation testing – domain testing – Using White Box Approach to Test design –
Test Adequacy Criteria – static testing vs. structural testing – code functional testing –
Coverage and Control Flow Graphs – Covering Code Logic – Paths – code complexity
testing – Evaluating Test Adequacy Criteria.
PART A
1. What is the need for code functional testing to design test case?
3. Draw the tester’s view of black box and white box testing.
4. List the Knowledge Sources & Methods of black box and white box testing.
Test Strategy | Knowledge Sources | Methods
In static testing, the code is not executed. Rather, the code, requirement documents, and design documents are manually checked for errors. Hence the name "static".
Structural testing, also known as glass box testing or white box testing is an
approach where the tests are derived from the knowledge of the software's structure or
internal implementation.
7. What are the factors affecting less than 100% degree of coverage?
k iterations of the loop, where k < n
n-1 iterations of the loop
n+1 iterations of the loop
9. What are the basic primes for all structured program. (May/Jun 2013)
The complexity value is usually calculated from the control flow graph (G) by the formula
V(G) = E - N + 2
where E is the number of edges in the control flow graph and N is the number of nodes.
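Applied to a small graph, the formula can be checked directly; a minimal Python sketch (the if/else graph is an invented example, not from the text):

```python
def cyclomatic_complexity(edges, nodes):
    # V(G) = E - N + 2 for a single connected control flow graph
    return len(edges) - len(nodes) + 2

# Control flow graph of a single if/else: 4 nodes, 4 edges -> V(G) = 2
nodes = ["entry", "then", "else", "exit"]
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # prints 2
```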
11. Define: Test Adequacy Criteria. (Apr/May 2015)
A software test adequacy criterion is a predicate that defines what properties of a program must be exercised to constitute a thorough test.
Positive testing is that testing which attempts to show that a given module of an
application does what it is supposed to do.
Negative testing is that testing which attempts to show that the module does not
do anything that it is not supposed to do.
13. Write the samples of cause and effect notations. May / June 15
PART- B
1. Explain in detail about the Test case design strategies.
The smart tester must understand the functionality, the input/output domain, and the environment of use of the code being tested. For certain types of testing the tester must also understand in detail how the code is constructed.
Roles of a Smart Tester:
Reveal defects
Can be used to evaluate software performance, usability & reliability.
Understand the functionality, input/output domain and the environment for use of
the code being tested
Test Case Design Strategies and Techniques
Test Strategy | Tester's View | Knowledge Sources | Techniques / Methods
Black-box testing (not code-based; sometimes called functional testing) | Inputs, Outputs | Requirements document, Specifications, User manual, Models, Domain knowledge, Defect analysis data, Intuition, Experience | Equivalence class partitioning, Boundary value analysis, Cause-effect graphing, Error guessing, Random testing, State-transition testing, Scenario-based testing
White-box testing (also called code-based or structural testing) | — | Program code, Control flow graphs, Data flow graphs, Cyclomatic complexity, High-level design, Detailed design | Control flow testing/coverage (statement coverage, branch/decision coverage, condition coverage, branch-and-condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), Data flow testing/coverage, Class testing/coverage, Mutation testing
Figure: Two basic Testing Strategies
Using the Black Box Approach to Test Case Design
The black box test strategy considers only inputs and outputs as a basis for designing test cases. Exhaustively testing all possible inputs is prohibitively expensive, even if the target software is a simple software unit. The goal for the smart tester is to use the available resources effectively by developing a set of test cases that gives the maximum yield of defects for the time and effort spent.
To help achieve this goal using the black box approach we can select from several
methods.
Random Testing
Equivalence Class Partitioning
Boundary Value Analysis
Other black box test design approaches
Cause-and-Effect Graphing
State Transition Testing
Error Guessing
Random Testing
Each software module or system has an input domain from which test input
data is selected.
If a tester randomly selects inputs from the domain, this is called random
testing.
Example: if the valid input domain for a module is all positive integers
between 1 and 100, the tester using this approach would randomly, or
unsystematically, select values from within that domain; for example, the
values 55, 24, 3 might be chosen.
Issues in Random Testing:
Are the three values adequate to show that the module meets its specification
when the tests are run?
Should additional or fewer values be used to make the most effective use of
resources?
Are there any input values, other than those selected, more likely to reveal
defects? For example, should positive integers at the beginning or end of the
domain be specifically selected as inputs?
Should any values outside the valid domain be used as test inputs? For
example, should test data include floating point values, negative values, or
integer values greater than 100?
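The "randomly select inputs from the domain" step can be sketched in Python; a minimal illustration (the 1..100 domain comes from the example above; the function names are invented):

```python
import random

def is_valid(x):
    # The module's valid input domain from the example: integers 1..100
    return isinstance(x, int) and 1 <= x <= 100

def random_test_inputs(n, seed=0):
    # Unsystematically pick n values from within the valid domain
    rng = random.Random(seed)  # seeded so a test run is repeatable
    return [rng.randint(1, 100) for _ in range(n)]

inputs = random_test_inputs(3)
assert all(is_valid(x) for x in inputs)
```

Seeding the generator makes a random test set reproducible, which matters for Principle 8 (tests must be repeatable).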
Equivalence Class Partitioning
If a tester is viewing the software-under-test as a black box with well-defined inputs and
outputs, a good approach to selecting test inputs is to use a method called equivalence
class partitioning.
Equivalence class partitioning results in a partitioning of the input domain of
the software-under- test.
It eliminates the need for exhaustive testing, which is not feasible.
It guides a tester in selecting a subset of test inputs with a high probability of
detecting a defect.
It allows a tester to cover a larger domain of inputs/outputs with a smaller
subset selected from an equivalence class. Most equivalence class partitioning
takes place for the input domain.
The tester must consider both valid and invalid equivalence classes. Invalid
classes represent erroneous or unexpected inputs.
Equivalence classes may also be selected for output conditions.
The derivation of input or outputs equivalence classes is a heuristic process.
List of conditions (guidelines for selecting equivalence classes):
‘‘If an input condition for the software-under-test is specified as a range of values,
select one valid equivalence class that covers the allowed range and two invalid
equivalence classes, one outside each end of the range.’’
For example, suppose the specification for a module says that an input, the length
of a widget in millimeters, lies in the range 1–499; then select one valid
equivalence class that includes all values from 1 to 499. Select a second
equivalence class that consists of all values less than 1, and a third equivalence
class that consists of all values greater than 499.
‘‘If an input condition for the software-under-test is specified as a number of
values, then select one valid equivalence class that includes the allowed number of
values and two invalid equivalence classes that are outside each end of the allowed
number.’’
‘‘If an input condition for the software-under-test is specified as a set of valid
input values, then select one valid equivalence class that contains all the members
of the set and one invalid equivalence class for any value outside the set.’’
‘‘If an input condition for the software-under-test is specified as a “must be”
condition, select one valid equivalence class to represent the “must be” condition
and one invalid class that does not include the “must be” condition.’’
‘‘If the input specification or any other information leads to the belief that an
element in an equivalence class is not handled in an identical way by the software-
under-test, then the class should be further partitioned into smaller equivalence
classes.’’
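The first rule, applied to the widget-length example (range 1–499), can be sketched as follows; a minimal Python illustration with an invented function name:

```python
def classify_widget_length(length):
    # Partition for the widget-length example (valid range 1..499):
    # one valid class and two invalid classes, one outside each end.
    if length < 1:
        return "invalid: below range"
    if length > 499:
        return "invalid: above range"
    return "valid"

# One representative test input per equivalence class
for value, expected in [(0, "invalid: below range"),
                        (250, "valid"),
                        (500, "invalid: above range")]:
    assert classify_widget_length(value) == expected
```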
Boundary Value Analysis
3. Explain the Other Black box test design Approaches in detail.(May/Jun 2016)
The steps in developing test cases with a cause-and-effect graph are as follows:
The tester must decompose the specification of a complex software component
into lower-level units.
For each specification unit, the tester needs to identify causes and their effects. A
cause is a distinct input condition or an equivalence class of input conditions.
An effect is an output condition or a system transformation. Putting together a
table of causes and effects helps the tester to record the necessary details.
Nodes in the graph are causes and effects.
Causes are placed on the left side of the graph and effects on the right. Logical
relationships are expressed using standard logical operators such as AND, OR, and
NOT, and are associated with arcs.
The graph may be annotated with constraints that describe combinations of causes
and/or effects that are not possible due to environmental or syntactic constraints.
1. The graph is then converted to a decision table.
2. The columns in the decision table are transformed into test cases.
Example
The following example illustrates the application of this technique. Suppose we
have a specification for a module that allows a user to perform a search for a character in
an existing string. The specification states that the user must input the length of the string
and the character to search for.
[Fig: Samples of cause-and-effect graph notations — for example, an AND relationship is drawn with a "^" joining the arcs, and a NOT arc shows that an effect occurs if a cause does not occur (e.g., effect 2 occurs if cause 1 does not occur).]
If the string length is out-of-range an error message will appear. If the character
appears in the string, its position will be reported. If the character is not in the
string the message “not found” will be output.
If C1 and C2, then E2.
If C1 and not C2, then E3.
If not C1, then E1.
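These rules can be checked mechanically; a minimal Python sketch (C1 is taken as "string length is in range" and C2 as "character appears in the string", per the example; the function name is invented):

```python
def effect(c1, c2):
    # C1: string length is in range; C2: character appears in the string
    # E1: error message; E2: position reported; E3: "not found"
    if not c1:
        return "E1"
    return "E2" if c2 else "E3"

# Each column of the resulting decision table becomes one test case
assert effect(True, True) == "E2"
assert effect(True, False) == "E3"
assert effect(False, None) == "E1"   # C2 is a "don't care" when C1 fails
```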
Based on the causes, effects, and their relationships, a cause-and-effect graph to represent
this information is shown in the following Figure.
[Figure: Cause-and-effect graph for the character search example — C1 and C2 joined by an AND node produce E2; C1 with NOT C2 produces E3; NOT C1 produces E1.]
The decision table reflects the rules and the graph, and shows the effects for all possible combinations of causes. Columns list each combination of causes, and each column represents a test case. Given n causes this could lead to a decision table with 2^n entries, thus indicating a possible need for many test cases.
Test case | Inputs (length, character) | Expected output
T1 | 5, C | 3
T2 | 5, W | Not found
T3 | 90, (any character) | Error message (length out of range)
Table: Sample test cases for the character search example
Cause/Effect | T1 | T2 | T3
C1 | 1 | 1 | 0
C2 | 1 | 0 | -
E1 | 0 | 0 | 1
E2 | 1 | 0 | 0
E3 | 0 | 1 | 0
Table: Decision table for the character search example
4. Explain in detail about the types of white box testing & additional white box test
design approaches. (Nov / Dec 16, May 17)
Data Flow Testing
Add data flow information to the control flow graph
o statements that write variables (a value is assigned or changed)
o statements that read variables
Generate test cases that exercise all write-read pairs of statements for each variable
Several variants of this technique
Example
1 PROGRAM sum ( maxint, N : INT )
2 INT result := 0 ; i := 0 ;
3 IF N < 0
4 THEN N := - N ;
5 WHILE ( i < N ) AND ( result <= maxint )
6 DO i := i + 1 ;
7 result := result + i ;
8 OD;
9 IF result <= maxint
10 THEN OUTPUT ( result )
11 ELSE OUTPUT ( “too large” )
12 END.
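To make the write-read (definition-use) pairs concrete, here is a behavior-preserving Python rendering of the PROGRAM sum pseudocode above; the function name and the test inputs are invented for illustration:

```python
def sum_prog(maxint, n):
    result = 0                            # write: result (pseudocode line 2)
    i = 0                                 # write: i (line 2)
    if n < 0:                             # read: n (line 3)
        n = -n                            # write-read pair on n (lines 3-4)
    while i < n and result <= maxint:     # reads: i, n, result (line 5)
        i = i + 1                         # write-read pair on i (line 6)
        result = result + i               # write-read pair on result (line 7)
    if result <= maxint:                  # read: result (line 9)
        return result                     # line 10
    return "too large"                    # line 11

# A small test set exercising both outcomes of the final IF
assert sum_prog(100, 4) == 10             # 1+2+3+4
assert sum_prog(5, 4) == "too large"
```

Data flow test cases would be chosen so that every such write-read pair (for result, i, and n) is exercised at least once.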
[Figure: Control flow graph for PROGRAM sum — Start, the initialization (result := 0; i := 0), the IF on N, the WHILE loop with its "No" exit branches, the final IF, and Exit.]
The goal for white box testing is to ensure that the internal components of a
program are working properly. A common focus is on structural elements such as
statements and branches. The tester develops test cases that exercise these
structural elements to determine if defects exist in the program structure.
The application scope of adequacy criteria also includes:
helping testers to select properties of a program to focus on during test;
helping testers to select a test data set for a program based on the selected
properties;
supporting testers with the development of quantitative objectives for
testing;
indicating to testers whether or not testing can be stopped for that program.
A test data set is statement, or branch, adequate if a test set T for program P
causes all the statements, or branches, to be executed respectively.
“A selection criterion can be used for selecting the test cases, or for checking whether or not a selected test suite is adequate, that is, to decide whether or not the testing can be stopped.”
Adequacy criteria - Criteria to decide if a given test suite is adequate, i.e., to give
us “enough” confidence that “most” of the defects are revealed
In practice, reduced to coverage criteria
Coverage criteria:
Requirements/specification coverage – at least one test case for each requirement; cover all statements in a formal specification.
Model coverage – state-transition coverage, use-case and scenario coverage.
Code coverage – statement coverage, data flow coverage.
Fault coverage.
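Statement coverage, the simplest code coverage criterion, can be measured with a small tracing sketch; this is a toy illustration using Python's sys.settrace, not a real coverage tool, and all names are invented:

```python
import sys

def trace_statement_coverage(func, *args):
    # Record which lines of `func` execute, using sys.settrace
    executed = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def abs_val(x):
    if x < 0:
        return -x
    return x

# A single negative input leaves the `return x` statement uncovered;
# adding a positive input makes the test set statement-adequate.
only_negative = trace_statement_coverage(abs_val, -3)
both = only_negative | trace_statement_coverage(abs_val, 3)
assert len(both) > len(only_negative)
```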
Testers can use the axioms to:
recognize both strong and weak adequacy criteria; a tester may decide to use a weak criterion, but should be aware of its weaknesses with respect to the properties described by the axioms;
focus attention on the properties that an effective test data adequacy criterion should exhibit;
select an appropriate criterion for the item under test;
stimulate thought for the development of new criteria; the axioms are the framework with which to evaluate these new criteria.
The axioms are based on the following set of assumptions:
programs are written in a structured programming language;
programs are SESE (single entry/single exit);
All input statements appear at the beginning of the program;
All output statements appear at the end of the program.
The axioms/properties described by Weyuker are the following:
Applicability Property
Nonexhaustive Applicability Property
Monotonicity Property
Inadequate Empty Set
Antiextensionality Property
General Multiple Change Property
Antidecomposition Property
Anticomposition Property
Renaming Property
Complexity Property
Both structural and functional techniques are used to ensure adequate testing.
Structural analysis mainly uncovers errors that occur during the coding of the
program.
Functional analysis mainly uncovers errors that occur while implementing
requirements and design specifications.
Functional testing is concerned with the results, not with how they are produced.
Structural testing is concerned with both the results and the process that produces
them.
Structural testing is used in all the phases where the design, requirements, and
algorithms are discussed.
The main objective of structural testing is to ensure that the functionality works
correctly and that the product is technically sound enough to deploy in the real
environment.
Functional testing is sometimes called black box testing; no knowledge of the
program's code is needed.
Structural testing is sometimes called white box testing because knowledge of the
code is essential; we need to understand code written by other developers.
Various Structural Testing are
Stress Testing
Execution Testing
Operations Testing
Recovery Testing
Compliance Testing
Security Testing
Static testing is a software testing method that involves examination of
the program's code and its associated documentation but does not require the program be
executed. Dynamic testing, the other main category of software testing methods, involves
interaction with the program while it runs. The two methods are frequently used together
to try to ensure the functionality of a program.
Static testing may be conducted manually or through the use of
various software testing tools. Specific types of static software
testing include code analysis, inspection, code reviews and
walkthroughs.
Static Testing:
2. Achieves more statement coverage than dynamic testing, and in a shorter time.
5. This type of testing is done without executing the code.
6. Gives an assessment of the code as well as the documentation.
7. A checklist is prepared for the testing process.
8. Methods include walkthroughs and code reviews.
Dynamic Testing:
2. Achieves less statement coverage because it covers a limited area of the code.
5. This type of testing is done by executing the code.
6. Exposes bottlenecks of the software system.
7. The test cases are executed.
8. Involves functional and non-functional testing.
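As a concrete illustration of static testing, the following sketch examines source code without executing it, using Python's standard `ast` module. The checklist item (flagging bare `except:` clauses) and the sample source are hypothetical.

```python
# Static testing sketch: analyse code without running it.
import ast

SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

findings = []
for node in ast.walk(ast.parse(SOURCE)):
    # checklist item: a bare `except:` clause hides real failures
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        findings.append(f"line {node.lineno}: bare 'except:' clause")

print(findings)  # ["line 5: bare 'except:' clause"]
```

The defect is reported without ever running `load`, which is exactly what distinguishes this from a dynamic test.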
2) Manual versus Automatic Testing
Executing the test cases manually without any tool support is known as
manual testing. Taking tool support and executing the test cases by using
automation tool is known as automation testing.
Following table shows the difference between manual testing and
automation testing.
Manual Testing Automation Testing
1. Time consuming and tedious: Since
1. Fast Automation runs test cases
test cases are executed by human
significantly faster than human resources.
resources so it is very slow and tedious.
2. Huge investment in human
2. Less investment in human resources:Test
resources: As test cases need to be
cases are executed by using automation tool so
executed manually so more testers are
less tester are required in automation testing.
required in manual testing.
3. Less reliable: Manual testing is less
3. More reliable: Automation tests perform
reliable as tests may not be performed
precisely same operation each time they are
with precision each time because of
run.
human errors.
4. Non-programmable: No programming 4. Programmable: Testers can program
can be done to write sophisticated tests sophisticated tests to bring out hidden
which fetch hidden information. information.
UNIT – III
LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit
Tests – The Test Harness – Running the Unit tests and Recording results – Integration
tests – Designing Integration Tests – Integration Test Planning – Scenario testing –
Defect bash elimination – System Testing – Acceptance testing – Performance testing –
Regression Testing –Internationalization testing – Ad-hoc testing – Alpha, Beta Tests –
Testing OO systems – Usability and Accessibility testing – Configuration testing –
Compatibility testing – Testing the documentation –Website testing.
PART- A
1. Define unit test and characterize the unit. (May/Jun 2012)
In unit testing, a single component is tested. A unit is the smallest possible testable
software component. A unit in a typical procedure-oriented software system can be
characterized in several ways:
It performs a single cohesive function.
It can be compiled separately.
It contains code that can fit on a single page or screen.
2. Define alpha and beta tests. (May/Jun 2016, May/Jun 2014, Nov/Dec 16)
An alpha test is conducted at the developer's site: users exercise the software and
the developers note the problems. Beta testers use the software under real-world
conditions and report defects to the developing organization.
3. What approaches are used to develop the software?
There are two major approaches to software development:
Bottom-Up
Top Down
Localization Testing
Release
12. Write the purpose of Defect Bash testing. (Apr/May 2015)
It is an ad hoc testing approach in which people performing different roles in an
organization test the product together at the same time. The testing by the participants
during defect bashing is not based on written test cases; what is to be tested is left to each
individual's decision and creativity. It is usually done when the software is close to being
ready to release.
PART – B
Stress test
Configuration test
Security test
Recovery test
Acceptance test - tests the system as a whole against customer requirements.
For tailor-made software (customized software):
acceptance tests – performed by users/customers
much in common with system test
For packaged software (market-made software):
alpha testing – on the developer's site
beta testing – on a user site
[Figure: the driver calls the unit under test and passes parameters; the unit under
test calls stubs Sub1 and Sub2, which acknowledge the calls]
Fig. The test harness
Running the Unit Tests And Recording Results
Unit tests can begin when
the units become available from the developers (an estimate of availability is
part of the test plan),
the test cases have been designed and reviewed, and
the test harness, and any other supplemental supporting tools, are available.
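The harness in the figure above can be sketched as below; the unit `monthly_report` and its collaborators are hypothetical stand-ins. The driver calls the unit and passes parameters, while the stubs simulate the units it calls and acknowledge each invocation.

```python
# Test harness sketch: driver + stubs around a unit under test.
calls = []  # records stub invocations (the "acknowledge" arrows)

def stub_fetch_sales(month):           # simulates called unit Sub1
    calls.append(("fetch_sales", month))
    return [120.0, 80.0]               # canned test data

def stub_format_total(total):          # simulates called unit Sub2
    calls.append(("format_total", total))
    return f"Total: {total:.2f}"

def monthly_report(month, fetch, fmt):
    """Unit under test: totals one month's sales and formats them."""
    return fmt(sum(fetch(month)))

# Driver: invokes the unit with the stubs and checks the result.
result = monthly_report("June", stub_fetch_sales, stub_format_total)
print(result)                          # Total: 200.00
assert calls == [("fetch_sales", "June"), ("format_total", 200.0)]
```

Because the stubs return canned data, the unit can be exercised before the real lower-level units exist.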
TABLE- Summary work sheet for unit test results
When a unit fails a test there may be several reasons for the failure. The most likely
reason for the failure is a fault in the unit implementation (the code). Other likely causes
that need to be carefully investigated by the tester are the following:
• a fault in the test case specification (the input or the output was not specified
correctly);
• a fault in test procedure execution (the test should be rerun);
• a fault in the test environment (perhaps a database was not set up properly);
• a fault in the unit design (the code correctly adheres to the design specification,
but the latter is incorrect).
The causes of the failure should be recorded in a test summary report, which is a
summary of testing activities for all the units covered by the unit test plan.
3. Explain in detail about Unit Test Planning. (Nov/Dec 2015, May/Jun 2013)
A general unit test plan should be prepared. It may be prepared as a component of
the master test plan or as a stand-alone plan.
It should be developed in conjunction with the master test plan and the project
plan for each project.
Documents that provide inputs for the unit test plan are the project plan, as well
the requirements, specification, and design documents that describe the target
units.
Components of a unit test plan are described in detail in the IEEE Standard for
Software Unit Testing.
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined. The
test planner:
identifies test risks;
describes techniques to be used for designing the test cases for the
units;
describes techniques to be used for data validation and recording of
test results;
describes the requirements for test harnesses and other software that
interfaces with the units to be tested, for example, any special
objects needed for testing object-oriented units.
Phase 2: Identify Unit Features to be tested
This phase requires information from the unit specification and detailed design
description.
The planner determines which features of each unit will be tested
for example: functions, performance requirements, states, and state transitions,
control structures, messages, and data flow patterns.
Phase 3: Add Levels of Detail to the Plan
In this phase the planner refines the plan as produced in the previous two
phases. The planner adds new details to the approach, resource, and scheduling
portions of the unit test plan.
As an example, existing test cases that can be reused for this project can be
identified in this phase.
Unit availability and integration scheduling information should be included in
the revised version of the test plan.
The planner must be sure to include a description of how test results will be
recorded.
Unit Test on Class / Objects:
Unit testing on object oriented systems
Testing levels in object oriented systems
operations associated with objects
usually not tested in isolation because of encapsulation and dimension (too
small)
classes -> unit testing
clusters of cooperating objects -> integration testing
the complete OO system -> system testing
Complete test coverage of a class involves
Testing all operations associated with an object
Setting and interrogating all object attributes
Exercising the object in all possible states
Inheritance makes it more difficult to design object class tests as the information to
be tested is not localised
Challenges/issues of Class Testing
Some of these issues are described below:
Issue 1: Adequately Testing Classes
The potentially high costs for testing each individual method in a class have
been described. These high costs will be particularly apparent when there are
many methods in a class; the numbers can reach as high as 20 to 30. Finally, a
tester might use a combination of approaches, testing some of the critical
methods on an individual basis as units, and then testing the class as a whole.
Issue 2: Observation of Object States and State Changes
Methods may not return a specific value to a caller. They may instead change
the state of an object. The state of an object is represented by a specific set of
values for its attributes or state variables.
Issue 3: Encapsulation
– Difficult to obtain a snapshot of a class without building extra methods
which display the class's state
Issue 4 :Inheritance
– Each new context of use (subclass) requires re-testing because a method
may be implemented differently (polymorphism).
– Other unaltered methods within the subclass may use the redefined
method and need to be tested
Issue 5:White box tests
– Basis path, condition, data flow and loop tests can all be applied to
individual methods within a class.
4. Explain in detail about Integration Test. (May/Jun 16,Apr/May 15, Nov 16)
Test drivers
call the target code
simulate calling units or a user
where test procedures and test cases are coded (for automatic test case
execution) or a user interface is created (for manual test case execution)
Test stubs
simulate called units
simulate modules/units/systems called by the target code
Incremental integration testing
– Object-oriented systems may not have such a hierarchical control structure
Top-down integration testing
Integration Test Planning
Planning can begin when high-level design is complete so that the system
architecture is defined.
Other documents relevant to integration test planning are the requirements
document, the user manual, and usage scenarios.
These documents contain structure charts, state charts, data dictionaries, cross-
reference tables, module interface descriptions, data flow descriptions,
messages and event descriptions, all necessary to plan integration tests.
5. Explain the different types of system testing with examples. (May/Jun 2013)
Equivalence class partitioning, boundary-value analysis and state-based testing
are valuable techniques
Document and track test coverage with a (tests to requirements) traceability
matrix
A defined and documented form should be used for recording test results from
functional and other system tests
Failures should be reported in test incident reports
Useful for developers (together with test logs)
Useful for managers for progress tracking and quality assurance
purposes
The tests should focus on the following goals.
All types or classes of legal inputs must be accepted by the software.
All classes of illegal inputs must be rejected (however, the system
should remain available).
All possible classes of system output must be exercised and
All effective system states and state transitions must be exercised and
examined.
All functions must be exercised.
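These goals can be illustrated with a small sketch, assuming a hypothetical age-validation unit: one representative input per equivalence class (plus boundary values) checks that legal classes are accepted and illegal classes are rejected while the system stays available.

```python
def validate_age(value):
    """Hypothetical unit: classifies an age input."""
    try:
        age = int(value)
    except (TypeError, ValueError):
        return (False, "not a number")   # illegal class: non-numeric
    if not 0 <= age <= 130:
        return (False, "out of range")   # illegal class: out of range
    return (True, "ok")                  # legal class

# One representative per class, plus boundaries of the legal range.
assert validate_age(25) == (True, "ok")
assert validate_age(0) == (True, "ok")        # lower boundary
assert validate_age(130) == (True, "ok")      # upper boundary
assert validate_age(-1) == (False, "out of range")
assert validate_age("abc") == (False, "not a number")
print("all input classes exercised and examined")
```

Note that the illegal inputs produce a rejection, not a crash, which is the availability requirement stated above.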
Performance Testing
Goals:
See if the software meets the performance requirements
See whether there any hardware or software factors that impact on the
system's performance
Provide valuable information to tune the system
Predict the system's future performance levels
Results of performance test should be quantified, and the corresponding
environmental conditions should be recorded
Resources usually needed
a source of transactions to drive the experiments, typically a load
generator
an experimental test bed that includes hardware and software the system
under test interacts with
instrumentation of probes that help to collect the performance data
(event logging, counting, sampling, memory allocation counters, etc.)
a set of tools to collect, store, process and interpret data from probes
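The resources above can be sketched in miniature: a load generator drives a stand-in transaction while a probe collects timing samples, so the results can be quantified. The `transaction` function is hypothetical.

```python
# Performance-probe sketch: drive transactions and collect timing data.
import time
import statistics

def transaction():
    return sum(i * i for i in range(1000))  # stand-in for the system under test

samples = []
for _ in range(200):                        # the source of transactions
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

samples.sort()
print(f"median {statistics.median(samples) * 1e6:.1f} us, "
      f"p95 {samples[int(0.95 * len(samples))] * 1e6:.1f} us")
```

Recording the median and a high percentile, rather than a single run, is what makes the result quantified and comparable across environments.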
Configuration Testing
Configuration testing allows developers/testers to evaluate system performance
and availability when hardware exchanges and reconfigurations occur.
Configuration testing also requires many resources including the multiple
hardware devices used for the tests. If a system does not have specific
requirements for device configuration changes then large-scale configuration
testing is not essential.
Several types of operations should be performed during configuration test.
Some sample operations for testers are
(i) rotate and permute the positions of devices to ensure that physical/logical
device permutations work for each device (e.g., if there are two printers A and
B, exchange their positions);
(ii) induce malfunctions in each device, to see if the system properly handles
the malfunction;
(iii) induce multiple device malfunctions to see how the system reacts. These
operations will help reveal problems (defects) relating to hardware/software
interactions when hardware exchanges and reconfigurations occur.
The Objectives of Configuration Testing
Show that all the configuration changing commands and menus work properly.
Show that all the interchangeable devices are really interchangeable, and that
they each enter the proper state for the specified conditions.
Show that the system's performance level is maintained when devices are
interchanged, or when they fail.
Security Testing
Evaluates system characteristics that relate to the availability, integrity and
confidentiality of system data and services
Computer software and data can be compromised by
criminals intent on doing damage, stealing data and information, causing
denial of service, invading privacy
errors on the part of honest developers/maintainers (and users?) who
modify, destroy, or compromise data because of misinformation,
misunderstandings, and/or lack of knowledge
Both can be perpetrated by those inside and outside an organization
Attacks can be random or systematic. Damage can be done through various means
such as:
Viruses
Trojan horses
Trap doors
illicit channels.
The effects of security breaches could be extensive and can cause:
Loss of information
Corruption of information
Misinformation
Privacy violations
Denial of service
Other Areas to focus on Security Testing: password checking, legal and illegal
entry with passwords, password expiration, encryption, browsing, trap doors,
viruses.
Usually the responsibility of a security specialist
Recovery Testing: Subject a system to losses of resources in order to determine if it can
recover properly from these losses
Especially important for transaction systems
Example: loss of a device during a transaction
Tests would determine if the system could return to a well-known state, and that
no transactions have been compromised
Systems with automated recovery are designed for this purpose
Areas to focus [Beizer] on Recovery Testing:
Restart : the ability of the system to restart properly on the last checkpoint
after a loss of a device
Switchover : the ability of the system to switch to a new processor, as a
result of a command or a detection of a faulty processor by a monitor
In each of these testing situations all transactions and processes must be carefully
examined to detect:
loss of transactions;
merging of transactions;
incorrect transactions;
an unnecessary duplication of a transaction.
A good way to expose such problems is to perform recovery testing under a stressful
load. Transaction inaccuracies and system crashes are likely to occur with the result
that defects and design flaws will be revealed.
Acceptance Test, Alpha and Beta Testing
For tailor-made software (customized software):
acceptance tests are performed by users/customers
much in common with system test
For packaged software (market-made software):
alpha testing is conducted at the developer's site
beta testing is conducted at a user site
6. Discuss the levels of testing adapted to test OO systems (Apr/May 2015)
The shift from traditional to object-oriented environment involves looking at and
reconsidering old strategies and methods for testing the software. The traditional
programming consists of procedures operating on data, while the object-oriented
paradigm focuses on objects that are instances of classes. In object-oriented (OO)
paradigm, software engineers identify and specify the objects and services provided by
each object. In addition, interaction of any two objects and constraints on each identified
object are also determined. The main advantages of OO paradigm include increased
reusability, reliability, interoperability, and extendibility.
With the adoption of OO paradigm, almost all the phases of software development have
changed in their approach, environments, and tools. Though OO paradigm helps make
the designing and development of software easier, it may pose new kind of problems.
Thus, testing of software developed using OO paradigm has to deal with the new
problems also. Note that object-oriented testing can be used to test the object-oriented
software as well as conventional software.
OO program should be tested at different levels to uncover all the errors. At the
algorithmic level, each module (or method) of every class in the program should be tested
in isolation. For this, white-box testing can be applied easily. As classes form the main
unit of object-oriented program, testing of classes is the main concern while testing an
OO program. At the class level, every class should be tested as an individual entity. At
this level, programmers who are involved in the development of class conduct the testing.
Test cases can be drawn from requirements specifications, models, and the language
used. In addition, conventional testing methods such as boundary value analysis are
used extensively. After performing the testing at the class level, cluster-level testing should be
performed. As classes are collaborated (or integrated) to form a small subsystem (also
known as cluster), testing each cluster individually is necessary. At this level, focus is on
testing the components that execute concurrently as well as on the interclass interaction.
Hence, testing at this level may be viewed as integration testing where units to be
integrated are classes. Once all the clusters in the system are tested, system level testing
begins. At this level, interaction among clusters is tested.
Usually, there is a misconception that if individual classes are well designed and have
proved to work in isolation, then there is no need to test the interactions between two or
more classes when they are integrated. However, this is not true because sometimes there
can be errors, which can be detected only through integration of classes. Also, it is
possible that if a class does not contain a bug, it may still be used in a wrong way by
another class, leading to system failure.
Developing Test Cases in Object-oriented Testing
The methods used to design test cases in OO testing are based on the conventional
methods. However, these test cases should encompass special features so that they can be
used in the object-oriented environment. The points that should be noted while
developing test cases in an object-oriented environment are listed below.
1. It should be explicitly specified with each test case which class it should test.
2. Purpose of each test case should be mentioned.
3. External conditions that should exist while conducting a test should be clearly
stated with each test case.
4. All the states of object that is to be tested should be specified.
5. Instructions to understand and conduct the test cases should be provided with each
test case.
Object-oriented Testing Methods
As many organizations are currently using or targeting to switch to the OO paradigm, the
importance of OO software testing is increasing. The methods used for performing
object-oriented testing are discussed in this section.
State-based Testing
State-based testing is used to verify whether the methods (a procedure that is executed by
an object) of a class are interacting properly with each other. This testing seeks to
exercise the transitions among the states of objects based upon the identified inputs.
For this testing, finite-state machine (FSM) or state-transition diagram representing the
possible states of the object and how state transition occurs is built. In addition, state-
based testing generates test cases, which check whether the method is able to change the
state of object as expected. If any method of the class does not change the object state as
expected, the method is said to contain errors.
To perform state-based testing, a number of steps are followed, which are listed below.
1. Derive a new class from an existing class with some additional features, which are
used to examine and set the state of the object.
2. Next, the test driver is written. This test driver contains a main program to create
an object, send messages to set the state of the object, send messages to invoke methods
of the class that is being tested and send messages to check the final state of the object.
3. Finally, stubs are written. These stubs call the untested methods.
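The steps above can be sketched as follows, assuming a hypothetical `Account` class whose state moves among "open", "frozen", and "closed"; the stubs of step 3 are omitted because this small class calls no untested methods.

```python
class Account:
    """Hypothetical class under test."""
    def __init__(self):
        self._state = "open"
    def freeze(self):
        if self._state == "open":
            self._state = "frozen"
    def close(self):
        self._state = "closed"

# Step 1: derive a class with extra features to examine/set the state.
class TestableAccount(Account):
    def get_state(self):
        return self._state
    def set_state(self, state):
        self._state = state

# Step 2: the test driver creates the object, sets its state, invokes
# the method under test, and checks the final state.
acct = TestableAccount()
assert acct.get_state() == "open"
acct.freeze()
assert acct.get_state() == "frozen"      # open -> frozen, as expected
acct.set_state("closed")
acct.freeze()                            # freeze must not leave "closed"
assert acct.get_state() == "closed"
print("state transitions behave as expected")
```

If `freeze` had changed the "closed" state, the final assertion would fail, exposing a method that does not change the object state as expected.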
Fault-based Testing
Fault-based testing is used to determine or uncover a set of plausible faults. In other
words, the focus of tester in this testing is to detect the presence of possible faults. Fault-
based testing starts by examining the analysis and design models of OO software as these
models may provide an idea of problems in the implementation of software. With the
knowledge of system under test and experience in the application domain, tester designs
test cases where each test case targets to uncover some particular faults.
The effectiveness of this testing depends highly on the tester's experience in the
application domain and with the system under test: if the tester fails to perceive real faults
in the system as plausible, testing may leave many faults undetected. However, examining
analysis and design models may enable tester to detect large number of errors with less
effort. As testing only proves the existence and not the absence of errors, this testing
approach is considered to be an effective method and hence is often used when security
or safety of a system is to be tested.
Integration testing applied for OO software targets to uncover the possible faults in both
operation calls and various types of messages (like a message sent to invoke an object).
These faults may be unexpected outputs, incorrect messages or operations, and incorrect
invocation. The faults can be recognized by determining the behavior of all operations
performed to invoke the methods of a class.
Scenario-based Testing
Scenario-based testing is used to detect errors that are caused due to incorrect
specifications and improper interactions among various segments of the software.
Incorrect interactions often lead to incorrect outputs that can cause malfunctioning of
some segments of the software. The use of scenarios in testing is a common way of
describing how a user might accomplish a task or achieve a goal within a specific context
or environment. Note that these scenarios are more context- and user specific instead of
being product-specific. Generally, the structure of a scenario includes the following
points.
1. A condition under which the scenario runs.
2. A goal to achieve, which can also be a name of the scenario.
3. A set of steps of actions.
4. An end condition at which the goal is achieved.
5. A possible set of extensions written as scenario fragments.
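A scenario with this structure can be sketched against a hypothetical shopping-cart subsystem; the condition, steps, and end condition map directly onto the test code.

```python
class Cart:
    """Hypothetical subsystem exercised by the scenario."""
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)
    def checkout(self):
        if not self.items:
            raise RuntimeError("cannot check out an empty cart")
        return f"charged {self.total():.2f}"

# Scenario: "buy two items"
cart = Cart()                        # condition: scenario starts with an empty cart
cart.add("book", 12.50)              # step 1
cart.add("pen", 1.25)                # step 2
receipt = cart.checkout()            # step 3
assert receipt == "charged 13.75"    # end condition: the goal is achieved
print(receipt)
```

An extension of this scenario (checking out an empty cart) would exercise the error path, matching point 5 in the structure above.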
Scenario-based testing combines all the classes that support a use-case (scenarios are
subset of use-cases) and executes a test case to test them. Execution of all the test cases
ensures that all methods in all the classes are executed at least once during testing.
However, testing all the objects (present in the classes combined together) collectively is
difficult. Thus, rather than testing all objects collectively, they are tested using either top-
down or bottom-up integration approach.
This testing is considered to be the most effective method as scenarios can be organized
in such a manner that the most likely scenarios are tested first with unusual or exceptional
scenarios considered later in the testing process. This satisfies a fundamental principle of
testing that most testing effort should be devoted to those paths of the system that are
mostly used.
Challenges in Testing Object-oriented Programs
Traditional testing methods are not directly applicable to OO programs as they involve
OO concepts including encapsulation, inheritance, and polymorphism. These concepts
lead to issues, which are yet to be resolved. Some of these issues are listed below.
1. Encapsulation of attributes and methods in class may create obstacles while
testing. As methods are invoked through the object of corresponding class, testing cannot
be accomplished without object. In addition, the state of object at the time of invocation
of method affects its behavior. Hence, testing depends not only on the object but on the
state of object also, which is very difficult to acquire.
2. Inheritance and polymorphism also introduce problems that are not found in
traditional software. Test cases designed for base class are not applicable to derived class
always (especially, when derived class is used in different context). Thus, most testing
methods require some kind of adaptation in order to function properly in an OO
environment.
A cause-effect graph graphically shows the connection between a given outcome and all
the factors that influence that outcome. Cause-effect graphing is a black box testing
technique. The graph is also known as an Ishikawa diagram (after its inventor, Kaoru
Ishikawa) or, because of the way it looks, a fishbone diagram. It was originally used for
hardware testing but has since been adapted to software testing, where it usually tests the
external behavior of a system. It is a testing technique that aids in choosing test cases by
logically relating causes (inputs) to effects (outputs). A "cause" stands for a distinct input
condition that brings about an internal change in the system. An "effect" represents an
output condition, a system transformation, or a state resulting from a combination of
causes.
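The cause-effect idea can be sketched by enumerating combinations of two hypothetical causes (valid user, valid password) and recording the resulting effect (login granted); each row of the resulting decision table is a candidate test case. The rule itself is an assumed example, not taken from any particular system.

```python
# Cause-effect sketch: enumerate cause combinations, record the effect.
from itertools import product

def effect_login(valid_user, valid_password):
    """Hypothetical system rule relating the causes to the effect."""
    return valid_user and valid_password

table = []
for valid_user, valid_password in product([True, False], repeat=2):
    table.append((valid_user, valid_password,
                  effect_login(valid_user, valid_password)))

for row in table:
    print(f"user={row[0]!s:5} password={row[1]!s:5} -> granted={row[2]}")
```

Only the first row grants login, so a test suite built from this table covers both the effect and every cause combination that suppresses it.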
Cyclomatic complexity
V(G) = 9 - 7 + 2 = 4
V(G) = 3 + 1 = 4 (Condition nodes are 1,2 and 3 nodes)
Basis Set - A set of possible execution paths of a program
1, 7
1, 2, 6, 1, 7
1, 2, 3, 4, 5, 2, 6, 1, 7
1, 2, 3, 5, 2, 6, 1, 7
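The computation V(G) = E - N + 2 can be checked mechanically; the edge list below is read off the basis paths above, with node numbering as in the text.

```python
# Edges of the flow graph, taken from the basis paths above.
edges = {(1, 2), (1, 7), (2, 3), (2, 6), (3, 4), (3, 5),
         (4, 5), (5, 2), (6, 1)}
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2    # E - N + 2
print(v_g)  # 4, matching 9 - 7 + 2
```

The result also matches the predicate count: three condition nodes plus one gives V(G) = 4, and the basis set accordingly contains four independent paths.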
UNIT IV
TEST MANAGEMENT
People and organizational issues in testing – Organization structures for testing teams –
testing services –Test Planning – Test Plan Components – Test Plan Attachments –
Locating Test Items –test management – test process – Reporting Test Results – The role
of three groups in Test Planning and Policy Development – Introducing the test specialist
– Skills needed by a test specialist – Building a Testing Group.
PART- A
3. Define Milestones.
Milestones are tangible events that are expected to occur at a certain time in the
Project’s lifetime. Managers use them to determine project status.
The first is the "80 hour rule" which means that no single activity or group of
activities at the lowest level of detail of the WBS to produce a single
deliverable should be more than 80 hours of effort.
The second rule of thumb is that no activity or group of activities at the lowest
level of detail of the WBS should be longer than a single reporting period.
Thus, if the project team is reporting progress monthly, then no single activity
or series of activities should be longer than one month.
The last heuristic is the "if it makes sense" rule. Applying this rule of thumb,
one uses common sense when setting the duration of a single activity
or group of activities necessary to produce a deliverable defined by the WBS.
6. What is the function of Test Item Transmittal Report or Locating Test Items?
(May/Jun 2013)
Suppose a tester is ready to run tests on the date described in the test plan. The
tester needs to be able to locate the item and have knowledge of its current status. This is
the function of the Test Item Transmittal Report. Each Test Item Transmittal Report has a
unique identifier.
7. Define test incident report.
The tester should record in a test incident report (sometimes called a problem
report) any event that occurs during the execution of the tests that is unexpected or
unexplainable and that requires a follow-up investigation.
[Figure: the three groups — project manager, testers, and programmers]
12. Write the Components of test plan. (Nov/Dec 2014)
a. Test plan identifier
b. Introduction
c. Item to be tested
d. Features to be tested
e. Approach
f. Pass/fail criteria
13. What role do user/clients play in the development of test plan for the
projects? (Nov/Dec 2015)
PART- B
All of the quality and testing plans should also be coordinated with the overall
software project plan.
A sample plan hierarchy is shown in the following Figure. At the top of the plan
hierarchy there may be a software quality assurance plan.
This plan gives an overview of all verification and validation activities for the
project, as well as details related to other quality issues such as audits, standards,
configuration control, and supplier control.
[Figure: plan hierarchy — unit test plan, integration test plan, system test plan, and
acceptance test plan under the master test plan]
Below that in the plan hierarchy there may be a master test plan that includes an
overall description of all execution-based testing for the software system.
A master verification plan for reviews inspections/walkthroughs would also fit in
at this level.
The master test plan itself may be a component of the overall project plan or exist
as a separate document.
2. Briefly Explain about the Test Plan Components. (May/Jun 2016, Nov / Dec 16)
Test plan identifier
Can serve to identify it as a configuration item
Introduction (why)
Overall description of the project, the software system being developed or
maintained, and the software items and/or features to be tested
Overall description of testing goals (objectives) and the testing approaches
to be used
References to related or supporting documents
Test items (what)
List the items to be tested: procedures, classes, modules, libraries,
components, subsystems, systems, etc.
Include references to documents where these items and their behaviors are
described (requirements and design documents, user manuals, etc.)
List also items that will not be tested
Features to be tested (what)
Features are distinguishing characteristics (functionalities, quality
attributes). They are closely related to the way we describe software in
terms of its functional and quality requirements
Identify all software features and combinations of software features to be
tested. Identify the test design specification associated with each feature
and each combination of features.
Features not to be tested (what)
Identify all features and significant combinations of features that will not be
tested and the reasons.
Approach (how)
Description of test activities, so that major testing tasks and task durations
can be identified
For each feature or combination of features, the approach that will be taken
to ensure that each is adequately tested
Tools and techniques
Expectations for test completeness (such as degree of code coverage for
white box tests)
Testing constraints, such as time and budget limitations
Stop-test criteria
Item pass-fail criteria
Given a test item and a test case, the tester must have a set of criteria to
decide whether the test has been passed or failed upon execution
The test plan should provide a general description of these criteria
Failures up to a certain severity level may be accepted
Suspension criteria and resumption requirements
Specify the criteria used to suspend all or a portion of the testing activity on
the test items associated with this plan
Specify the testing activities that must be repeated, when testing is resumed
Testing is done in cycles: test – (suspend) – fix – (resume) – test – ...
Tests may be suspended when a certain number of critical defects has been
observed
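A suspension rule such as "suspend when a certain number of critical defects has been observed" can be sketched as below; the threshold value and the defect record format are illustrative assumptions, not part of any standard.

```python
# Sketch: suspend testing when the number of critical defects observed
# reaches an agreed threshold (threshold value is illustrative).
CRITICAL_THRESHOLD = 3  # assumed value taken from the test plan

def should_suspend(defects):
    """defects: list of dicts, each with a 'severity' key."""
    critical = sum(1 for d in defects if d["severity"] == "critical")
    return critical >= CRITICAL_THRESHOLD

defects = [
    {"id": "D1", "severity": "critical"},
    {"id": "D2", "severity": "minor"},
    {"id": "D3", "severity": "critical"},
    {"id": "D4", "severity": "critical"},
]
print(should_suspend(defects))  # three critical defects -> True
```

When the rule fires, testing on the affected items is suspended and resumed only after the fixes arrive, repeating the activities named in the resumption requirements.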
Test deliverables
Test documents (possibly a subset of the ones described in the IEEE
standard)
Test harness (drivers, stubs, tools developed especially for this project, etc.)
Testing Tasks
Identify all test-related tasks, inter-task dependencies and special skills
required
Environmental needs
Software and hardware needs for the testing effort
Responsibilities
Roles and responsibilities to be fulfilled
Actual staff involved
Staffing and training needs
Description of staff and skills needed to carry out test-related
responsibilities
Scheduling
Task durations and calendar
Milestones
Schedules for use of staff and other resources (tools, laboratories, etc.)
Risks and contingencies
Risks should be (i) identified, (ii) evaluated in terms of their probability of
occurrence, (iii) prioritized, and (iv) contingency plans should be developed
that can be activated if the risk occurs
Example of a risk: some test items not delivered on time to the testers
Example of a contingency plan: flexibility in resource allocation so that
testers and equipment can operate beyond normal working hours (to
recover from delivery delays)
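The identify–evaluate–prioritize sequence above can be sketched as a simple risk-exposure calculation (exposure = probability × impact); the risks and numbers here are illustrative, not taken from any actual plan.

```python
# Sketch: prioritize test-plan risks by exposure = probability x impact
# (all risks and figures below are illustrative assumptions).
risks = [
    {"risk": "test items delivered late", "probability": 0.6, "impact": 8},
    {"risk": "test lab unavailable",      "probability": 0.2, "impact": 9},
    {"risk": "staff attrition",           "probability": 0.3, "impact": 5},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Highest exposure first: these get contingency plans prepared first.
ordered = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ordered:
    print(f'{r["risk"]}: exposure {r["exposure"]:.1f}')
```

The highest-exposure risk (late delivery of test items) is exactly the one for which the text suggests a contingency plan of flexible resource allocation.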
Testing costs (not included in the IEEE standard)
Kinds of costs:
costs of planning and designing the tests
costs of acquiring the hardware and software necessary
costs of executing the tests
costs of recording and analyzing test results
tear-down costs to restore the environment
Cost estimation may be based on:
Models (such as COCOMO for project costs) and heuristics (such as
50% of project costs)
Test tasks and WBS
Developer/tester ratio (such as 1 tester to 2 developers)
Test impact items (such as number of procedures) and test cost
drivers (or factors, such as KLOC)
Expert judgment (Delphi)
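The two simplest heuristics above can be sketched in a few lines; all the figures (project cost, head counts, the 50% fraction, the 1:2 ratio) are illustrative assumptions, not prescribed values.

```python
# Sketch of two cost-estimation heuristics from the text
# (all figures are illustrative assumptions).

# 1. Test cost as a fixed fraction of project cost (e.g. 50%).
project_cost = 200_000          # assumed total project cost
test_cost = 0.5 * project_cost
print(test_cost)                # 100000.0

# 2. Staffing from a developer/tester ratio (e.g. 1 tester : 2 developers).
developers = 10                 # assumed development head count
testers = developers // 2       # one tester for every two developers
print(testers)                  # 5
```

In practice such heuristics only seed the estimate; test tasks from the WBS and expert judgment refine it.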
Approvals
Dates and signatures of those that must approve the test plan
pending modifications to documentation
Approvals
The test plan and its attachments are test-related documents that are prepared prior
to test execution. There are additional documents related to testing that are
prepared during and after execution of the tests.
The IEEE Standard for Software Test Documentation describes the following
documents:
Test Log
Records detailed results of test execution
Contents
Test log identifier
Description
Identify the items being tested including their version/revision levels
Identify the attributes of the environments in which the testing is
conducted
Activity and event entries
Execution description
Procedure results
Environmental information
Anomalous events
Incident report identifiers
Test Incident Report
Also called a problem report
Contents:
Test incident report identifier
Summary
Summarize the incident
Identify the test items involved indicating their version/revision level
References to the appropriate test procedure specification, test case
specification, and test log
Incident description
inputs, expected results, actual results, anomalies, date and time,
procedure step, environment, attempts to repeat, testers, observers
any information useful for reproducing and repairing
Impact
If known, indicate what impact this incident will have on test plans,
test design specifications, test procedure specifications, or test case
specifications
severity rating
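The incident-report fields above can be captured as a small record; the field names below paraphrase the IEEE-style contents and are not an official API.

```python
# Sketch: an IEEE-style test incident report as a Python record
# (field names paraphrase the standard's contents; not an official API).
from dataclasses import dataclass

@dataclass
class TestIncidentReport:
    report_id: str            # unique incident report identifier
    summary: str              # items involved, versions, references
    description: str          # inputs, expected vs. actual results, anomalies
    impact: str = ""          # effect on test plans/specifications, if known
    severity: str = "medium"  # agreed severity rating

incident = TestIncidentReport(
    report_id="IR-042",
    summary="login module v1.2 fails test case TC-17",
    description="expected HTTP 200, got 500; reproducible on retry",
    severity="critical",
)
print(incident.report_id, incident.severity)
```

Keeping the description detailed enough to reproduce the failure is what makes the follow-up investigation possible.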
5. What are the skills needed by a test specialist? Explain. (Nov/Dec 2015, 16)
Given the nature of the technical and managerial responsibilities assigned to the tester,
as listed in Section 8.0, many managerial and personal skills are necessary for success in
this area of work. On the personal and managerial level, a test specialist must have:
Figure - Organization structure of a multi-product company.
The CTO's office sets the high-level technology directions for the company. A business
unit is in charge of each product that the company produces. (Sometimes the business
unit may also handle related products to form a product line.) A product business unit is
organized into a product management group and a product delivery group. The product
management group has the responsibility of merging the CTO's directions with specific
market needs to come out with a product road map. The product delivery group is
responsible for delivering the product and handles both the development and testing
functions. We use the term “project manager” to denote the head of the product delivery
group; sometimes the term “development manager” or “delivery manager” is also used.
The figure above shows a typical multi-product organization. The internal organization of
the delivery teams varies with different scenarios for single-and multi-product
companies, as we will discuss below.
Testing Team Structures for Single-Product Companies
Most product companies start with a single product. During the initial stages of evolution,
the organization does not work with many formalized processes. The product delivery
team members distribute their time among multiple tasks and often wear multiple hats.
All the engineers report into the project manager who is in charge of the entire project,
with very little distinction between the testing and development functions. Thus,
there is only a very thin line separating the “development team” and the “testing team.”
The model in the figure given below is applicable in situations where the product is in the
early stages of evolution. A project manager handles part or all of a product.
Figure - Separate groups for testing and development.
1. There is clear accountability for testing and development. The results and the
expectations from the two teams can be more clearly set and demarcated.
2. Testing provides an external perspective. Since the testing and development teams
are logically separated, there is not likely to be as much bias as in the previous
case for the testers to prove that the product works. This external perspective can
lead to uncovering more defects in the product.
3. Takes into account the different skill sets required for testing. As we have seen in
the earlier chapters, the skill sets required for testing functions are quite different
from those required for development functions. This model recognizes the
difference in skill sets and proactively addresses it.
There are certain precautions that must be taken to make this model effective. First, the
project manager should not buckle under pressure and ignore the findings and
recommendations of the testing team by releasing a product that fails the test criteria.
Second, the project manager must ensure that the development and testing teams do not
view each other as adversaries. This will erode the teamwork between the teams and
ultimately affect the timeliness and quality of the product. Third, the testing team must
participate in the project decision making and scheduling right from the start so that they
do not come in at the “crunch time” of the project and face unrealistic schedules or
expectations.
Component-Wise Testing Teams: Even if a company produces only one product, the
product is made up of a number of components that fit together as a whole. In order to
provide better accountability, each component may be developed and tested by separate
teams and all the components integrated by a single integration test team reporting to the
project manager. The structure of each of the component teams can be either a coalesced
development-testing team (as in the first model above) or a team with distinct
responsibilities for testing and development. This is because not all components are of the
same complexity, not all components are at the same level of maturity. Hence, an
informal mix-and-match of the different organization structures for the different
components, with a central authority to ensure overall quality will be more effective. The
figure given below depicts this model.
6. The CTO's team can evolve a consistent, cost-effective strategy for test
automation.
7. As the architecture and testing responsibilities are with the same person, that is the
CTO, the end-to-end objectives of architecture such as performance, load
conditions, availability requirements, and so on can be met without any ambiguity
and planned upfront.
In this model, the CTO handles only the architecture and test teams. The actual
development team working on the product code can report to a different person, who has
operational responsibilities for the code. This ensures independence to the testing team.
This group reporting to the CTO addresses issues that have organization-wide
ramifications and need proactive planning. A reason for making them report to the CTO
is that this team is likely to be cross-divisional, and cross-functional. This reporting
structure increases the credibility and authority of the team. Thus, their decisions are
likely to be accepted with fewer questions by the rest of the organization, without
“this decision does not apply to my product as it was decided by someone else”
objections.
This structure also addresses career path issues of some of the top test engineers.
Oftentimes, people perceive a plateau in the testing profession and harbor a
misconception that in order to move ahead in their career, they have to move into
development. This model, wherein a testing role reports to the CTO and has high
visibility, gives them a good target to aim for.
In order that such a team reporting to the CTO be effective,
1. It should be small in number;
2. It should be a team of equals or at most very few hierarchies;
3. It should have organization-wide representation;
4. It should have decision-making and enforcing authority and not just be a
recommending committee; and
5. It should be involved in periodic reviews to ensure that the operations are in line
with the strategy.
Single Test Team for All Products
It may be possible to carry the single-testing-team model of a single-product
company over to a multi-product company. Earlier in this section, we discussed
some criteria for organizing testing teams. Based on those criteria, a single testing
team for all the products is possible when the line between the products is
somewhat thin.
This model is similar to the case of a single-product team divided into multiple
components and each of the components being developed by an independent team. The
one major difference between the two is that in the earlier model, the project manager to
whom the testing team reports has direct delivery responsibilities, whereas in the case of
a multi-product company, since different groups/individuals have delivery
responsibilities for different products, the single testing team must necessarily report to a
different level. There are two possibilities.
1. The single testing team can form a “testing business unit” and report into this unit.
This is similar to the “testing services” model to be discussed in the next section.
2. The testing team can be made to report to the “CTO think-tank” discussed earlier.
This may make the implementation of standards and procedures somewhat easier
but may dilute the function of the CTO think-tank to be less strategic and more
operational.
Testing Teams Organized by Product
In a multi-product company, when the products are fairly independent of one another,
having a single testing team may not be very natural. Accountability, decision making,
and scheduling may all become issues with the single testing team. The most natural and
effective way to organize the teams is to assign complete responsibility of all aspects of a
product to the corresponding business unit and let the business unit head figure out how
to organize the testing and development teams. This is very similar to the multi-
component testing teams model.
Depending on the level of integration required among the products, there may be need for
a central integration testing team. This team handles all the issues pertaining to the
integration of the multiple products. Such an integration team should be cross-product
and hence ideally report into the CTO think-tank.
Separate Testing Teams for Different Phases of Testing
Testing is not a single, homogeneous activity, because:
There are different types of testing that need to be done—such as black box
testing, system testing, performance testing, integration testing,
internationalization testing, and so on.
The skill sets required for performing each of these different test types are quite
different from each other. For example, for white box testing, an intimate
knowledge of the program code and programming language are needed. For black
box testing, knowledge of external functionality is needed.
Each of these different types of tests may be carried out at different points in time.
For example, within internationalization testing, certain activities (such as
enabling testing) are carried out early in the cycle and fake language testing is
done before the product is localized.
As a result of these factors, it is common to split the testing function into different types
and phases of testing. Since the nature of the different types of tests is different, and
because the people who can ascertain or be directly concerned with the specific types of
tests are different, the people performing the different types of tests may end up reporting
into different groups.
Such an organization based on the testing types presents several advantages.
1. People with appropriate skill sets are used to perform a given type of test.
2. Defects can be detected earlier and closer to the point of injection.
3. This organization is in line with the V model and hence can lead to effective
distribution of test resources.
The challenge to watch out for is that the test responsibilities are now distributed and
hence it may seem that there is no single point of accountability for testing. The key to
address this challenge is to define objectively the metrics for each of the phases or groups
and track them to completion.
Hybrid Models
The above models are not mutually exclusive or disjoint. In practice, a
combination of these models is used, and the chosen models change from time to
time, depending on the needs of the project. For example, during the crunch time of a
project, when a product is near delivery, a multi-component team may act like a single-
component team. During debugging situations, when a problem to do with the integration
of multiple products comes up, the different product teams may work as a single team
and report to the CTO/CEO for the duration of that debugging situation. The various
organization structures presented above can be viewed as simply building blocks that can
be put together in various permutations and combinations, depending on the need of the
situation. The main aim of such hybrid organization structures should be effectiveness
without losing sight of accountability.
The role of the three groups in test planning and policy development:
Managers: task forces, policies, standards; planning; resource allocation; support for education and training; interaction with users and clients.
Developers/Testers: apply black box and white box methods; assist with test planning; test at all levels; train and mentor; participate in task forces; interact with users and clients.
Users/Clients: specify requirements clearly; support with the operational profile; participate in usability tests; participate in acceptance test planning.
UNIT V
TEST AUTOMATION
Software test automation – skill needed for automation – scope of automation – design
and architecture for automation – requirements for a test tool – challenges in automation
– Test metrics and measurements – project, progress and productivity metrics.
PART – A
1. Define: Test automation.
Developing software in order to test the software is termed test automation.
6. What are the disadvantages of first generation automation?
Scripts hold hard coded values
Test maintenance cost is maximized
Level: Level 4 – Management and measurement
Description: At this level, testing activities take place at all stages of the life
cycle, including reviews of requirements and designs. Quality criteria are agreed
for all products of an organisation (internal and external).
Releases of said software. Each new release may have many new features (i.e.,
deliverables) within it.
PART- B
1. Briefly explain about Software test Automation and Skills needed for
automation. (May/Jun 16,Nov/Dec 2015,17 )
Test Automation: Automate running of most of the test cases that are repetitive in
nature. Developing software to test the software is called test automation.
Automation saves time, as software can execute test cases faster than humans do.
Test automation can free the test engineers from mundane tasks and make them focus on more creative tasks.
Automated tests can be more reliable.
Automation helps in immediate testing.
Automation can protect an organization against attrition of test engineers.
Test automation opens up opportunities for better utilization of global resources.
Certain types of testing cannot be executed without automation.
Automation means end-to-end, not test execution alone.
Terms Used in Automation: A test case is a set of sequential steps to execute a
test operating on a set of predefined inputs to produce certain expected outputs. There
are two types of test cases:
automated (executed using automation)
manual (executed manually)
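The definition above (sequential steps operating on predefined inputs to produce expected outputs) can be sketched as a tiny automated test case; the function under test is a stand-in, not from the source.

```python
# Sketch: a test case as sequential steps on predefined inputs, checked
# against expected outputs (the unit under test is a stand-in).
def add(a, b):          # stand-in for the unit under test
    return a + b

def test_case_add():
    # Step 1: set up the predefined inputs and the expected output
    inputs = (2, 3)
    expected = 5
    # Step 2: execute the operation under test
    actual = add(*inputs)
    # Step 3: compare actual output with expected output
    assert actual == expected, f"expected {expected}, got {actual}"
    return "PASS"

print(test_case_add())  # PASS
```

A manual test case follows the same three steps; the only difference is that a person, rather than a script, performs and verifies them.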
Skills Needed for Automation
The automation of testing is broadly classified into three generations.
First generation: Record and playback
Record and playback avoids the repetitive nature of executing tests. Almost all the test
tools available in the market have the record and playback feature. A test engineer
records the sequence of actions by keyboard characters or mouse clicks and those
recorded scripts are played back later, in the same order as they were recorded. When
there is frequent change, the record and playback generation of test automation tools may
not be very effective.
Second generation: Data-driven. This method helps in developing test scripts that
generate the set of input conditions and the corresponding expected outputs. This enables
the tests to be repeated for different input and output conditions. This generation of
automation focuses on input and output conditions using the black box testing approach.
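The data-driven idea can be sketched as one script repeated over a table of input conditions and expected outputs; the function under test and the data values are illustrative stand-ins.

```python
# Sketch of second-generation (data-driven) automation: the same script
# is repeated over a table of inputs and expected outputs (all illustrative).
def discount(amount):            # stand-in for the unit under test
    return amount * 0.9 if amount >= 100 else amount

test_data = [                    # (input, expected output) pairs
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

for value, expected in test_data:
    actual = discount(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"input={value} expected={expected} actual={actual} {status}")
```

Adding a new test condition means adding a row to the data table, not writing a new script, which is precisely the black-box repeatability this generation provides.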
Third generation: Action-driven. This technique enables a layman to create automated
tests; no input and expected output conditions are required for running the tests. All
actions that appear on the application are automatically tested, based on a generic set of
controls defined for automation. The input and output conditions are automatically
generated and used. The scenarios for test execution can be dynamically changed using
the test framework available in this approach of automation. Hence, automation in the
third generation involves two major aspects: test case automation and framework design.
Tools and results modules: When a test framework performs its operations, there are a
set of tools that may be required. For example, when test cases are stored as source code
files in TCDB, they need to be extracted and compiled by build tools. In order to run the
compiled code, certain runtime tools and utilities may be required. The results that come
out of the tests must be stored for future analysis. The history of all previous test runs
should be recorded and kept as archives. These results help the test engineer compare the
current test run with previous runs. The audit of all tests that are run, and the related
information, is stored in a module of the automation framework. This can also help in
selecting test cases for regression runs.
Report generator and reports /metrics modules : Once the results of a test run are
available, the next step is to prepare the test reports and metrics. Preparing reports is
complex work and hence should be part of the automation design. The periodicity of
the reports is different, such as daily, weekly, monthly, and milestone reports. Having
reports of different levels of detail can address the needs of multiple constituents and thus
provide significant returns. The module that takes the necessary inputs and prepares a
formatted report is called a report generator. Once the results are available, the report
generator can generate metrics. All the reports and metrics that are generated are stored in
the reports/metrics module of automation for future use and analysis.
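A minimal report generator can be sketched as a pass-rate metric computed over stored results; the result record format is an assumption for illustration, not a standard layout.

```python
# Sketch: a minimal report generator computing a pass-rate metric from
# stored test results (the result record format is an assumption).
from collections import Counter

results = [
    {"test": "TC-01", "status": "PASS"},
    {"test": "TC-02", "status": "FAIL"},
    {"test": "TC-03", "status": "PASS"},
    {"test": "TC-04", "status": "PASS"},
]

counts = Counter(r["status"] for r in results)
pass_rate = 100.0 * counts["PASS"] / len(results)

print(f"Executed: {len(results)}  Passed: {counts['PASS']}  "
      f"Failed: {counts['FAIL']}  Pass rate: {pass_rate:.0f}%")
```

The same computation, run daily or weekly over the archived results, yields the periodic reports the text describes.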
3. Explain in detail about requirements for a test tool and challenges in automation.
No hard coding in the test suite.
Test case/suite expandability.
Reuse of code for different types of testing, test cases.
Automatic setup and cleanup.
Independent test cases.
Test case dependency
Insulating test cases during execution
Coding standards and directory structure.
Selective execution of test cases.
Random execution of test cases.
Parallel execution of test cases.
Looping the test cases
Grouping of test scenarios
Test case execution based on previous results.
Remote execution of test cases.
Automatic archival of test data.
Reporting scheme.
Independent of languages
Portability to different platforms.
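Two of the requirements above — no hard-coded values and selective execution of test cases — can be sketched together; the configuration keys, tags, and test-case names are illustrative assumptions.

```python
# Sketch: no hard-coded values (configuration is externalized) and
# selective execution of test cases by group/tag (all names illustrative).
CONFIG = {"host": "test-server", "timeout": 5}   # externalized, not hard-coded

TEST_SUITE = [
    {"name": "tc_login",   "tags": {"smoke", "regression"}},
    {"name": "tc_report",  "tags": {"regression"}},
    {"name": "tc_upgrade", "tags": {"install"}},
]

def select(suite, tag):
    """Selective execution: pick only the test cases carrying a tag."""
    return [tc["name"] for tc in suite if tag in tc["tags"]]

print(select(TEST_SUITE, "regression"))  # ['tc_login', 'tc_report']
```

Grouping by tag also supports random, parallel, or remote execution of the selected subset, since the selection is decoupled from how the cases are run.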
Process Model for Automation : The work on automation can go simultaneously with
product development and can overlap with multiple releases of the product. One specific
requirement for automation is that the delivery of the automated tests should be done
before the test execution phase so that the deliverables from automation effort can be
utilized for the current release of the product. Test automation life cycle activities bear a
strong similarity to product development activities. Just as product requirements need to
be gathered on the product side, automation requirements too need to be gathered.
Similarly, just as product planning, design and coding are done, so also during test
automation are automation planning, design and coding. After introducing testing
activities for both the product and automation, the above figure includes two parallel sets
of activities for development and testing separately.
Selecting a test tool: Having identified the requirements of what to automate, a related
question is the choice of an appropriate tool for automation. Selecting the test tool is an
important aspect of test automation for several reasons given below:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools take time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.
For all the above strong reasons, adequate focus needs to be provided for selecting
the right tool for automation.
Criteria for selecting test tools: Categories for classifying the criteria are
1. Meeting requirements
2. Technology expectations
3. Training/skills and
4. Management aspects.
Meeting requirements: There are plenty of tools available in the market, but they do not
meet all the requirements of a given product. Evaluating different tools for different
requirements involves significant effort, money and time. Secondly, test tools are usually
one generation behind and may not provide backward or forward go through the same
amount of evaluation for new requirements. Finally, a number of test tools cannot
differentiate between a product failure and a test failure. So the test tool must have some
intelligence to proactively find out the changes that happened in the product and
accordingly analyze the results.
Technology expectations
Extensibility and customization are important expectations of a test tool.
A good number of test tools require their libraries to be linked with the product
binaries. Test tools are not 100% cross-platform. When there is an impact analysis
of the product on the network, the first suspect is the test tool, and it is uninstalled
when such analysis starts.
Training/skills: Test tools expect the users to learn new languages/scripts and may not
use standard languages/scripts. This increases the skill requirements for automation and
lengthens the learning curve inside the organization.
Management aspects
Test tools require system upgrades.
Migration to other test tools is difficult.
Deploying a tool requires huge planning and effort.
Steps for tool selection and deployment
1. Identify your test suite requirements among the generic requirements discussed.
2. Make sure experiences discussed in previous sections are taken care of.
3. Collect the experiences of other organizations which used similar test tools.
4. Keep a checklist of questions to be asked to the vendors on cost/effort/support.
5. Identify list of tools that meet the above requirements.
6. Evaluate and shortlist one/set of tools and train all test developers on the tool.
7. Deploy the tool across the teams after training all potential users of the tool.
Challenges in Automation
The most important challenge of automation is the management commitment.
Automation takes time and effort and pays off in the long run. Management should have
patience and persist with automation. Successful test automation endeavors are
characterized by unflinching management commitment and a clear vision of goals, with
progress tracked against the long-term vision.
4. Explain in detail about the terms used in automation and scope of automation. (May/Jun 2014)
Terms used in automation : A test case is a set of sequential steps to execute a test
operating on a set of predefined inputs to produce certain expected outputs. There are two
types of test cases namely automated and manual.
A test case can be documented as a set of simple steps, or it could be an assertion
statement or a set of assertions. An example of an assertion is “Opening a file which is
already opened should fail.”
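That example assertion can be sketched as a test; the `FileSession` class below is hypothetical (a stand-in for a real file-handling API) and exists only to demonstrate asserting on an expected failure.

```python
# Sketch of the assertion "opening a file which is already opened should
# fail", using a hypothetical FileSession class (not a real library API).
class FileSession:
    def __init__(self):
        self._open = False

    def open(self):
        if self._open:
            raise RuntimeError("file is already open")
        self._open = True

def test_double_open_fails():
    session = FileSession()
    session.open()            # first open succeeds
    try:
        session.open()        # second open must fail for the test to pass
        return "FAIL"
    except RuntimeError:
        return "PASS"

print(test_double_open_fails())  # PASS
```

Note that the test passes when the expected exception is raised: an assertion-style test case verifies the stated behavior, not the absence of errors.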
Scope of Automation: The specific requirements can vary from product to product, from
situation to situation, from time to time. The following gives some generic tips for
identifying the scope of automation. Identifying the types of testing amenable to
113
automation Stress, reliability, scalability, and performance testing. These types of testing
require the test cases to be run from a large number of different machines for an extended
period of time, such as 24 hours, 48 hours, and so on. Test cases belonging to these
testing types become the first candidates for automation.
Regression tests: Regression tests are repetitive in nature. Given the repetitive nature of
the test cases, automation will save significant time and effort in the long run.
Functional tests: These kinds of tests may require a complex setup and thus require
specialized skills, which may not be available on an ongoing basis. Automating these
tests once, using the expert skills, enables less-skilled people to run them on an
ongoing basis.
Automating areas less prone to change: User interfaces normally go through
significant changes during a project. To avoid rework on automated test cases, proper
analysis has to be done to find out the areas of changes to user interfaces, and automate
only those areas that will go through relatively less change. The non-user interface
portions of the product can be automated first. This enables the non-GUI portions of the
automation to be reused even when GUI goes through changes.
Automate tests that pertain to standards: One of the tests that products may have to
undergo is compliance to standards. For example, a product providing a JDBC interface
should satisfy the standard JDBC tests. Automating for standards provides a dual
advantage. Test suites developed for standards are not only used for product testing but
can also be sold as test tools for the market. Testing for standards have certain legal
requirements. To certify the software, a test suite is developed and handed over to
different companies. This is called certification testing and requires perfectly compliant
results every time the tests are executed.
Management aspects in automation: Prior to starting automation, adequate effort has to
be spent to obtain management commitment. The automated test cases need to be
maintained till the product reaches obsolescence. Since automation involves effort over
an extended period of time, management permissions are only given in phases, part by
part. It is important to automate the critical and basic functionalities of a product first.
To achieve this, all test cases need to be prioritized as high, medium, and low, based on
customer expectations. Automation should start from the high-priority requirements and
then move on to the medium- and low-priority requirements.
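The high/medium/low ordering can be sketched as a simple priority queue for automation work; the test-case names and priorities are illustrative assumptions.

```python
# Sketch: order test cases for automation by priority, automating the
# critical/basic (high-priority) functionality first (names illustrative).
ORDER = {"high": 0, "medium": 1, "low": 2}

test_cases = [
    {"name": "tc_export",  "priority": "low"},
    {"name": "tc_login",   "priority": "high"},
    {"name": "tc_search",  "priority": "medium"},
    {"name": "tc_startup", "priority": "high"},
]

# Stable sort keeps the original order within each priority band.
queue = sorted(test_cases, key=lambda tc: ORDER[tc["priority"]])
print([tc["name"] for tc in queue])
# ['tc_login', 'tc_startup', 'tc_search', 'tc_export']
```

Automation effort then proceeds down this queue as management grants each phase of permission.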
programmer as “Get it in on time even if it isn’t tested.”
Methods for Establishing a Testing Policy
The following three methods can be used to establish a testing policy:
1. Management directive. One or more senior IT managers write the policy. They
determine what they want from testing, document that into a policy, and issue it to
the department. This is an economical and effective method to write a testing
policy; the potential disadvantage is that it is not an organizational policy, but
rather the policy of IT management.
2. Information services consensus policy. IT management convenes a group of the
more senior and respected individuals in the department to jointly develop a
policy. While senior management must have the responsibility for accepting and
issuing the policy, the development of the policy is representative of the thinking
of the whole IT department, rather than just senior management. The advantage of
this approach is that it involves the key members of the IT department. Because of
this participation, staff are encouraged to follow the policy. The disadvantage is
that it is an IT policy and not an organizational policy.
3. Users’ meeting. Key members of user management meet in conjunction with the
IT department to jointly develop a testing policy. Again, IT management has the
final responsibility for the policy, but the actual policy is developed using people
from all major areas of the organization. The advantage of this approach is that it
is a true organizational policy and involves all of those areas with an interest in
testing. The disadvantage is that it takes time to follow this approach, and a policy
might be developed that the IT department is obligated to accept because it is a
consensus policy and not the type of policy that IT itself would have written.
7. a. Discuss the different test process activities of software testing in detail.
(Apr/May 15)
Testing is a process rather than a single activity. This process starts with test planning,
then moves through designing test cases, preparing for execution, and evaluating status,
up to test closure. So, we can divide the activities within the fundamental test process
into the following basic steps:
1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities
1) Planning and Control:
Test planning has following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy.
iv. To determine the required test resources like people, test environments, PCs,
etc.
v. To schedule test analysis and design tasks, test implementation, execution and
evaluation.
vi. To determine the exit criteria, for which we need to set criteria such as
coverage criteria.
Test control has the following major tasks:
o To measure and analyze the results of reviews and testing.
o To monitor and document progress, test coverage and exit criteria.
o To provide information on testing.
o To initiate corrective actions.
o To make decisions.
2) Analysis and Design:
Test analysis and Test Design has the following major tasks:
o To review the test basis.
o To identify test conditions.
o To design the tests.
o To evaluate testability of the requirements and system.
o To design the test environment set-up and identify any required
infrastructure and tools.
3) Implementation and Execution:
During test implementation and execution, we turn the test conditions into test cases,
procedures, and other testware such as scripts for automation, and we set up the test
environment and any other test infrastructure.
Test implementation has the following major tasks:
To develop and prioritize our test cases using test design techniques, and to create
test data for those tests.
To create test suites from the test cases for efficient test execution.
To implement and verify the environment.
Test execution has the following major tasks:
o To execute test suites and individual test cases following the test
procedures.
o To re-execute the tests that previously failed in order to confirm a fix. This
is known as confirmation testing or re-testing.
o To log the outcome of the test execution and record the identities and
versions of the software under test. The test log is used for the audit trail.
o To compare actual results with expected results.
o Where there are differences between actual and expected results, to report
the discrepancies as incidents.
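The execution tasks above can be sketched as a minimal test runner (the sample test cases and log format are hypothetical, purely for illustration):

```python
# Minimal sketch: execute test cases, compare actual with expected
# results, log every outcome with the version under test, and record
# mismatches as incidents for follow-up (re-testing after a fix).
def run_suite(test_cases, version="1.0"):
    log, incidents = [], []
    for name, func, expected in test_cases:
        actual = func()
        status = "PASS" if actual == expected else "FAIL"
        log.append({"test": name, "version": version, "status": status})
        if status == "FAIL":
            incidents.append({"test": name, "expected": expected,
                              "actual": actual})
    return log, incidents

suite = [
    ("add_two_numbers", lambda: 2 + 2, 4),
    ("upper_case", lambda: "abc".upper(), "ABc"),  # wrong expectation, on purpose
]
log, incidents = run_suite(suite)
```

Here the second case fails, so it is logged and reported as an incident; once the discrepancy is resolved, re-executing it would be the confirmation test (re-test) described above.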
4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we will set the criteria for each test level
against which we will measure “enough testing”. These criteria vary from project to
project and are known as exit criteria.
Exit criteria come into the picture when:
— The maximum number of test cases has been executed with a certain pass percentage.
— The bug rate falls below a certain level.
— The deadlines have been reached.
Evaluating exit criteria has the following major tasks:
o To check the test logs against the exit criteria specified in test planning.
o To assess if more tests are needed or if the exit criteria specified should be
changed.
o To write a test summary report for stakeholders.
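A minimal sketch of such an exit-criteria check (the thresholds are illustrative; real values are fixed during test planning):

```python
# Sketch: check test logs against exit criteria set during planning.
# Thresholds (execution %, pass %, open-bug rate) are illustrative.
def exit_criteria_met(executed, total, passed, open_bug_rate,
                      min_execution=0.95, min_pass=0.90, max_bug_rate=0.02):
    execution_ratio = executed / total      # how many planned tests ran
    pass_ratio = passed / executed          # pass percentage of those run
    return (execution_ratio >= min_execution
            and pass_ratio >= min_pass
            and open_bug_rate <= max_bug_rate)

# 98% executed, ~97% of those passed, 1% open-bug rate: criteria satisfied
print(exit_criteria_met(executed=98, total=100, passed=95, open_bug_rate=0.01))
```

If the check fails, the team either runs more tests or, as noted above, revisits whether the criteria themselves should change.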
5) Test Closure activities:
Test closure activities are done when the software is delivered. Testing can also be
closed for other reasons, such as:
When all the information needed for testing has been gathered.
When a project is cancelled.
When some target is achieved.
When a maintenance release or update is done.
Test closure activities have the following major tasks:
o To check which planned deliverables are actually delivered and to ensure
that all incident reports have been resolved.
o To finalize and archive testware such as scripts, test environments, etc. for
later reuse.
o To hand over the testware to the maintenance organization, which will
give support to the software.
o To evaluate how the testing went and learn lessons for future releases and
projects.
Content should be meaningful. All the anchor text links should work properly. Images
should be placed properly, with proper sizes.
These are some of the basic standards that should be followed in web development. The
task is to validate all of them during UI testing.
3) Interface Testing: The main interfaces are:
Web server and application server interface
Application server and Database server interface.
Check if all the interactions between these servers are executed and errors are handled
properly. If the database or web server returns an error message for any query from the
application server, then the application server should catch these error messages and
display them appropriately to the users. Also check what happens if the user interrupts a
transaction in between, and what happens if the connection to the web server is reset in
between.
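A sketch of this error-handling check (the database call is a stand-in stub, not a real driver API):

```python
class DatabaseError(Exception):
    """Stand-in for the error type a real database driver would raise."""

def query_database(sql):
    # Stub database call; always fails so the demo exercises the handler.
    raise DatabaseError("connection refused")

def handle_request(sql):
    """Application-server side: catch the raw error, show a friendly message."""
    try:
        return query_database(sql)
    except DatabaseError:
        # A real server would also log the raw error for the audit trail.
        return "The service is temporarily unavailable. Please try again later."

print(handle_request("SELECT * FROM orders"))
```

The interface test then verifies that the user sees the friendly message, never the raw database error or a stack trace.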
4) Compatibility Testing:
Compatibility of your website is a very important testing aspect. The compatibility tests
to be executed include:
Browser compatibility
Operating system compatibility
Mobile browsing
Printing options
Browser compatibility:
This is often the most influential aspect of website testing. Some applications are very
dependent on browsers. Different browsers have different configurations and settings that
your web page should be compatible with. Your website code should be cross-browser
compatible. If you are using JavaScript or AJAX calls for UI functionality, or performing
security checks or validations, then place more emphasis on browser compatibility
testing of your web application. Test the web application on different browsers such as
Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, and Opera, with different
versions.
OS compatibility: Some functionality in your web application may not be compatible
with all operating systems. New technologies used in web development, such as graphic
designs and interface calls like different APIs, may not be available in all operating
systems. Hence, test your web application on different operating systems such as
Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.
Mobile browsing: Test your web pages on mobile browsers. Compatibility issues may be there
on mobile devices as well.
Printing options: If you are providing page-printing options, then make sure fonts, page
alignment, page graphics, etc., are printed properly. Pages should fit the paper size or the
size mentioned in the printing option.
5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speeds.
Web load testing: You need to test how the application behaves when many users access
or request the same page. Can the system sustain peak load times? The site should handle
many simultaneous user requests, large input data from users, simultaneous connections
to the database, heavy load on specific pages, etc.
Web stress testing: Generally, stress means stretching the system beyond its specified
limits. Web stress testing is performed by trying to break the site under stress, checking
how the system reacts to the stress and how it recovers from crashes. Stress is generally
applied to input fields, login, and sign-up areas.
In web performance testing, website functionality on different operating systems and
different hardware platforms is checked for software and hardware memory leakage
errors.
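A minimal load-test sketch (the page handler is a local stub standing in for a real page; an actual test would drive a real server with a load-testing tool):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_page_request(user_id):
    """Stub page handler: simulate server work, return a status code."""
    time.sleep(0.01)
    return 200

def load_test(num_users=50):
    """Fire num_users simultaneous requests and collect the responses."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(handle_page_request, range(num_users)))

statuses = load_test()
```

The load test passes if every simultaneous request completes successfully; a stress test would instead raise `num_users` beyond the specified limit and observe how the system fails and recovers.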
6) Security Testing:
Following are some of the test cases for web security testing:
Test by pasting an internal URL directly into the browser address bar without logging
in. Internal pages should not open.
If you are logged in using a username and password and browsing internal pages, then
try changing URL options directly. For example, if you are checking some publisher site
statistics with publisher site ID = 123, try directly changing the URL site ID parameter to
a different site ID that is not related to the logged-in user. Access should be denied; this
user must not be able to view another user's statistics.
Try some invalid inputs in input fields such as login username, password, and input text
boxes. Check the system's reaction to all invalid inputs.
Web directories or files should not be accessible directly unless they are given download
option.
Test the CAPTCHA against automated script logins.
Test whether SSL is used for security measures. If it is used, a proper message should be
displayed when the user switches from non-secure http:// pages to secure https:// pages
and vice versa.
All transactions, error messages, and security breach attempts should be logged in log
files somewhere on the web server.
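The URL-manipulation check above can be captured as a small authorization test (the site IDs, usernames, and ownership table are hypothetical, for illustration only):

```python
# Hypothetical ownership table: site ID -> owning user
SITE_OWNERS = {123: "alice", 456: "bob"}

def can_view_stats(logged_in_user, site_id):
    """Deny access unless the logged-in user owns the requested site ID."""
    return SITE_OWNERS.get(site_id) == logged_in_user

# alice owns site 123, so direct access is allowed...
assert can_view_stats("alice", 123) is True
# ...but editing the URL parameter to bob's site ID must be denied
assert can_view_stats("alice", 456) is False
```

The security test case is the second assertion: tampering with the site ID parameter in the URL must never grant access to another user's statistics.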
7. c With examples explain the following black box testing
Requirements based testing
Positive and Negative Testing
State based testing
User documentation and compatibility
maintained which will give us the overall status of the project.
Requirements Testing process:
Testing must be carried out in a timely manner.
Testing process should add value to the software life cycle, hence it needs to be effective.
Testing the system exhaustively is impossible hence the testing process needs to be
efficient as well.
Testing must provide the overall status of the project, hence it should be manageable.
Positive Testing: When the tester tests the application with a positive frame of mind, it is
known as positive testing. Testing the application with valid input and data is known as
positive testing. It is a test designed to check that the application works correctly. Here
the aim of the tester is to get the application to pass; this is sometimes called clean
testing, that is, “test to pass”.
Negative Testing: When the tester tests the application with a negative frame of mind, it
is known as negative testing. Testing the application with invalid input and data is known
as negative testing.
Example of positive testing is given below:
Consider an example where the length of the password defined in the requirements is 6
to 20 characters. Whenever we check the application by giving alphanumeric characters
of between 6 and 20 characters in the password field, it is positive testing, because we
test the application with valid data/input.
Example of negative testing is given below:
Consider a phone number field, which, as we know, does not accept alphabets and
special characters and obviously accepts only numbers. If we type alphabets and special
characters in the phone number field to check whether it accepts them or not, that is
negative testing.
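These two examples can be written as a pair of tests (a minimal sketch; the validator simply encodes the stated 6-to-20-character rule):

```python
def is_valid_password(password):
    """Encode the stated requirement: length must be 6 to 20 characters."""
    return 6 <= len(password) <= 20

# Positive test ("test to pass"): valid input, expected to be accepted
assert is_valid_password("secret99") is True

# Negative tests: invalid inputs, expected to be rejected
assert is_valid_password("abc") is False       # too short
assert is_valid_password("x" * 25) is False    # too long
```

The positive test exercises the application with valid data, while the negative tests deliberately feed it data outside the requirement to confirm it is rejected.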
State Based Testing:
State based means a change from one state to another. State based testing is useful for
generating test cases for state machines, as they have dynamic behavior (multiple states).
We can explain this using a state transition diagram, which is a graphic representation of
a state machine.
For example, we can take the behavior of a mixer grinder. The state transitions for this
will be like:
switch on -- turn towards 1 then 2 then 3 then turn backwards to 2 then 1 then off
switch on - directly turn backwards to 3 then turn towards to off then turn towards 1 then
2 then 3 then turn backwards to 2 then 1 then off
Each position represents a state of the machine. Like this we can draw the state
transition diagram. Valid test cases can be generated by:
Start from the start state
Choose a path that leads to the next state
If you encounter an invalid input in a given state, generate an error condition test case
Repeat the process till you reach the final state
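A sketch of this procedure for the mixer-grinder example (the transition table below is one simplified reading of the behavior described above):

```python
# One simplified transition model for the mixer grinder:
# each state maps to the set of states reachable with a single move.
TRANSITIONS = {
    "off": {"on"},
    "on":  {"1", "off"},
    "1":   {"2", "off"},
    "2":   {"3", "1"},
    "3":   {"2"},
}

def run_path(path):
    """Walk a sequence of states; flag any move the model does not allow."""
    for current, nxt in zip(path, path[1:]):
        if nxt not in TRANSITIONS[current]:
            return "error"  # invalid input in this state: error-condition case
    return "valid"

# Valid path from the notes: on, up through 1-2-3, back down, then off
assert run_path(["off", "on", "1", "2", "3", "2", "1", "off"]) == "valid"
# Invalid move: jumping straight from off to speed 3
assert run_path(["off", "3"]) == "error"
```

The first path is a valid test case starting from the start state and ending at the final state; the second hits an invalid input and so generates an error-condition test case, exactly as the steps above prescribe.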
Industrial / practical connectivity of the subject
There are several important trends in software testing world that will alter the landscape
that testers find themselves in today:
QUESTION BANK
PART B - (5 x 16 = 80 Marks)
11. (a) “Principles play an important role in all engineering disciplines and are usually
introduced as part of an educational background in each branch of engineering”.
List and discuss the software testing principles related execution-based testing.
[Page No 12]
(16)
OR
(b) What is a defect ? List the origins of defects and discuss the developer / tester
support for developing a defect repository. [Page No 15]
12. (a) Consider the following set of requirements for the triangle problem:
R1: If x < y + z or y < x + z or z < x + y then it is a triangle
R2: If x ≠ y and x ≠ z and y ≠ z then it is a scalene triangle
R3: If x = y or x = z or y = z then it is an isosceles triangle
R4: If x = y and y = z and z = x then it is an equilateral triangle
R5: If x > y + z or y > x + z or z > x + y then it is impossible to construct a
triangle. Now, consider the following causes and effects for the triangle problem :
Causes (inputs) :
C1 : Side “x” is less than sum of “y” and “z”
C2 : Side “y” is less than sum of “x” and “z”
C3 : Side “z” is less than sum of “x” and “y”
C4 : Side “x” is equal to side “y”
C5 : Side “x” is equal to side “z”
C6 : Side “y” is equal to side “z”
Effects:
E1 : Not a triangle
E2 : Scalene triangle
E3 : Isosceles triangle
E4 : Equilateral triangle
E5 : Impossible
What is a cause-effect graph ? Model a cause-effect graph for the above.
[Page No 70]
(16)
OR
(b) Consider the following fragment of code :
i = 0;
while (i < n – 1) do
j = i + 1;
while (j < n) do
if A [i] < A [j] then
swap (A[i], A [j]);
end do;
i = i + 1;
end do;
Identify bug (s) if any in the above program segment, modify the code if you have
identified bug (s). Construct a control flow graph and compute Cyclomatic
complexity. [Page No 72] (16)
13. (a) What is unit testing? Explain with an example the process of designing the
unit tests, running the unit tests and recording results. [Page No 51] (16)
OR
(b) What is integration testing ? Explain with examples the different types of
integration testing. [Page No 58] (16)
14. (a) What is a test plan ? List and explain the test plan components.[Page No 78]
(16)
OR
(b) Explain the role played by the managers, developers/testers, and users/clients
in testing planning and test policy development. [Page No 99] (16)
15. (a) What is software test automation ? State the major objectives of software test
automation and discuss the same. [Page No 105] (16)
OR
(b)Discuss with diagrammatic illustration the testing maturity model.
[Out of syllabus as per regulation 2013]
(16)
__________________
10. Write the types of reviews. [Page No 104]
PART B – (5 x 16 = 80 Marks)
11. (a)Write the technological developments that cause organizations to revise their
approach to testing; also write the criteria and methods involved while
establishing a testing policy. [Page No 115] (16)
Or
(b) Explain the four steps involved in developing a test strategy, and with an
example create a sample test strategy. (16)
12. (a) Compare functional and structural testing with its advantages and
disadvantages. [Page No 42] (16)
Or
(b) (i) Draw the flowchart for testing technique/tool selection process. (8)
(ii) Explain the following testing concepts : [Page No 44]
(1) Dynamic versus Static testing (4)
(2) Manual versus Static testing. (4)
13. (a) Write the importance of security testing. What are the consequences of
security breaches? Also write the various areas which have to be focused on
during security testing. [Page No 64] (16)
Or
(b) Explain the phases involved in unit test planning and how will you design
the unit test. [Page No 51] (16)
14. (a) Write the various personal, managerial and technical skills needed by a
Test specialist. [Page No 88] (16)
Or
(b) Write the essential high level items that are included during test planning;
also write the hierarchy of test plans. [Page No 78] (16)
15. (a) Explain about SCM and its activities. [Not in 2013 regulation] (16)
Or
(b) Explain the five steps in software quality metrics methodology adopted
from IEEE standard. [Page No 21] (16)
____________________
Question Paper Code : 80592
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2016
Sixth Semester
Computer Science and Engineering
IT6004– SOFTWARE TESTING
(Common to Information Technology)
(Regulations 2013)
Time : Three Hours Maximum : 100 Marks
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. Mention the objectives of software testing. [Page No 9]
2. Define Defects with example. [Page No 8]
3. Sketch the control flow graph for an ATM withdrawal system. [Page No 108]
4. Give a note on the procedure to compute cyclomatic complexity. [Page No 26]
5. List out types of system testing. [Page No 48]
6. Compare and contrast Alpha Testing and Beta Testing. [Page No 46]
7. Discuss on the role of manager in the test group. [Page No 100]
8. What are the issues in testing object oriented system? [Page No 70]
9. Mention the criteria for selecting test tool. [Page No 108]
10. Distinguish between milestone and deliverable. [Page No 104 ]
PART B – (5 x 16 = 80 Marks)
11. (a)Elaborate on the principles of software testing and summarize the tester role
in the software development Organization. [Page No 12] (16)
Or
(b) Explain in detail processing and monitoring of the defects with defect
repository. [Page No 17] (16)
12. (a) Demonstrate the various black box test cases using Equivalence class
partitioning and boundary values analysis to test a module for payroll system.
[Page No 30] (16)
Or
(b) (i) Explain the various white box techniques with suitable test cases.
[Page No 36] (8)
(ii) Discuss in detail about code coverage testing.[Page No 38] (8)
13. (a) Explain the different integration strategies for procedure & functions with
suitable diagrams. [Page No 58] (16)
Or
(b) How would you identify the Hardware and Software for configuration
testing and explain what testing techniques applied for website testing. (16)
[Page No 63]
14. (a) i. What are the skills needed by a Test specialist. [Page No 88] (8)
ii. Explain the organizational structure for testing teams in single product
companies. [Page No 89] (8)
Or
(b) i. Explain the components of test plan in detail. [Page No 80] (8)
ii. Compare and contrast the role of debugging goals and policies in testing. (8)
15. (a) Explain the design and architecture for automation and outline the
challenges. [Page No 106] (16)
Or
(b) What are the metrics and measurements? Illustrate the types of product metrics.
[Page No 113](16)