
UNIT III

SOFTWARE TESTING AND MAINTENANCE

3.1 SOFTWARE TESTING FUNDAMENTALS

Nowadays, software has grown in complexity and size. Software is developed from a Software Requirements Specification, which is always domain dependent; accordingly, every software product has a target audience. For example, banking software is different from videogame software. Therefore, when a corporate organization invests a large sum in making a software product, it must ensure that the product is acceptable to the end users or its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is a dedicated discipline of evaluating the quality of the software. Good testing is at least as difficult as good design.

With the current state of the art, we are not able to develop and deliver fault-free software, in spite of the tools and techniques we make use of during development.

Quality is not absolute; it is value to some person. With this in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against its specification. An important point to note at this juncture is that software testing is a separate discipline from Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. Software testing may be viewed as a sub-field of SQA.

Dr. Dave Gelperin and William C. Hetzel, in their classic article in Communications of the ACM (1988), classified the phases and goals of software testing as follows.

“Until 1956 was the debugging-oriented period, when testing was often associated with debugging; there was no clear difference between testing and debugging.

From 1957 to 1978 came the demonstration-oriented period, when debugging and testing were distinguished; in this period it was shown that software satisfies its requirements.

The time between 1978 and 1982 is the destruction-oriented period, where the goal was to find errors.

1983 to 1987 is classified as the evaluation-oriented period; the intention here is that during the software life cycle a product evaluation is provided and quality is measured.

From 1988 onwards it was seen as the prevention-oriented period, where tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.”

In general, software engineers distinguish software faults from software failures. In the case of a failure, the software does not do what the user expects.

A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the correctness of the semantics of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of the software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.

Thus software testing is a process of executing a program or a system with the intent of finding errors. Alternatively, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.

TESTING PRINCIPLES

A common practice in software testing is that it is performed by an independent group of testers after the functionality is developed but before it is delivered to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it until the project finishes. This is highly problematic in terms of controlling changes to the software: if faults or failures are found partway into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and the software testers need to be certain that they are testing the correct version; they will need to re-test the parts of the software where the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change partway through, in which case previously successful tests may no longer meet the requirement and will need to be respecified and redone. Clearly the possibilities for projects being delayed and running over budget are significant. It is commonly believed that the earlier a defect is found, the cheaper it is to fix. This is reasonable given the role of any given defect in contributing to, or being confused with, further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable, and therefore any testing after that point cannot be relied upon, even if there are no further actual software defects.

Before applying methods for designing effective test cases, a developer must understand the basic principles that guide the software testing process. Davis (1995) suggested a set of principles, given below.
 All tests should be traceable to customer requirements.
 Tests should be planned long before testing begins.
 The Pareto principle applies to software testing.
 Testing should begin in the small and progress towards testing in the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be done by an independent third party.

Software Testing Axioms

1. It is impossible to test a program completely.
2. Software testing is a risk-based exercise.
3. Testing cannot show that bugs don't exist.
4. The more bugs you find, the more bugs there are.
5. Not all the bugs you find will be fixed.
6. Product specifications are never final.

PURPOSE OF SOFTWARE TESTING


Regardless of its limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50% of development time is spent on testing. Testing is usually performed for the following purposes:
 To improve quality
 For verification and validation
 For software reliability estimation
Quality means conformance to the specified design requirements. The minimum requirement of quality is performing as required under the specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to write a moderately complex program correctly the first time. Finding problems and getting them fixed is the purpose of debugging in the programming phase.

Typical software quality factors are as follows.

Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In a typical business system, usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. For our testing to be fully effective, it must be geared to measuring each relevant factor, thus forcing quality to become tangible and visible.

Another important purpose of testing is verification and validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification; others include techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.

Verification: Have we built the software right? (i.e., does it match the specification?)
Verification is a quality process used to evaluate whether or not a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase. Verification can be performed during development, scale-up, or production. This is often an internal process.

Validation: Have we built the right software? (i.e., is this what the customer wants?)

Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements.
This often involves acceptance and suitability testing with external customers. The comparison between verification and validation is given in the table below.

Table 4.1 Verification and Validation

Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile, testing can serve as a statistical sampling method to gain failure data for reliability estimation.

V & V Planning and Documentation

As with other phases of software development, the testing activities need to be carefully planned and documented. As we have seen from the conventional software development life cycle, testing is the only parallel activity, and it starts as soon as the requirements analysis phase is completed. Since test activities start early in the development life cycle and cover all subsequent phases, timely attention to the planning of these activities is of paramount importance. A precise description of the various activities, responsibilities, and procedures must be clearly documented. This document is called the Software Verification and Validation Plan. We shall follow IEEE Standard 1012, where the V&V activities for a waterfall-like life cycle are given as follows.

 Concept Phase
 Requirements Phase
 Design Phase
 Implementation Phase
 Test Phase
 Installation & Checkout Phase
 Operation & Maintenance Phase

Sample contents of the Verification & Validation Plan according to IEEE Std 1012 are given below.

The test design documentation specifies, for each software feature or combination of features, the details of the test approach and identifies the associated tests. The test case documentation specifies inputs, predicted outputs, and execution conditions for each test item. The test procedure documentation specifies the sequence of actions for the execution of each test. Finally, the test report documentation provides information on the results of the testing tasks.

TESTING PRINCIPLES

Davis (1995) suggests a set of testing principles.

1. All tests should be traceable to customer requirements.

2. Tests should be planned long before testing begins. In fact, test planning can begin as soon as the requirements phase is complete.

3. The Pareto principle applies to software testing. This principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program modules.

4. Testing should begin in the small and progress towards testing in the large. Initially testing starts with module testing and is subsequently extended to integration testing.

5. Exhaustive testing is not possible; such testing can never be performed in practice.
Thus we need testing strategies, that is, criteria for selecting significant test cases. A significant test case is a test case that has a high potential to uncover the presence of an error. Thus the successful execution of a significant test case increases the confidence in the correctness of the program.
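A quick back-of-the-envelope calculation illustrates why exhaustive testing is impossible. The routine and the execution rate below are hypothetical; the point is only the order of magnitude.

```python
# Why exhaustive testing is impossible: a hypothetical routine that adds two
# 32-bit integers has 2**32 * 2**32 = 2**64 possible input pairs.
total_cases = 2 ** 64

# Even at an optimistic rate of one billion test executions per second,
# running every case would take centuries.
cases_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years = total_cases / (cases_per_second * seconds_per_year)

print(f"{total_cases} input pairs, roughly {years:.0f} years at 1e9 cases/sec")
```

This is why significant test cases, chosen for their high potential to uncover errors, matter far more than sheer volume.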

The importance of significant test cases has been discussed earlier. The next questions are how to design a test case and what the attributes of a test case are. Test case design methods must provide a mechanism that can help ensure the completeness of tests and provide the highest likelihood of uncovering errors in software. Any product that has been engineered can be tested in one of two ways.

1. Knowing the specified functions that the product has been designed to perform, tests can be conducted to demonstrate the functionality of each function and search for possible errors in it.
2. Knowing the internal workings of the product, tests can be conducted to check whether the internal operation performs according to specification and all internal components have been adequately exercised.

The first approach is called black box testing and the second is called white box testing; both are discussed in detail in subsequent sections.

Structured Approach to Testing

The technical view of the development life cycle places testing immediately prior to operation and maintenance. In this strategy, an error discovered in the later parts of the life cycle must be paid for in four different ways:

1. The cost of developing the program erroneously, which may include wrong specification and coding.
2. The system must be tested to detect the error.
3. The wrong specification and coding must be removed, and modified specification, coding, and documentation added.
4. The system must be retested.

Studies have shown that the majority of system errors (approximately 64%) occur in the design phase, and the remaining 36% occur in the coding phase.

This means that almost two-thirds of the errors are specified and coded into the program before they can be detected.

The recommended testing process is given below as a life cycle chart showing the verification activities for each phase.

At every phase, the structures produced are analyzed for internal testability and adequacy, and test data sets are based on those structures. In addition, the following should be done during design and programming:
 Determine that the structures are consistent with the structures produced during previous phases.
 Refine and redefine the test data generated earlier.
Generally, people test a program until they have confidence in it, which is a nebulous criterion. You will typically find many errors when you begin testing an individual module or collection of modules. The detected error rate drops as you continue testing and fixing bugs. Finally the error rate is low enough that you feel confident you have caught all the major problems. How you test your software depends on the situation.

SOFTWARE TEST PLANS


Large projects usually test their product in accordance with a software test plan; or, at least, they say they do. The test plan is filled with "motherhood" statements saying that each module will be thoroughly tested, with special emphasis on values just inside and outside the nominal input limits, and values clearly outside.

A test plan is a mandatory document. A good test plan goes a long way towards reducing the risks associated with software development. By identifying areas that are riskier than others, we can concentrate our testing efforts there. Historical data and bug and testing reports from similar products or previous releases will identify areas to explore. Bug reports from customers are important, but also look at bugs reported by the developers themselves. The following are the components of a test plan:
 Test Plan
 Test Case
 Test Script
 Test Scenario
 Test run
The test plan covers the following:
o Scope, objectives, and the approach to testing
o People and resources dedicated/allocated to testing
o Tools that will be used
o Dependencies and risks
o Categories of defects
o Test entry and exit criteria
o Measurements to be captured
o Reporting and communication processes
o Schedules and milestones

 A test case is a document that defines a test item and specifies a set of test inputs or data, execution conditions, and expected results. The inputs or data used by a test case should include both normal values, intended to produce a good result, and intentionally erroneous values, intended to produce an error. A test case is generally executed manually, but many test cases can be combined for automated execution.
 A test script is a step-by-step procedure for using a test case to test a specific unit of code, function, or capability.
 A test scenario is a chronological record of the details of the execution of a test script. It captures the specification, tested activities, and outcomes, and is used to identify defects.
 A test run is a series of logically related groups of test cases or conditions.
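As a sketch of these definitions, the fragment below expresses a small test case document as data. The unit under test `safe_sqrt` and the test ids are hypothetical; each case carries an input and a predicted result, including one intentionally erroneous input expected to produce an error.

```python
import math

def safe_sqrt(x):
    """Hypothetical unit under test: square root of x; raises ValueError for x < 0."""
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

# (test id, test input, predicted output) -- TC-01 and TC-02 are normal inputs
# intended to produce a good result; TC-03 is intentionally erroneous.
test_cases = [
    ("TC-01", 4.0, 2.0),
    ("TC-02", 0.0, 0.0),
    ("TC-03", -1.0, ValueError),
]

def run(cases):
    """Execute each test case against the unit under test and record pass/fail."""
    results = {}
    for tid, value, expected in cases:
        try:
            results[tid] = safe_sqrt(value) == expected
        except Exception as exc:
            # an erroneous input passes only if it raises the predicted error
            results[tid] = isinstance(expected, type) and isinstance(exc, expected)
    return results
```

Combining many such cases into one automated run, as `run()` does, is exactly the manual-to-automated transition the definition above describes.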

3.2 SOFTWARE TESTING STRATEGIES

A testing strategy is a general approach to the testing process which integrates software test case design methods into a series of steps that result in a quality software product. It is a roadmap for the software developer as well as the customer, and it provides a framework for performing software testing in an organized manner. In view of this, any testing strategy should include:
 Test planning
 Test case design
 Test execution
 Data collection and evaluation
Whenever a testing strategy is adopted, it is always sensible to take an incremental approach to system testing. Instead of integrating all the components and then performing system integration testing straight away, it is better to test the system incrementally. The software testing strategy should be flexible enough to allow the customizations that are necessary for the components of a larger system. For this reason, a template for software testing, that is, a set of steps into which we can place specific test case design methods, should be defined for the software engineering process. Different strategies may be needed for different parts of the system at different stages of the software testing process.

Flaws and deficiencies in the requirements may surface only at the implementation stage. Testing after system implementation checks conformance with the requirements and assesses the reliability of the system. It is to be noted that verification and validation encompass a wide array of activities that include:

 Formal technical reviews
 Quality and configuration audits
 Performance monitoring
 Simulation
 Feasibility study
 Documentation review
 Database review
 Algorithm analysis
 Development testing
 Qualification testing
 Installation testing

 Top-down testing: testing starts with the most abstract component and works downwards.
 Bottom-up testing: testing starts with the fundamental components and works upwards.

Whatever testing strategy is adopted, it is better to follow an incremental approach (in stages) to testing: each module is tested independently before the next module is added to the system.

Testing Levels
Granularity levels of testing:
o Unit testing: checking the behavior of single modules
o Integration testing: checking the behavior of cooperating modules
o System testing: the software behavior is compared with the requirements specification
o Acceptance testing: the software behavior is compared with the end user's requirements
o Regression testing: checking the behavior of new releases

Unit testing: Under unit testing various tests are conducted on:

 Interfaces
 Local data structures
 Boundary conditions
 Independent paths
 Error handling paths
The module interface is tested to examine whether information properly flows into and out of the module under test. Tests on the local data structure examine whether data stored temporarily maintains its integrity during all steps of an algorithm's execution.

Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised, and finally all error-handling paths are tested.

In addition to local data structure, the impact of global data on a module should be ascertained during unit testing.

Myers (1979) has given a checklist of the parameters to be examined under the various tests; for details, refer to Pressman (2007).

Selective testing of execution paths is an essential task during unit testing. Proper test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow.

The most common errors in computation are:

 Incorrect arithmetic precedence
 Mixed-mode operations
 Incorrect initialization
 Precision inaccuracy
 Incorrect symbolic representation

Test cases should also uncover errors such as:

 Comparison of different data types
 Incorrect logical termination
 Nonexistent loop termination
 Failure to exit when divergent iteration is encountered
 Improperly modified loop variables
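The loop-related error classes above are typically caught by exercising a loop at zero, one, and many iterations. The sketch below assumes a hypothetical `average()` routine as the unit under test.

```python
def average(values):
    """Hypothetical unit under test: arithmetic mean of a list of numbers."""
    if not values:                       # guard: a zero-iteration loop must not
        raise ValueError("empty input")  # fall through to a divide-by-zero
    total = 0.0                          # an incorrect initialization here (e.g. 1.0)
    for v in values:                     # or an improperly modified loop variable
        total += v                       # would be caught by the tests below
    return total / len(values)

# exercise the loop at its boundaries: one iteration, many iterations, and zero
assert average([5.0]) == 5.0
assert average([1.0, 2.0, 3.0]) == 2.0
try:
    average([])
    raised = False
except ValueError:
    raised = True
assert raised
```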

Unit test case design invariably begins after the module has been developed, reviewed, and verified for correct syntax. Since a module is not a standalone program, driver and/or stub software must be developed for each unit test.

The driver is nothing more than a 'main program' that accepts test case data, passes such data to the module to be tested, and prints the relevant results.

Stubs serve to replace modules that are subordinate to (called by) the module to be tested.

A stub is a dummy program that uses the subordinate module's interface, does minimal data manipulation, and verifies entry and return. Drivers and stubs introduce some overhead into the testing process, and they must be removed from the final product delivered to the customer.
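A minimal sketch of a driver and a stub, assuming a hypothetical `process_order()` module whose subordinate pricing module is injected so that a stub can stand in for it during the unit test:

```python
def lookup_price_stub(item_code):
    """Stub: replaces the real subordinate pricing module. It performs a
    minimal check of the entry data and returns a canned, predictable value."""
    assert isinstance(item_code, str) and item_code  # minimal entry verification
    return 10.0

def process_order(item_code, quantity, lookup_price):
    """Module under test: the subordinate module is passed in, so the stub
    can replace the real pricing module during unit testing."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return lookup_price(item_code) * quantity

def driver():
    """Driver: a 'main program' that feeds test case data to the module
    under test and prints the relevant results."""
    cases = [("A100", 3, 30.0), ("B200", 1, 10.0)]
    for item, qty, expected in cases:
        actual = process_order(item, qty, lookup_price_stub)
        print(f"{item} x{qty}: got {actual}, expected {expected}")
        assert actual == expected

driver()
```

Both `driver()` and `lookup_price_stub()` are throwaway test scaffolding; as noted above, they do not ship with the delivered product.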

Integration Testing

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. There are two overall approaches:
1. Non-incremental integration
All modules are combined in advance and the entire program is tested as a whole, so a large set of errors is typically encountered at once.

2. Incremental integration
The program is constructed and tested in small segments, so errors are easier to locate and correct.

Top-down integration [incremental approach]

Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or a breadth-first manner.

Depth-first manner
This integrates all modules on a major control path of the structure; selection of a major path depends on application-specific characteristics. In the figure given above, depth-first integration suggests that modules M1, M2, and M5 are integrated first, after selecting the left-hand path; M6 or M8 are integrated later.

Breadth-first integration

This integrates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, M2, M3, and M4 would be integrated first, then the next control level: M5, M6, and so on.

Steps involved in the integration process

1. The main control module is used as a test driver. Stubs replace all modules directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with a real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
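The stub-replacement cycle in steps 2 to 4 can be sketched as follows. The module names echo the M1/M2 labels used in the text, but the functions themselves are hypothetical.

```python
def m2_stub():
    """Stand-in for subordinate module M2 during the early integration steps."""
    return "stub-data"

def m2_real():
    """Actual module M2, integrated once its stub is replaced."""
    return "real-data"

def m1_main(m2=m2_stub):
    """Main control module M1, used as the test driver; its subordinate is
    injected so the stub can be swapped for the real module between steps."""
    return f"M1 saw: {m2()}"

# steps 1-3: conduct tests with the stub in place
assert m1_main() == "M1 saw: stub-data"
# step 4: replace the stub with the real module, then re-run the tests
assert m1_main(m2=m2_real) == "M1 saw: real-data"
```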

The top-down strategy appears simple and straightforward, but in practice logical problems can arise.

Problems involved in the integration process

A problem occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Since stubs replace low-level modules at the beginning of top-down testing, no significant data can flow upward in the program structure.

Three choices to solve the problem

1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions simulating the actual module.
3. Integrate the software from the bottom of the hierarchy upward.

The first approach causes us to lose control over the correspondence between specific tests and the incorporation of specific modules. The second approach is workable, but the stubs become more and more complex.

Bottom-up integration
Bottom-up integration testing begins construction and testing with atomic modules (modules at the lowest levels in the program structure).

Steps involved in bottom-up integration

1. Low-level modules are combined into clusters (or builds) that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
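A sketch of steps 1 to 3: two hypothetical atomic modules are combined into a cluster, and a throwaway driver coordinates test case input and output.

```python
def parse_record(line):
    """Atomic module 1: parse a 'name, score' text record."""
    name, score = line.split(",")
    return name.strip(), int(score)

def grade(score):
    """Atomic module 2: classify a numeric score."""
    return "pass" if score >= 50 else "fail"

def cluster_driver(lines):
    """Driver for the parse+grade cluster. It is removed (step 4) once the
    cluster is combined upward into the real program structure."""
    results = []
    for line in lines:
        name, score = parse_record(line)
        results.append((name, grade(score)))
    return results

result = cluster_driver(["alice, 70", "bob, 40"])
assert result == [("alice", "pass"), ("bob", "fail")]
```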

Advantages and disadvantages of the integration strategies

1. Top-down integration strategy
The advantage is testing the major control functions early. The disadvantage is the need for stubs and the testing difficulties associated with them.
2. Bottom-up integration strategy
The advantage is easy test case design; there is also no need for stubs, since modules subordinate to a given level are always available. The disadvantage is that the program as an entity does not exist until the last module is added.

Regression Testing
Regression testing is the activity that helps ensure that changes do not introduce unintended behavior or additional errors.

The regression test suite contains three different classes of test cases:

1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.

The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
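The three classes can be sketched as a selection rule over a tagged test inventory. The test names, tags, and changed-component sets below are hypothetical.

```python
# Each test is tagged with the software functions it exercises.
all_tests = {
    "t_smoke_all":   {"exercises": {"login", "search", "report"}},
    "t_login_edge":  {"exercises": {"login"}},
    "t_search_deep": {"exercises": {"search"}},
    "t_report_fmt":  {"exercises": {"report"}},
}
changed = {"search"}      # components changed in this release
affected = {"report"}     # functions judged likely to be affected by the change

def regression_suite(tests, changed, affected):
    selected = set()
    for name, meta in tests.items():
        exercises = meta["exercises"]
        if len(exercises) > 1:      # class 1: representative broad sample
            selected.add(name)
        if exercises & affected:    # class 2: likely-affected functions
            selected.add(name)
        if exercises & changed:     # class 3: changed components
            selected.add(name)
    return selected

suite = regression_suite(all_tests, changed, affected)
assert suite == {"t_smoke_all", "t_search_deep", "t_report_fmt"}
```

Note that `t_login_edge` is deliberately excluded: it matches none of the three classes, which is exactly the kind of pruning the paragraph above recommends.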

Integration test documentation

An overall plan for the integration of the software and a description of the specific tests are documented in the Test Specification, which includes:
 Scope of Testing
 Test Plan
 Interface integrity
 Functional Validity
 Performance
 Information Content
 Test procedure
 Actual Test Results
 References and Appendices

Validation Testing
Validation testing succeeds when the software functions in a manner that can be reasonably expected by the customer.

Validation test criteria


Both the test plan and the test procedures are designed to ensure that all functional requirements are met, all performance requirements are achieved, documentation is correct and "human engineered", and other requirements (e.g., transportability and maintainability) are met.

The intent of the configuration review is to ensure that all elements of the software configuration have been properly developed and are catalogued, with the detail needed to support the maintenance phase of the software life cycle.
Acceptance/Qualification Testing has the following
– Installation Testing
– Alpha Testing
– Beta Testing

Alpha and Beta Testing
The ALPHA TEST is conducted at the developer’s site by the customer. Alpha tests are conducted in a controlled
environment.
The BETA TEST is conducted at one or more customer sites by the end user(s) of the software. The developer is generally
not present. This is a ‘live’ application of the software in an environment that cannot be controlled by the developer.

System Testing
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. All of these tests work to verify that all system elements have been properly integrated and perform their allocated functions.
System testing has the following
– Recovery testing
– Security testing
– Stress testing
– Performance Testing
Recovery Testing
This is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If
recovery requires human intervention, the mean time to repair is evaluated.

Security Testing
Any computer based system that manages sensitive information or causes actions that improperly harm (or benefit)
individuals is a target for improper or illegal penetration. Security testing attempts to verify that protection mechanisms built
into a system will in fact protect it from improper penetration.

Stress Testing
Stress tests are designed to confront program functions with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: (1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause excessive hunting for disk-resident data may be created; or (5) test cases that may cause thrashing in a virtual operating system may be designed. In essence, the tester attempts to break the program.
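A minimal sketch of a volume-oriented stress test, corresponding to example (2) above: the input volume is raised an order of magnitude past the nominal load. The batch handler and the time budget are hypothetical.

```python
import time

def handle_batch(records):
    """Hypothetical unit under test: a batch handler over text records."""
    return sum(len(r) for r in records)

nominal_load = 1_000
stress_load = nominal_load * 10          # an order of magnitude above nominal

records = ["x" * 100] * stress_load
start = time.perf_counter()
total = handle_batch(records)
elapsed = time.perf_counter() - start

assert total == stress_load * 100        # correctness must survive the load
assert elapsed < 2.0                     # generous, arbitrary budget for the sketch
```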

Any test can be characterized along two dimensions: the primary intent or goal of the test, that is, the type of bug the test is trying to find; and the scope of the software being tested, whether a portion of the software, a subsystem, or the entire system.

Test Architecture

3.3 Testing Strategies - BLACK BOX TESTING / Partition Testing

Black box and white box are test design methods. Black box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure; it is usually described as focusing on testing functional requirements.

Synonyms for black box include: behavioral, functional, opaque-box, and closed-box.

White box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data.
Synonyms for white box include: structural, glass-box, and clear-box.

While black box and white box are terms still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method; one has to use a mixture of different methods so that testing isn't hindered by the limitations of any particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they are implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate against.

Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not
an alternative to white box testing. This type of testing attempts to find errors in the following categories.

1. incorrect or missing functions


2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function’s validity tested?


2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing tends to be applied during later
stages. Test cases should be derived which

1. Reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific
test at hand.

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence
partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It
is based on an evaluation of equivalence classes for an input condition. An equivalence class may be defined according to the
following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
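These guidelines can be illustrated with a small sketch. The input field, its range, and the helper names below are hypothetical, chosen only to show how one representative value per equivalence class exercises the classes:

```python
# Equivalence partitioning for a hypothetical input field:
# "age" must be an integer in the range 18..60 (guideline 1: a range),
# so one valid and two invalid equivalence classes are defined.

def classify_age(age):
    """Return the equivalence class an input value falls into."""
    if age < 18:
        return "invalid_below"   # invalid class: below the range
    if age > 60:
        return "invalid_above"   # invalid class: above the range
    return "valid"               # valid class: within the range

# One representative test value per equivalence class is enough.
representatives = {"invalid_below": 10, "valid": 35, "invalid_above": 75}

for expected_class, value in representatives.items():
    assert classify_age(value) == expected_class
```

Any other value from the same class (say, 5 instead of 10) would, by the equivalence hypothesis, reveal the same class of errors.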

Boundary Value Analysis (BVA)


This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it
selects test cases at the edges of the class. Rather than focusing on input conditions solely, BVA derives test cases from the
output domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include the values a and b themselves, as well as values just above
and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum
numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structures at its
boundary.
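As a sketch (the range 18..60 and the helper names are assumptions made for illustration), guideline 1 yields the following boundary values:

```python
# Boundary value analysis for a hypothetical input range bounded by
# a = 18 and b = 60: test a, b, and values just above and just below each.
a, b = 18, 60

def bva_values(low, high):
    """Boundary test values for an integer range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def in_range(x):
    """Condition under test."""
    return a <= x <= b

cases = bva_values(a, b)                 # [17, 18, 19, 59, 60, 61]
expected = [False, True, True, True, True, False]
assert [in_range(x) for x in cases] == expected
```

The values 17 and 61 catch the classic off-by-one faults (`<` written instead of `<=`) that mid-range values from equivalence partitioning would miss.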

Cause Effect Graphing Techniques


Cause effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions.
There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause effect graph is developed
3. The graph is converted to a decision table.
4. Decision tables rules are converted to test cases.
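The four steps can be sketched in miniature. The causes, effects, and the login logic below are hypothetical:

```python
# A minimal decision-table sketch for cause-effect graphing.
# Causes (inputs) and effects (actions) are assumed for illustration:
#   c1 = "username supplied", c2 = "password correct"
#   effects = "log in" / "show error"
decision_table = [
    # (c1,    c2,    expected effect)
    (True,  True,  "log in"),
    (True,  False, "show error"),
    (False, True,  "show error"),
    (False, False, "show error"),
]

def login(username_supplied, password_correct):
    """System under test (a stub) implementing the specified logic."""
    if username_supplied and password_correct:
        return "log in"
    return "show error"

# Step 4: each decision-table rule becomes one test case.
for c1, c2, effect in decision_table:
    assert login(c1, c2) == effect
```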

Unit, Component and Integration Testing


The definitions of unit, component, and integration testing are recursive.
Unit: The smallest compilable component. A unit typically is the work of one programmer (at least in principle). It does not
include any called sub-components (for procedural languages) or communicating components in general.

Unit testing
In unit testing, the called components (or communicating components) are replaced with stubs, simulators or trusted
components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
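A minimal sketch of testing a unit in isolation (the unit, the collaborator, and every name below are hypothetical):

```python
# The called component (a hypothetical RateService) is replaced with a
# stub so the unit under test (convert) can be exercised before the real
# service exists.

def convert(amount, currency, rate_service):
    """Unit under test: convert an amount using a collaborator's rate."""
    return round(amount * rate_service.rate_for(currency), 2)

class StubRateService:
    """Stub standing in for the real, not-yet-developed component."""
    def rate_for(self, currency):
        return {"EUR": 0.9, "GBP": 0.8}[currency]   # canned answers

assert convert(100, "EUR", StubRateService()) == 90.0
assert convert(100, "GBP", StubRateService()) == 80.0
```

The advantage and disadvantage noted above both show here: `convert` can be tested today, but the tests must be re-run once the real rate service replaces the stub.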

Component
A unit is a component. The integration of one or more components is a component.

Component testing is the same as unit testing except that all stubs and drivers are replaced with the real thing.

Two components (actually one or more) are said to be integrated when they have been compiled, linked, and loaded
together, and have successfully passed the integration tests at the interface between them.

Thus components A and B are integrated to create a new and larger component (A,B). Note that this does not conflict with
the idea of incremental integration – it just means that A is a big component and B, the component added, is a small one.

Integration testing: Carrying out integration tests.


Integration tests for procedural languages:

This is easily generalized for OO languages by using the equivalent constructs for message passing. In the following, the
word “call” is to be understood in the most general sense of a data flow and is not restricted to just formal subroutine calls
and returns – for example, it includes passage of data through global data structures and/or the use of pointers.

Let A and B be two components in which A calls B.


Let Ta be the component-level tests of A
Let Tb be the component-level tests of B
Let Tab be the tests in A’s suite that cause A to call B
Let Tbsa be the tests in B’s suite that can be sensitized through A – the inputs are to A, not B
Tab + Tbsa = the integration test suite (+ denotes union)
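As a sketch with hypothetical test-case names, the composition above is a simple set union:

```python
# Composing the integration test suite from the component-level suites,
# using sets of hypothetical test-case names.
Ta   = {"a1", "a2", "a3"}      # component-level tests of A
Tb   = {"b1", "b2"}            # component-level tests of B
Tab  = {"a2", "a3"}            # tests in A's suite that cause A to call B
Tbsa = {"ab1"}                 # tests in B's suite sensitized through A

integration_suite = Tab | Tbsa     # "+" in the formula means union
assert integration_suite == {"a2", "a3", "ab1"}
```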

3.4. Testing strategies - WHITE BOX TESTING

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases.
Test cases can be derived that

1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds and
4. Exercise internal data structures to ensure their validity.

The Nature of Software Defects


Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious
assumptions about control flow and data lead to design errors that can only be detected by path testing.

Basis Path Testing


This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for
defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the
program at least once during testing.

Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow
graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must
terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area
bounded by edges and nodes. Each node that contains a condition is called a predicate node.

Deriving Test Cases


1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
 Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the
code and adding one.
3. Determine a basis set of linearly independent paths.
 Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
 Each test case is executed and compared to the expected results.
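Steps 2–4 can be sketched on a tiny function. The function below is illustrative, and V(G) is taken as the number of predicate nodes plus one:

```python
# Basis path testing sketch for a small function under test.

def grade(score):
    if score < 0:            # predicate node 1
        return "invalid"
    if score >= 50:          # predicate node 2
        return "pass"
    return "fail"

# Step 2: V(G) = number of predicate nodes + 1 = 2 + 1 = 3.
# Step 3: a basis set therefore needs 3 linearly independent paths.
# Step 4: one test case forces execution of each path; expected results
# come from the specification.
basis_paths = [(-1, "invalid"), (70, "pass"), (30, "fail")]
for inp, expected in basis_paths:
    assert grade(inp) == expected
```

Executing these three cases guarantees every statement in `grade` runs at least once, which is exactly the property the basis set promises.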

LOOP TESTING
This white box testing technique focuses exclusively on the validity of loop constructs.
Four different classes of loops can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops and
4. Unstructured loops

SIMPLE LOOPS
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1. Skip the loop entirely,
2. Only pass once through the loop,
3. m passes through the loop where m < n,
4. n-1,n,n+1 passes through the loop.
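A sketch of this test schedule for a hypothetical loop bounded at n = 5 passes:

```python
# Simple-loop testing: exercise 0, 1, m (< n), n-1, n, and n+1 passes.
# The function under test is hypothetical: it sums at most `limit` items.
N = 5

def sum_first(items, limit=N):
    total = 0
    for i, x in enumerate(items):
        if i == limit:           # the loop never makes more than `limit` passes
            break
        total += x
    return total

# The loop-test schedule from the guidelines above:
for passes in [0, 1, 3, N - 1, N, N + 1]:
    items = [1] * passes
    assert sum_first(items) == min(passes, N)
```

The `N + 1` case is the interesting one: it checks that the loop really does stop at its maximum rather than overrunning.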

NESTED LOOPS
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically
increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-
range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops
to typical values.
4. Continue until all loops have been tested.

CONCATENATED LOOPS
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g.,
the loop counter for one loop is used by the other), then the nested approach can be used.

UNSTRUCTURED LOOPS
This type of loop should be redesigned not tested.

OTHER WHITE BOX TECHNIQUES

Other white box testing techniques include:


1. Condition testing - Exercises the logical conditions in a program
2. Data flow testing - Selects test paths according to the locations of definitions and uses of variables in the program

3.5 SYSTEM TESTING

System testing ensures that the software still works when it is put into different environments (e.g., different operating
systems). It is done with the full system implementation and environment, and falls under the class of black box testing.
System testing is defined as testing the behavior of a system/software against the software requirements specification.

It tests the fully integrated application, including external peripherals, in order to check how the components interact with
one another.

System testing enables us to verify and validate both the business requirements and the application architecture. The
application is tested thoroughly to verify that it meets the technical and functional specifications.

Types of system testing


There are more than 50 types of system testing. The most commonly used types are
1. Functional Testing - Validates functional requirements
2. Performance Testing - Validates non-functional requirements
3. Acceptance Testing - Validates the client's expectations
Functional Testing
Goal: Test functionality of system
• Test cases are designed from the requirements analysis document (better: user manual) and centered around
requirements and key functions (use cases)
• The system is treated as black box
• Unit test cases can be reused, but new test cases have to be developed as well.
Performance Testing
Goal: Try to violate non-functional requirements.
• Test how the system behaves when overloaded. Can bottlenecks be identified? (First candidates for redesign in
the next iteration.)
• Try unusual orders of execution, e.g., call a receive() before a send().
• Check the system’s response to large volumes of data. If the system is supposed to handle 1000 items, try it
with 1001 items.
• What is the amount of time spent in different use cases? Are typical cases executed in a timely fashion?
Types of Performance Testing
Stress Testing - Stress limits of system
Volume testing - Test what happens if large amounts of data are handled
Configuration testing - Test the various software and hardware configurations
Compatibility test - Test backward compatibility with existing systems
Timing testing - Evaluate response times and time to perform a function
Security testing - Try to violate security requirements
Environmental test - Test tolerances for heat, humidity, motion
Quality testing - Test reliability, maintainability & availability
Recovery testing - Test system’s response to presence of errors or loss of data
Human factors testing - Test with end users.
Acceptance Testing
Goal: Demonstrate system is ready for operational use.
Choice of tests is made by client.
Many tests can be taken from integration testing.
Acceptance test is performed by the client, not by the developer.
Alpha test:
Client uses the software at the developer’s environment.
Software used in a controlled setting, with the developer always ready to fix bugs.
Beta test:
Conducted at client’s environment (developer is not present)
Software gets a realistic workout in the target environment

System Testing vs. Integration Testing

1. System test cases are derived from the requirement specification; integration test cases are derived from the interface
specification.

2. System testing has no visibility of the code; integration testing has visibility of the integration structure.

3. System testing is high-level testing; integration testing is low-level testing.

4. In system testing, the complete system is configured in a controlled environment; in integration testing, test cases are
developed with the express purpose of exercising the interface between components.

3.6 Object-Oriented Software Testing

Research confirms that testing methods proposed for the procedural approach are not adequate for the OO approach
OO software testing poses additional problems due to the distinguishing characteristics of OO
Testing time for OO software is found to be greater than for procedural software
Use of “stubs” in software testing

a. If two units are supposed to interact with each other, and only one unit has been developed, then a
“template” or “stub” of the other unit can be developed just to test the interactions
b. Advantage: the developer of the first unit need not wait until the second unit is developed
c. Disadvantage: testing must be repeated after the actual second unit is developed

Class (Unit) Testing

Smallest testable unit is the encapsulated class


Test each operation as part of a class hierarchy because its class hierarchy defines its context of use
Approach:
Test each method (and constructor) within a class
Test the state behavior (attributes) of the class between methods
How is class testing different from conventional testing?
Conventional testing focuses on input-process-output, whereas class testing focuses on each method, then designing
sequences of methods to exercise states of a class
But white-box testing can still be applied

Challenges of Class Testing

Encapsulation:
Difficult to obtain a snapshot of a class without building extra methods that display the class's state
Inheritance and polymorphism:
Each new context of use (subclass) requires re-testing because a method may be implemented differently
(polymorphism).
Other unaltered methods within the subclass may use the redefined method and need to be tested
White box tests:
Basis path, condition, data flow and loop tests can all apply to individual methods, but don’t test interactions
between methods

Random Class Testing


1. Identify methods applicable to a class
2. Define constraints on their use – e.g. the class must always be initialized first
3. Identify a minimum test sequence – an operation sequence that defines the minimum life history of the class
4. Generate a variety of random (but valid) test sequences – this exercises more complex class instance life histories
Example:
1. An account class in a banking application has open, setup, deposit, withdraw, balance, summarize and
close methods
2. The account must be opened first and closed on completion
3. Open – setup – deposit – withdraw – close
4. Open – setup – deposit –* [deposit | withdraw | balance | summarize] – withdraw – close. Generate random
test sequences using this template
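The template in step 4 can be exercised mechanically. The Account class below is a hypothetical stand-in, and the sequence generator is seeded so runs are repeatable:

```python
import random

# Random class testing sketch for the account example above.
class Account:
    def __init__(self):
        self._balance = 0
        self._open = False
    def open(self): self._open = True
    def setup(self): pass
    def deposit(self): self._balance += 10
    def withdraw(self): self._balance -= 1
    def balance(self): return self._balance
    def summarize(self): return f"balance={self._balance}"
    def close(self): self._open = False

def random_sequence(rng):
    """open - setup - deposit - [deposit|withdraw|balance|summarize]* - withdraw - close"""
    middle = [rng.choice(["deposit", "withdraw", "balance", "summarize"])
              for _ in range(rng.randint(0, 5))]
    return ["open", "setup", "deposit"] + middle + ["withdraw", "close"]

rng = random.Random(42)                  # seeded so runs are repeatable
for _ in range(20):
    acct, seq = Account(), random_sequence(rng)
    for op in seq:
        getattr(acct, op)()              # execute one random life history
    assert acct._open is False           # every valid sequence ends closed
```

Each generated sequence is a different life history of the class, which is exactly what the minimum test sequence alone cannot exercise.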

Integration Testing
OO does not have a hierarchical control structure so conventional top-down and bottom-up integration tests have little
meaning
Integration applied three different incremental strategies:
Thread-based testing: integrates classes required to respond to one input or event
Use-based testing: integrates classes required by one use case
Cluster testing: integrates classes required to demonstrate one collaboration

What integration testing strategies will you use?


Random Integration Testing / Multiple Class Random Testing

1. For each client class, use the list of class methods to generate a series of random test sequences.
Methods will send messages to other server classes.
2. For each message that is generated, determine the collaborating class and the corresponding method in the server
object.
3. For each method in the server object (that has been invoked by messages sent from the client object), determine
the messages that it transmits
4. For each of the messages, determine the next level of methods that are invoked and incorporate these into the test
sequence

Validation Testing
Are we building the right product?
Validation succeeds when software functions in a manner that can be reasonably expected by the customer.
Focus on user-visible actions and user-recognizable outputs
Details of class connections disappear at this level
Apply:
Use-case scenarios from the software requirements spec
Black-box testing to create a deficiency list
Acceptance tests through alpha (at developer’s site) and beta (at customer’s site) testing with actual customers
How will you validate your term product?
System Testing

Software may be part of a larger system. This often leads to “finger pointing” by the other system development teams.
Finger-pointing defence:
1. Design error-handling paths that test external information
2. Conduct a series of tests that simulate bad data
3. Record the results of tests to use as evidence
Types of System Testing:
 Recovery testing: how well and quickly does the system recover from faults
 Security testing: verify that protection mechanisms built into the system will protect from unauthorized
access (hackers, disgruntled employees, fraudsters)
 Stress testing: place abnormal load on the system
 Performance testing: investigate the run-time performance within the context of an integrated system

3.7 TESTING TOOLS

Given below is a list of some of the software testing tools available.

Test coverage

Test coverage measures the degree to which the specification or code of a software program has been exercised by tests.
Code coverage measures the degree to which the source code of a program has been tested.
Code coverage criteria include:
– equivalence testing
– boundary testing
– control-flow testing
– state-based testing

3.8 State Based Testing

A program moves from state to state. In a given state, some inputs are valid, and others are ignored or rejected. In response to
a valid input, the program under test does something that it can do and does not attempt something that it cannot do. In state-
based testing, you walk the program through a large set of state transitions (state changes) and check the results carefully,
every time.

A state machine is …
A system whose output is determined by both its current state and past inputs; previous inputs are represented in the current
state.
Identical inputs are not always accepted, and when accepted, they may produce different outputs.

Building blocks of a state machine

State - An abstraction that summarizes past inputs and determines behaviour on subsequent inputs
Transition - An allowable two-state sequence, caused by an event
Event - An input or a time interval
Action - The output that follows an event
Guard - A predicate expression associated with an event, stating a Boolean restriction for a transition to fire

There are several types of state machines:

Finite automaton (no guards or actions)


Mealy machine (no actions associated with states)
Moore machine (no actions associated with transitions)
Statechart (hierarchical states: common superstates)

A state transition diagram is the graphic representation of a state machine.

A state transition table is the tabular representation of a state machine.

You need to identify all possible valid and invalid transitions in the system and test them. (State machine behavior)

1. Begin in the initial state
2. Wait for an event
3. When an event comes in:
a. If it is not accepted in the current state, ignore it
b. If it is accepted, a transition fires, output is produced (if any), and the resultant state of the transition
becomes the current state
4. Repeat from step 2 unless the current state is a final state.

Two states are equivalent

 If all possible event sequences applied to these states result in identical behaviour
 By looking at the output, one cannot determine in which of the two states the machine was started
 The notion extends to any pair of states

A minimal machine has no equivalent states.

A model with equivalent states is redundant, which means the model is either

i. probably incorrect, or
ii. probably incomplete

State Sf is reachable from state St

if there is a legal event sequence that moves the machine from St to Sf.


Stating simply that a state is reachable implies reachable from the initial state.

Problems in Reachability

Dead state - a state that cannot be left, from which a final state cannot be reached
Dead loop - a cycle of states that cannot be left, from which a final state cannot be reached
Magic state - a state that cannot be entered (it has no input transitions) yet can go to other states; in effect an
extra initial state

Guarded transitions

The stack example state machine is ambiguous.


There are two possible reactions to push and pop in the Loaded state.
Guards can be added to transitions.
A guard is a predicate associated with the event.
A guarded transition cannot fire unless the guard predicate evaluates to true.

General properties of state machines

typically incomplete

– just the most important states, events and transitions are given
– usually just legal events are associated with transitions; illegal events (such as p1_Start from state Player
1 served) are left undefined

may be deterministic or nondeterministic

– deterministic: any state/event/guard triple fires a unique transition


– nondeterministic: the same state/event/guard triple may fire several transitions, and the firing transition may differ
in different cases

may have several final states (or none: infinite computations)

may contain empty events (default transitions)

may be concurrent: the machine (statechart) can be in several different states at the same time

The role of state machines in software testing

State machine is a framework for model testing, where an executable model (state machine) is executed or simulated with
event sequences as test cases, before starting the actual implementation phase.

It supports testing the system implementation (the program) against the system specification (the state machine).

To support automatic generation of test cases for the implementation, there must be an explicit mapping between the
elements of the state machine (states, events, actions, transitions, guards) and the elements of the implementation (e.g.,
classes, objects, attributes, messages, methods, expressions).

The current state of the state machine underlying the implementation must be checkable, either by the runtime environment
or by the implementation itself (built-in tests with, e.g., assertions and class invariants)

Validation of state machines


Checklist for analyzing that the state machine is complete and consistent enough for model or implementation testing:
 One state is designated as the initial state, with outgoing transitions
 At least one state is designated as a final state, with only incoming transitions; if not, the conditions for
termination shall be made explicit.
 There are no equivalent states (states for which all possible outbound event sequences result in identical action
sequences)
 Every state is reachable from the initial state
 At least one final state is reachable from all the other states
 Every defined event and action appears in at least one transition (or state)
 Except for the initial and final states, every state has at least one incoming transition and at least one outgoing
transition
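Two of the reachability items on this checklist can be checked mechanically with a breadth-first search over a transition table. The machine below is a hypothetical Python rendering of the stack machine used later in this section (assuming a maximum capacity of 2, so that one add from Holding reaches Full):

```python
from collections import deque

# Hypothetical transition table: state -> {event: next state}.
transitions = {
    "Initial": {"create": "Empty"},
    "Empty":   {"add": "Holding", "destroy": "Final"},
    "Holding": {"add": "Full", "delete": "Empty"},
    "Full":    {"delete": "Holding"},
    "Final":   {},
}

def reachable(start):
    """All states reachable from `start` by a legal event sequence (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in transitions[queue.popleft()].values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Checklist: every state is reachable from the initial state ...
assert reachable("Initial") == set(transitions)
# ... and a final state is reachable from all the other states.
assert all("Final" in reachable(s) for s in transitions)
```

A dead state or dead loop would fail the second assertion; a magic state would fail the first.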

Control faults
When testing an implementation against a state machine, one shall study the following typical control faults (incorrect
sequences of events, transitions, or actions):

 missing transition (nothing happens with an event)


 incorrect transition (the resultant state is incorrect)
 missing or incorrect event
 missing or incorrect action (wrong things happen as a result of a transition)
 extra, missing or corrupt state
 sneak path (an event is accepted when it should not be)
 trap door (the implementation accepts undefined events)

Test design strategies for state-based testing

Test cases for state machines and their implementations can be designed using the same notion of coverage as in white-box
testing:

test case = sequence of input events


all-events coverage: each event of the state machine is included in the test suite (is part of at least one test case)
all-states coverage: each state of the state machine is exercised at least once during testing, by some test case in the test suite
all-actions coverage: each action is executed at least once
all-transitions: each transition is exercised at least once
– implies (subsumes) all-events coverage, all-states coverage, and all-actions coverage
– ”minimum acceptable strategy for responsible testing of a state machine”
all n-transition sequences: every transition sequence generated by n events is exercised at least once
– all transitions = all 1-transition sequences
– all n-transition sequences implies (subsumes) all (n-1)-transition sequences
all round-trip paths: every sequence of transitions beginning and ending in the same state is exercised at least once
exhaustive: every path over the state machine is exercised at least once
– usually totally impossible or at least unpractical

Example: Consider a stack.

Components : Stack
States : Empty, Full, Holding
Actions : Push, Pop
States in stacks

– Initial: before creation


– Empty: number of elements = 0
– Holding: number of elements >0, but less than the max capacity
– Full: number of elements = max
– Final: after destruction

Examples of Transitions in stacks

• Initial -> Empty: action = “create” e.g. “s = new Stack()” in Java


• Empty -> Holding: action = “add”
• Empty -> Full: action = “add”, if max_capacity = 1
• Empty -> Final: action = “destroy” e.g. destructor call in C++, garbage collection in Java
• Holding -> Empty: action = “delete”

FSM-based Testing

Each valid transition should be tested. Verify the resulting state using a state inspector that has access to the internals of the
class.
Each invalid transition should be tested to ensure that it is rejected and the state does not change,
e.g., Full -> Full is not allowed: calling add on a full stack should fail.
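A sketch of these checks for the stack. The Stack class, its capacity of 2, and the use of exceptions to signal rejection are all assumptions of this illustration:

```python
# FSM-based testing sketch: drive each valid transition and check the
# resulting state; attempt an invalid transition and check it is rejected.
MAX = 2

class Stack:
    def __init__(self):
        self.items = []
    def add(self, x):
        if len(self.items) == MAX:
            raise OverflowError("stack is Full")   # invalid transition rejected
        self.items.append(x)
    def delete(self):
        if not self.items:
            raise IndexError("stack is Empty")
        return self.items.pop()
    def state(self):                               # the "state inspector"
        if not self.items:
            return "Empty"
        return "Full" if len(self.items) == MAX else "Holding"

s = Stack()
assert s.state() == "Empty"
s.add(1); assert s.state() == "Holding"            # Empty -> Holding
s.add(2); assert s.state() == "Full"               # Holding -> Full
try:
    s.add(3)                                       # invalid: Full -> Full
    raise AssertionError("add on a Full stack must be rejected")
except OverflowError:
    assert s.state() == "Full"                     # state unchanged
s.delete(); assert s.state() == "Holding"          # Full -> Holding
s.delete(); assert s.state() == "Empty"            # Holding -> Empty
```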

This state-based testing approach has become popular with object-oriented systems.

The state of an object is defined as a constraint on the values of object’s attributes. Because the methods use the attributes in
computing the object’s behavior, the behavior depends on the object state

Testing of Inheritance

Inherited methods should be retested in the context of a subclass


• Example 1: if we change some method m() in a superclass, we need to retest m() inside all subclasses that inherit it
• Example 2: if we add or change a subclass, we need to retest all methods inherited from a superclass in the context
of the new/changed subclass

3.9 TEST CASE MANAGEMENT

One of the most important elements in today’s product development life cycle is a well organized test phase. Anyone who
has used computers for any length of time can testify to the amount of poorly tested software generally available today.
Anyone who has worked in the software industry, particularly in Quality Assurance, verification or testing, knows only too
well the pitfalls that accompany a poorly managed test phase.

As examples, two test case management tools are discussed here:

 T-Plan Professional
 QMTest

T-Plan Professional

Software Description

T-Plan has supplied solutions for Test Process Management since 1990.

The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business
risk and regulate the process.

The T-Plan Product Suite allows you to manage every aspect of the Testing Process, providing a consistent and structured
approach to testing at the project and corporate level. By providing order, structure and visibility throughout the development
lifecycle from planning to execution, acceleration of the “time to market” for business solutions can be delivered.

In today’s competitive market, as the scope and impact of projects increase, so do the associated risks of failure and the
importance of an auditable test process becomes all the more paramount. Using the T-Plan Suite gives management control
over the quality of the software by taking into account business risks and priorities. Errors can be detected early thus avoiding
costly waste and delay. In addition, the impact of project change can now be fully explored, defined, managed and reported
throughout the system development lifecycle.

Platforms
All Windows based platforms
QMTest

Kind of Tool
A cost-effective general purpose testing solution (freeware)

Software Description

CodeSourcery’s QMTest provides a cost-effective general-purpose testing solution that allows an organization to implement
a robust, easy-to-use testing program tailored to its needs. QMTest works with most varieties of UNIX, including
GNU/Linux, and with Microsoft Windows.

QMTest’s extensible architecture allows it to handle a wide range of application domains: everything from compilers to
graphical user interfaces to web-based applications.

Platforms
GNU/Linux, Windows NT/2000/XP, IRIX 6.5, Most UNIX-like operating systems

BASIC MEASURES
A large variety of coverage measures exist. Here is a description of some fundamental measures, with their strengths and
weaknesses.
Statement Coverage

This measure reports whether each executable statement is encountered. Also known as: line coverage, segment coverage and
basic block coverage. Basic block coverage is the same as statement coverage except the unit of code measured is each
sequence of non-branching statements.

The chief advantage of this measure is that it can be applied directly to object code and does not require processing source
code. Performance profilers commonly implement this measure.

The chief disadvantage of statement coverage is that it is insensitive to some control structures. Since if-statements are very
common, this insensitivity is significant. Statement coverage does not report whether loops reach their termination
condition, only whether the loop body was executed.

Statement coverage is completely insensitive to the logical operators.

Statement coverage cannot distinguish consecutive switch labels.

Test cases generally correlate more to decisions than to statements. You probably would not need separate test cases for a
sequence of 10 non-branching statements; one test case would suffice. Basic block coverage eliminates this problem.

One argument in favour of statement coverage over other measures is that faults are evenly distributed through code;
therefore the percentage of executable statements covered reflects the percentage of faults discovered.

Decision Coverage
This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement)
evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether
it contains logical-and or logical-or operators.

Condition Coverage
Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or
where they occur. Condition coverage measures the sub-expressions independently of each other. This measure is similar to
decision coverage but has better sensitivity to the control flow.
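A small illustration of the difference (the function and test suites below are assumed examples). For the predicate `a and b`, two test cases can satisfy condition coverage without satisfying decision coverage, since neither case makes the whole predicate true:

```python
# Decision coverage vs. condition coverage for the predicate `a and b`.

def f(a, b):
    return "yes" if a and b else "no"

# Decision coverage: the whole predicate must evaluate both true and false.
decision_suite = [(True, True), (False, True)]      # two cases suffice

# Condition coverage: each sub-expression (a, b) must be both true and
# false independently. This suite achieves that, yet the decision
# `a and b` is never true, so decision coverage is NOT achieved.
condition_suite = [(True, False), (False, True)]

assert [f(*c) for c in decision_suite] == ["yes", "no"]
assert [f(*c) for c in condition_suite] == ["no", "no"]
```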

Multiple Condition Coverage


Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. As with
condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.

A disadvantage of this measure is that it can be tedious to determine the minimum set of test cases required, especially for
very complex Boolean expressions. An additional disadvantage is that the number of test cases required can vary substantially
among conditions that have similar complexity.
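The combinatorial cost can be sketched as follows (the predicate is hypothetical; `itertools.product` simply enumerates the truth-value combinations):

```python
from itertools import product

def decision(a, b):
    # Hypothetical predicate with two Boolean sub-conditions.
    return a and b

# Multiple condition coverage needs every combination of the
# sub-expressions: 2**2 = 4 test cases for two conditions.
combinations = list(product([True, False], repeat=2))
assert len(combinations) == 4

outcomes = {(a, b): decision(a, b) for a, b in combinations}
assert outcomes[(True, True)] is True
assert outcomes[(True, False)] is False

# For n conditions the required combinations grow as 2**n, which is
# why the minimum test set can be tedious to determine.
assert 2 ** 10 == 1024
```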

Condition /Decision Coverage


Condition/decision coverage is a hybrid measure composed of the union of condition coverage and decision coverage.

It has the advantage of simplicity without the shortcomings of its component measures.

Path Coverage
This measure reports whether each of the possible paths in each function has been followed. A path is a unique sequence of
branches from the function entry to the exit. Path coverage is also known as predicate coverage.

Path coverage has the advantage of requiring very thorough testing, but it has two severe disadvantages. The first is
that the number of paths is exponential in the number of branches. For example, a function containing 10 if-statements has
2^10 = 1024 paths to test. Adding just one more if-statement doubles the count to 2048. The second disadvantage is that many paths
are impossible to exercise due to relationships of data.
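The exponential growth can be checked directly; the numbers below match the 10-if-statement example given in the text:

```python
def path_count(num_ifs):
    # Each sequential if-statement doubles the number of paths
    # through the function, so the count is 2 ** num_ifs.
    return 2 ** num_ifs

assert path_count(10) == 1024  # the example from the text
assert path_count(11) == 2048  # one more if-statement doubles it
assert path_count(20) == 1048576  # over a million paths for 20 ifs
```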

Function Coverage
This measure reports whether you invoked each function or procedure. It is useful during preliminary testing to assure at least
some coverage in all areas of the software. Broad, shallow testing finds gross deficiencies in a test suite quickly.

Call Coverage
This measure reports whether you executed each function call. The hypothesis is that faults commonly occur in interfaces
between modules.

Data flow Coverage


This variation of path coverage considers only the sub-paths from variable assignments to subsequent references of those
variables. The advantage of this measure is that the paths reported have direct relevance to the way the program handles data. One
disadvantage is that this measure does not include decision coverage; another is its complexity.

Object Code Branch Coverage


This measure gives results that depend on the compiler rather than on the program structure since compiler code generation
and optimization techniques can create object code that bears little similarity to the original source code structure.

Loop Coverage
This measure reports whether you executed each loop body zero times, exactly once, and more than once. The valuable
aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by
other measures.
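A sketch of the three loop profiles (the summing function is hypothetical):

```python
def total(values):
    # Hypothetical loop whose body may run zero, one, or many times.
    s = 0
    for v in values:
        s += v
    return s

# Loop coverage asks for all three execution profiles:
assert total([]) == 0         # body executed zero times
assert total([5]) == 5        # exactly once
assert total([1, 2, 3]) == 6  # more than once
# The zero-times case is the one that statement and decision
# coverage may never demand, yet it is where faults such as a badly
# initialized accumulator tend to hide.
```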

Race Coverage
This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize
access to resources. It is useful for testing multithreaded programs as in an operating system.

Relational Operator Coverage


This measure reports whether boundary situations occur with relational operators (<, <=, >, >=).
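A sketch with a hypothetical eligibility check shows the boundary cases such a measure asks for:

```python
def is_eligible(age):
    # Hypothetical check; suppose the specification says 18 or older.
    return age >= 18

# Relational operator coverage asks for the boundary situation,
# where >= and > would differ: exactly age == 18.
assert is_eligible(18) is True   # boundary: detects a >= vs > mix-up
assert is_eligible(17) is False  # just below the boundary
assert is_eligible(19) is True   # just above the boundary
```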

Weak Mutation Coverage


This measure is similar to relational operator coverage but much more general. It reports whether test cases occur which
would expose the use of wrong operators and also wrong operands. It works by reporting coverage of conditions derived by
substituting (mutating) the program’s expressions with alternate operators, such as “-” substituted for “+” and with alternate
variables substituted.
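A hand-built sketch of one such mutant (the functions are illustrative; real mutation tools generate the substitutions automatically):

```python
def area(width, height):
    return width * height  # original expression

def area_mutant(width, height):
    return width + height  # "+" substituted for "*" (a mutant)

# A test case contributes to weak mutation coverage for this mutant
# only if it distinguishes the original from the mutant:
assert area(2, 2) == area_mutant(2, 2)  # 4 == 4: does NOT expose it
assert area(2, 3) != area_mutant(2, 3)  # 6 != 5: exposes the mutant
```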

Table Coverage
This measure indicates whether each entry in a particular array has been referenced. This is useful for programs that are
controlled by a finite state machine.
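A sketch with a hypothetical turnstile state machine shows what it means for every table entry to be referenced:

```python
# Hypothetical finite state machine driven by a transition table.
# Table coverage asks whether every entry has been referenced.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def step(state, event, used):
    used.add((state, event))  # record which entries were referenced
    return TRANSITIONS[(state, event)]

used = set()
state = "locked"
for event in ["coin", "push", "push", "coin", "coin"]:
    state = step(state, event, used)

# This event sequence happens to reference all 4 table entries.
assert used == set(TRANSITIONS)
```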

Comparing measures
You can compare relative strengths when a stronger measure includes a weaker measure.
 Decision coverage includes statement coverage since exercising every branch must lead to exercising statements.
 Condition/Decision coverage includes decision coverage and condition coverage (by definition).
 Path coverage includes decision coverage.
 Predicate coverage includes path coverage and multiple condition coverage, as well as most other measures.

Coverage Goal for Release


Each project must choose a minimum percent coverage for release criteria based on available testing resources and the
importance of preventing post-release failures. Clearly, safety-critical software should have a high coverage goal. You might
set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-
level callers.

Using statement coverage, decision coverage, or condition/decision coverage, you generally want to attain 80%-90%
coverage or more before releasing. Some people feel that setting any goal less than 100% coverage does not assure quality.
However, you expend a lot of effort as coverage approaches 100%. The same effort might find more faults in a different
testing activity, such as formal technical review. Avoid setting a goal lower than 80%.

Intermediate Coverage Goals


Choosing good intermediate coverage goals can greatly increase testing productivity. The highest level of testing productivity
occurs when you find the most failures with the least effort. Effort is measured by the time required to create test cases, add
them to your test suite, and run them. It follows that you should use a coverage analysis strategy that increases coverage as fast
as possible. This gives you the greatest probability of finding failures sooner rather than later.

3.10 SOFTWARE MAINTENANCE ORGANIZATION

Software maintenance is defined in the IEEE Standard for Software Maintenance, IEEE 1219, as the modification of a
software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a
modified environment. The standard also addresses maintenance activities prior to delivery of the software product, but only
in an information appendix of the standard.

Nature of Maintenance

Software maintenance sustains the software product throughout its operational life cycle. Modification requests are logged
and tracked, the impact of proposed changes is determined, code and other software artifacts are modified, testing is
conducted, and a new version of the software product is released. Also, training and daily support are provided to users.

Maintainers can learn from the developers' knowledge of the software. Contact with the developers and early involvement by
the maintainer help reduce the maintenance effort. In some instances, the original software engineers cannot be reached or have moved
on to other tasks, which creates an additional challenge for maintainers. Maintenance must take the products of
development (code and documentation, for example), support them immediately, and evolve/maintain them progressively
over the software life cycle.

Need for Maintenance

Maintenance is needed to ensure that the software continues to satisfy user requirements. Maintenance is applicable to
software developed using any software life cycle model (for example, spiral). The system changes due to corrective and non-
corrective software actions. Maintenance must be performed in order to:

 Correct faults
 Improve the design
 Implement enhancements
 Interface with other systems
 Adapt programs so that different hardware, software, system features, and telecommunications facilities can be used
 Migrate legacy software
 Retire software

The maintainer's activities comprise four key characteristics:

 Maintaining control over the software's day-to-day functions


 Maintaining control over software modification
 Perfecting existing functions
 Preventing software performance from degrading to unacceptable levels

Majority of Maintenance Costs

Some of the technical and non-technical factors affecting software maintenance costs are as follows:

 Application type
 Software novelty
 Software maintenance staff availability
 Software life span
 Hardware characteristics

Categories of Maintenance

Maintenance was initially defined as comprising three categories: corrective, adaptive, and perfective. This definition was later updated in the Standard for
Software Engineering-Software Maintenance, ISO/IEC 14764, to include four categories, as follows:

 Corrective maintenance: Reactive modification of a software product performed after delivery to correct discovered
problems
 Adaptive maintenance: Modification of a software product performed after delivery to keep a software product
usable in a changed or changing environment
 Perfective maintenance: Modification of a software product after delivery to improve performance or maintainability
 Preventive maintenance: Modification of a software product after delivery to detect and correct latent faults in the
software product before they become effective faults

Key Issues in Software Maintenance

A number of key issues must be dealt with to ensure the effective maintenance of software. It is important to understand that
software maintenance provides unique technical and management challenges for software engineers. Trying to find a fault in
software containing 500K lines of code that the software engineer did not develop is a good example. Similarly, competing
with software developers for resources is a constant battle. Planning for a future release, while coding the next release and
sending out emergency patches for the current release, also creates a challenge. The following section presents some of the
technical and management issues related to software maintenance. They have been grouped under the following topic
headings:

 Technical issues
 Management issues
 Cost estimation and
 Measures

Technical Issues

Limited understanding - Limited understanding refers to how quickly a software engineer can understand where to make a
change or a correction in software which this individual did not develop.

Testing - The cost of repeating full testing on a major piece of software can be significant in terms of time and money.

Impact analysis - Impact analysis describes how to conduct, cost effectively, a complete analysis of the impact of a change in
existing software.

Maintainability - Maintainability sub-characteristics must be specified, reviewed, and controlled during the software
development activities in order to reduce maintenance costs.

Management Issues

Alignment with organizational objectives - Organizational objectives describe how to demonstrate the return on investment
of software maintenance activities.

Staffing - Staffing refers to how to attract and keep software maintenance staff. Maintenance is often not viewed as glamorous
work; software maintenance personnel are frequently viewed as "second-class citizens", and morale therefore suffers.

Process - Software process is a set of activities, methods, practices, and transformations which people use to develop and
maintain software and the associated products.

Organizational aspects of maintenance - Organizational aspects describe how to identify which organization and/or function
will be responsible for the maintenance of software.

Outsourcing - Large corporations are outsourcing entire portfolios of software systems, including software maintenance.
More often, the outsourcing option is selected for less mission-critical software, as companies are unwilling to lose control of
the software used in their core business.

Cost Estimation

Software engineers must understand the different categories of software maintenance, discussed above, in order to address
the question of estimating the cost of software maintenance. For planning purposes, estimating costs is an important aspect of
software maintenance.

Maintenance cost estimates are affected by many technical and non-technical factors. ISO/IEC14764 states that "the two
most popular approaches to estimating resources for software maintenance are the use of parametric models and the use of
experience". Most often, a combination of these is used.

1. Parametric models

Some work has been undertaken in applying parametric cost modeling to software maintenance. [Boe81, Ben00] Of
significance is that data from past projects are needed in order to use the models. Jones [Jon98] discusses all aspects of
estimating costs, including function points (IEEE14143.1-00), and provides a detailed chapter on maintenance estimation.
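As an illustrative sketch only (the formula below is the classic annual-change-traffic style of estimate associated with Boehm's work cited above, not a model prescribed by ISO/IEC 14764, and the numbers are hypothetical), a parametric maintenance estimate might look like this:

```python
def annual_maintenance_effort(act, development_effort_pm):
    # Sketch of a parametric model: Annual Change Traffic (ACT) is
    # the fraction of the source code modified or added per year;
    # development_effort_pm is the effort the original development
    # took, in person-months.
    return act * development_effort_pm

# A system that took 100 person-months to develop, with a quarter
# of its code changing each year:
assert annual_maintenance_effort(0.25, 100) == 25.0
```

Data from past projects would be needed to calibrate the ACT parameter, which is exactly why the models require historical data.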

2. Experience

Experience, in the form of expert judgment (using the Delphi technique, for example), analogies, and a work breakdown
structure, provides several approaches which can be used to augment data from parametric models. Clearly, the best approach to
maintenance estimation is to combine empirical data and experience. These data should be provided as the result of a
measurement program.

Software Maintenance Measurement

The Practical Software and Systems Measurement project describes an issue-driven measurement process that is used by
many organizations and is quite practical.

There are software measures common to all endeavors; the Software Engineering Institute has identified the following categories: size, effort, schedule, and quality. These measures constitute a good starting point for the maintainer.

Specific Measures

The maintainer must determine which measures are appropriate for the organization in question. The software maintenance
literature suggests measures which are more specific to software maintenance measurement programs, including a number of measures for each of the four
sub-characteristics of maintainability:

 Analyzability: Measures of the maintainer's effort or resources expended in trying to diagnose deficiencies or causes
of failure, or in identifying parts to be modified
 Changeability: Measures of the maintainer's effort associated with implementing a specified modification
 Stability: Measures of the unexpected behavior of software, including that encountered during testing
 Testability: Measures of the maintainer's and users' effort in trying to test the modified software

Maintenance Process

The Maintenance Process subarea provides references and standards used to implement the software maintenance process.
The Maintenance Activities topic differentiates maintenance from development and shows its relationship to other software
engineering activities.

The need for a software engineering process is well documented. CMMI models apply to software maintenance processes
and are similar to the developers' processes. Software maintenance capability maturity models address the unique
processes of software maintenance.

The maintenance process model described in the Standard for Software Maintenance (IEEE1219) starts with the software
maintenance effort during the post-delivery stage and discusses items such as planning for maintenance. That process is
depicted in Figure 2.

Figure 2 The IEEE1219-98 Maintenance Process Activities

The maintenance process activities developed by ISO/IEC are shown in Figure 3.

Figure 3 ISO/IEC 14764-00 Software Maintenance Process

Each of the ISO/IEC 14764 primary software maintenance activities is further broken down into tasks, as follows.

 Process Implementation
 Problem and Modification Analysis
 Modification Implementation
 Maintenance Review/Acceptance
 Migration
 Software Retirement

Maintenance Activities

As already noted, many maintenance activities are similar to those of software development. Maintainers perform analysis,
design, coding, testing, and documentation. They must track requirements in their activities just as is done in development,
and update documentation as baselines change. ISO/IEC14764 recommends that, when a maintainer refers to a similar
development process, he must adapt it to meet his specific needs [ISO14764-99:s8.3.2.1, 2]. However, for software
maintenance, some activities involve processes unique to software maintenance.

Unique activities

There are a number of processes, activities, and practices that are unique to software maintenance, for example:

 Transition: a controlled and coordinated sequence of activities during which software is transferred progressively
from the developer to the maintainer
 Modification Request Acceptance/Rejection: modification request work over a certain size/effort/complexity may be
rejected by maintainers and rerouted to a developer
 Modification Request and Problem Report Help Desk: an end-user support function that triggers the assessment,
prioritization, and costing of modification request
 Impact Analysis
 Software Support: help and advice to users concerning a request for information (for example, business rules,
validation, data meaning and ad-hoc requests/reports)
 Service Level Agreements (SLAs) and specialized (domain-specific) maintenance contracts which are the
responsibility of the maintainers

Supporting activities

Maintainers may also perform supporting activities, such as software maintenance planning, software configuration
management, verification and validation, software quality assurance, reviews, audits, and user training.

Another supporting activity, maintainer training, is also needed.

Maintenance planning activity - An important activity for software maintenance is planning, and maintainers must address
the issues associated with a number of planning perspectives:

 Business planning (organizational level)


 Maintenance planning (transition level)
 Release/version planning (software level)
 Individual software change request planning (request level)

At the individual request level, planning is carried out during the impact analysis. The release/version planning activity
requires that the maintainer:

 Collect the dates of availability of individual requests


 Agree with users on the content of subsequent releases/versions
 Identify potential conflicts and develop alternatives

 Assess the risk of a given release and develop a back-out plan in case problems should arise
 Inform all the stakeholders

Whereas software development projects typically last from a few months to a few years, the maintenance phase usually
lasts for many years. Making estimates of resources is a key element of maintenance planning. Those resources should be
included in the developers' project planning budgets. Software maintenance planning should begin with the decision to
develop a new system and should consider quality objectives. A concept document should be developed, followed by a
maintenance plan.

The concept document for maintenance should address:

 The scope of the software maintenance


 Adaptation of the software maintenance process
 Identification of the software maintenance organization
 An estimate of software maintenance costs

The next step is to develop a corresponding software maintenance plan. This plan should be prepared during software
development, and should specify how users will request software modifications or report problems.

Finally, at the highest level, the maintenance organization will have to conduct business planning activities (budgetary,
financial, and human resources) just like all the other divisions of the organization.

Techniques for Maintenance

This subarea introduces some of the generally accepted techniques used in software maintenance.

Program Comprehension

Programmers spend considerable time in reading and understanding programs in order to implement changes. Code browsers
are key tools for program comprehension. Clear and concise documentation can aid in program comprehension.

Reengineering

Reengineering is defined as the examination and alteration of software to reconstitute it in a new form, and includes the
subsequent implementation of the new form.

Reverse engineering

Reverse engineering is the process of analyzing software to identify the software's components and their interrelationships
and to create representations of the software in another form or at higher levels of abstraction. Reverse engineering is passive;
it does not change the software, or result in new software.

3.11 SOFTWARE MAINTENANCE


The term software maintenance usually refers to changes that must be made to software after it has been delivered to the
customer or user. The definition of software maintenance by IEEE (1993) is as follows:

"The modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt
the product to a modified environment".

There are four types of software maintenance:


 Corrective Maintenance
 Adaptive Maintenance
 Perfective Maintenance
 Preventive Maintenance

Corrective maintenance deals with the repair of faults or defects found. A defect can result from design errors, logic errors,
and coding errors. Design errors occur when, for example, changes made to the software are incorrect, incomplete, or wrongly
communicated, or when the change request is misunderstood. Logic errors result from invalid tests and conclusions, incorrect
implementation of design specifications, faulty logic flow, or incomplete testing of data. Coding errors are caused by incorrect
implementation of the detailed logic design and incorrect use of the source code logic. Defects are also caused by data
processing errors and system performance errors. All these errors, sometimes called "residual errors" or bugs, prevent the
software from conforming to its agreed specifications. The need for corrective maintenance is usually initiated by bug reports
drawn up by the end users.

Adaptive maintenance consists of adapting software to changes in the environment, such as the hardware or the operating system.
The term environment in this context refers to the totality of all conditions and influences which act from outside upon the
system, for example, business rules, government policies, work patterns, and software and hardware platforms. The need for
adaptive maintenance can only be recognized by monitoring the environment. Consider, for example, "B4U Call", an Internet
application that helps users compare the mobile phone packages offered by different service providers; adding a complete new
service provider to the application, or removing one, requires adaptive maintenance on the system.

Perfective maintenance is concerned with functional enhancements to the system and with activities to increase the system's
performance or to enhance its user interface. A successful piece of software tends to be subjected to a succession of
changes, resulting in an increase in the number of requirements. This is based on the premise that as the software becomes
useful, the users tend to experiment with new cases beyond the scope for which it was initially developed. Examples of
perfective maintenance include adding a new report to a sales analysis system, improving a terminal dialogue to make it
more user-friendly, and adding an online HELP command.

Preventive maintenance concerns activities aimed at increasing the system's maintainability, such as updating documentation,
adding comments, and improving the modular structure of the system. The long-term effect of corrective, adaptive, and
perfective changes is an increase in system complexity. As a large program is continuously changed, its complexity, which
reflects deteriorating structure, increases unless work is done to maintain or reduce it. This work is known as preventive change.
The change is usually initiated from within the maintenance organization with the intention of making programs easier to
understand and hence facilitating future maintenance work. Examples of preventive change include restructuring and
optimizing code and updating documentation.

Among these four types of maintenance only corrective maintenance is traditional maintenance. The other types can be
considered software evolution. Software evolution is now widely used in the software maintenance community.

In order to increase the maintainability of software we need to know what characteristics of a product affects its
maintainability. The factors that affect maintenance include system size, system age, number of input/output data items,
application type, programming language and the degree of structure.

Larger systems require more maintenance effort than smaller systems, because there is a greater learning curve associated
with larger systems and because larger systems are more complex in terms of the variety of functions they perform.

For example, a 10% change in a module of 200 lines of code is more expensive than a 20% change in a module of 100 lines of
code. The factors that decrease maintenance effort are:
1. Use of structured techniques
2. Use of automated tools
3. Use of data-base techniques
4. Good data administration
5. Experienced maintenance staff
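The cost comparison given for the 200-line and 100-line modules can be checked with a little arithmetic: both changes touch the same number of lines, so the cost difference is attributable to module size, not to the amount of code edited.

```python
# A 10% change to a 200-line module vs. a 20% change to a
# 100-line module: the number of lines touched is the same.
lines_changed_large = round(0.10 * 200)
lines_changed_small = round(0.20 * 100)

assert lines_changed_large == 20
assert lines_changed_small == 20
# The extra expense of the first change comes from having to
# understand the larger 200-line module, not from editing more lines.
```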
