Unit 4

The document discusses coding, testing, and documentation in software development. It provides details on: 1. The coding process which involves transforming design documents into code and unit testing. Coding standards are used to ensure uniformity. 2. Code review and testing techniques like walkthroughs and inspections which aim to find logical errors. 3. The importance of documentation for understandability, user guidance, and managing changes. Internal documentation includes comments and headers, external documents specify requirements and design. 4. The goal of testing is to identify defects by executing the program with test cases and observing the outputs. Key terms like failures, test cases, and test suites are defined.

Uploaded by

Biswajit Mishra

UNIT-4

Coding and Testing:


CODING
Coding starts once the design phase is complete and the design documents have been
successfully reviewed. The input to the coding phase is the design document. During the
coding phase, the different modules identified in the design document are coded
according to their module specifications. The objective of the coding phase is therefore
to transform the design of a system into high-level language code and to unit test this code.

CODING STANDARDS

Good software development organizations normally require their programmers to
adhere to some well-defined, standard style of coding called coding standards.
Many software development organizations formulate their own coding standards
to suit their engineers, for the following reasons:

1. A coding standard gives a uniform appearance to the code written by different
engineers.
2. It enhances code understanding.
3. It encourages good programming practices.
4. A coding standard lists several rules to be followed during coding, such as the
way variables are to be named, the way the code is to be laid out, error return
conventions, etc.
Representative Coding Standards

1. Rules for limiting the use of global: These rules list what types of data can be
declared global and what cannot.

2. Contents of the headers preceding the code of different modules: The
information contained in the headers of different modules should be standard for
an organization. The exact format in which the header information is organized
can also be specified. The following are some standard header data:

• Name of the module.

• Date on which the module was created.

• Author’s name.
• Modification history.

• Synopsis of the module.

• Different functions supported, along with their input/output parameters.

• Global variables accessed/modified by the module.

3. Naming conventions for global variables, local variables, and constant
identifiers: A possible naming convention is that global variable names always
start with a capital letter, local variable names are made of small letters, and
constant names consist of capital letters only.

4. Error return conventions and exception handling mechanisms: The way error
conditions are reported by the different functions in a program should be
standard within an organization. For example, all functions, on encountering
an error condition, should consistently return either a 0 or a 1.
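As an illustrative sketch of how these standards might combine in practice (the module name, the stack example, and its functions are hypothetical, not taken from the text):

```c
/*
 * Module   : stack
 * Created  : (date of creation)
 * Author   : (author's name)
 * History  : (modification history)
 * Synopsis : A fixed-size integer stack.
 * Functions: stack_push(int), stack_pop(int *) -- both return 0 on
 *            success and 1 on error, per the error return convention.
 * Globals  : Stack_top, Stack_data (modified by both functions).
 */

#define STACK_MAX 100              /* constant identifiers in capitals   */

static int Stack_data[STACK_MAX];  /* global names start with a capital  */
static int Stack_top = 0;

int stack_push(int value)
{
    if (Stack_top == STACK_MAX)    /* error: stack full  */
        return 1;
    Stack_data[Stack_top] = value; /* local names in small letters */
    Stack_top = Stack_top + 1;
    return 0;
}

int stack_pop(int *value)
{
    if (Stack_top == 0)            /* error: stack empty */
        return 1;
    Stack_top = Stack_top - 1;
    *value = Stack_data[Stack_top];
    return 0;
}
```

Note how a single, consistent error return value (1) lets every caller check failures the same way across the organization's code.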

CODE REVIEW
Code review for a module (that is, a unit) is undertaken after the module successfully
compiles, that is, after all the syntax errors have been eliminated from the module.
Obviously, code review is not intended to detect syntax errors in a program; it is
designed to detect logical, algorithmic, and programming errors. Code review has
been recognised as an extremely cost-effective strategy for eliminating coding errors
and for producing high quality code.

There are two types of code reviews:

1. Code Walkthroughs
2. Code Inspection
1. CODE WALK THROUGHS
Code walkthrough is an informal code analysis technique. In this technique, after a
module has been coded and successfully compiled (all syntax errors eliminated), a
few members of the development team are given the code a few days before the
walkthrough meeting, to read and understand it. Each member selects some test cases
and simulates the execution of the code by hand (i.e., traces the execution through each
statement and function call). The main objective of the walkthrough is to
discover the algorithmic and logical errors in the code. The members note down their
findings and discuss them in a walkthrough meeting where the coder of the module
is present.
Even though a code walk through is an informal analysis technique, several
guidelines have evolved over the years for making this naïve but useful analysis
technique more effective. Of course, these guidelines are based on personal
experience, common sense, and several subjective factors. Therefore, these
guidelines should be considered as examples rather than accepted as rules to be
applied dogmatically.

Some of these guidelines are the following.

• The team performing the code walkthrough should be neither too big nor too small.
Ideally, it should consist of between three and seven members.

• Discussion should focus on discovery of errors and not on how to fix the discovered
errors.

• In order to foster cooperation and to avoid the feeling among engineers that they
are being evaluated in the code walk through meeting, managers should not attend
the walk through meetings.

2. CODE INSPECTION

The aim of code inspection is to discover common types of errors caused by
oversight and improper programming. In addition, adherence to coding standards
is also checked during code inspection. Good software development companies
collect statistics to identify the types of errors most frequently committed by their
engineers. This error history can then be used to look out for those errors during inspection.

The following lists some classical programming errors checked during code
inspection :-

o Use of uninitialized variables.
o Jumps into loops.
o Non-terminating loops.
o Incompatible assignments.
o Array indices out of bounds.
o Improper storage allocation and de-allocation.
o Mismatches between actual and formal parameters in procedure calls.
o Use of incorrect logical operators or incorrect precedence among operators.
o Improper modification of loop variables.
o Comparison of equality of floating point variables, etc.
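The last error in the list can be made concrete with a small C sketch of my own (a hypothetical illustration, not from the text): testing floating point variables for exact equality, and the usual remedy an inspector would suggest.

```c
#include <math.h>

/* Classical error: testing floating point variables for exact equality.
 * 0.1 + 0.2 is not exactly 0.3 in binary floating point, so this
 * comparison reports "not equal" for values that are logically equal. */
int equal_exact(double a, double b)
{
    return a == b;
}

/* Remedy: compare the difference against a small tolerance instead. */
int equal_approx(double a, double b)
{
    return fabs(a - b) < 1e-9;
}
```

An inspection checklist entry for this error would flag any `==` or `!=` between floating point operands and ask for a tolerance-based comparison.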
3. Cleanroom Technique
Cleanroom technique was pioneered at IBM. This technique relies heavily on
walkthroughs, inspection, and formal verification for bug removal. The
programmers are not allowed to test any of their code by executing the code other
than doing some syntax testing using a compiler. It is interesting to note that the
term cleanroom was first coined at IBM by drawing analogy to the semiconductor
fabrication units where defects are avoided by manufacturing in an ultra-clean
atmosphere.
SOFTWARE DOCUMENTATION
When software is developed, in addition to the executable files and the source
code, several kinds of documents such as the users’ manual, software requirements
specification (SRS) document, design document, test document, installation
manual, etc., are developed as part of the software engineering process.
Good documents are helpful in several ways:
 Good documents help to enhance the understandability of a piece of code.
 Documents help the users to understand and effectively use the system.
 Good documents help to effectively tackle the manpower turnover
problem.
 Production of good documents helps the manager to effectively track the
progress of the project.
The different types of software documents can broadly be classified into the following:
internal documentation and external documentation.
1. Internal Documentation
Internal documentation is the code comprehension features provided in the source
code itself. Internal documentation can be provided in the code in several forms. The
important types of internal documentation are the following:
 Comments embedded in the source code
 Use of meaningful variable names
 Module and function headers
 Code indentation
 Code structuring (i.e., code decomposed into modules and functions)
 Use of enumerated types
 Use of constant identifiers
 Use of user-defined data types
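Several of these internal documentation features can be seen together in one short C fragment (the connection example and all names in it are hypothetical):

```c
/* A fragment showing several internal documentation features at once:
 * a function header comment, meaningful names, an enumerated type,
 * and a constant identifier. */

#define MAX_RETRIES 3           /* constant identifier          */

enum ConnectionState {          /* user-defined enumerated type */
    STATE_IDLE,
    STATE_CONNECTED,
    STATE_CLOSED
};

/* Returns the number of connection attempts still permitted. */
int retries_left(int attempts_made)
{
    if (attempts_made >= MAX_RETRIES)
        return 0;
    return MAX_RETRIES - attempts_made;
}
```

The names alone (`MAX_RETRIES`, `retries_left`, `attempts_made`) document intent that a reader would otherwise have to infer from the logic.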
2. External Documentation
External documentation is provided through various types of supporting documents such as
users’ manual, software requirements specification document, design document, test
document, etc. A systematic software development style ensures that all these documents are
of good quality and are produced in an orderly fashion.

TESTING
The aim of the testing process is to identify all defects present in a software product.
Testing thus provides a practical way of reducing defects in software and increasing
the users’ confidence in the developed software.

How to test a Program

In the testing process, a set of test inputs (or test cases) is applied to the program and its
behaviour is observed to check whether the program behaves as expected. If the program
fails to behave as expected, the conditions under which the failure occurs are noted for
later debugging and correction.

Terminologies:-- In the following, we discuss a few important terminologies that have been
standardised by the IEEE Standard Glossary of Software Engineering Terminology [IEEE, 1990]:

• Failure: A failure is a manifestation of an error (or defect or bug). However, the mere
presence of an error may not necessarily lead to a failure.

• Test Case: This is the triplet [I, S, O], where I is the data input to the software, S is the state
of the software at which the data is input, and O is the expected output of the software.

• Test Suite: It is the set of all test cases with which a given software product is to be tested.

 A mistake is essentially any programmer action that later shows up as an incorrect result during
program execution. A programmer may commit a mistake in almost any of the development
activities
 An error is the result of a mistake committed by a developer in any of the development activities.
Mistakes can give rise to an extremely large variety of errors.
 A failure of a program essentially denotes an incorrect behaviour exhibited by the program during
its execution. An incorrect behaviour is observed either as production of an incorrect result or as an
inappropriate activity carried out by the program.
 A test scenario is an abstract test case in the sense that it only identifies the aspects of the program
that are to be tested without identifying the input, state, or output.
 A test script is an encoding of a test case as a short program. Test scripts are developed for
automated execution of the test cases.
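These terms can be tied together in a short C sketch of my own (the absolute-value function and its name are hypothetical): each assertion encodes one test case, pairing an input I with its expected output O.

```c
#include <assert.h>

/* Unit under test: a hypothetical absolute-value function. */
int int_abs(int x)
{
    return x < 0 ? -x : x;
}

/* Test script: a short program encoding a small test suite for
 * automated execution. Each assertion is one test case [I, S, O];
 * the state S is empty here because int_abs keeps no internal state. */
void run_abs_tests(void)
{
    assert(int_abs(-5) == 5);   /* I = -5, O = 5 */
    assert(int_abs(0)  == 0);   /* I =  0, O = 0 */
    assert(int_abs(7)  == 7);   /* I =  7, O = 7 */
}
```

If `int_abs` contained an error that one of these inputs triggered, the failed assertion would be the observed failure.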

Verification Vs Validation (V & V)


Verification is the process of determining whether the output of one phase of software development
conforms to that of its previous phase; whereas validation is the process of determining whether a
fully developed software conforms to its requirements specification. Thus, the objective of verification
is to check if the work products produced during a phase of development conform to those produced
during the preceding phase. For example, a verification step can be to check if the design documents
produced after the design step conform to the requirements specification. On the other hand,
validation is applied to the fully developed and integrated software to check if it satisfies the
customer’s requirements.
The primary techniques used for verification include review, simulation, formal verification, and
testing. Review, simulation, and testing are usually considered as informal verification techniques.
Formal verification usually involves use of theorem proving techniques or use of automated tools such
as a model checker. On the other hand, validation techniques are primarily based on testing.
Verification is carried out during the development process to check whether the development
activities are proceeding correctly, whereas validation is carried out to check whether the
right product, as required by the customer, has been developed.

Testing Activities
Testing involves performing the following major activities:
Test suite design: The test suite is designed possibly using several test case design techniques. We
discuss a few important test case design techniques later in this Chapter.
Running test cases and checking the results to detect failures:
Each test case is run and the actual results are compared with the expected results. A mismatch
between the actual and expected results indicates a failure. The test cases for which the system
fails are noted down for later debugging.
Locate error:
In this activity, the failure symptoms are analysed to locate the errors. For each failure observed
during the previous activity, the statements that are in error are identified.
Error correction:
After the error is located during debugging, the code is appropriately changed to correct the error.
SOFTWARE TESTING STRATEGY / LEVELS OF SOFTWARE TESTING
STRATEGY Means “An elaborate and systematic plan of action “

1. The software Testing Strategy provides a road map that describes the steps to be followed
as part of Testing.
2. It indicates how much effort, time, and resources will be required when these steps are
planned and then undertaken.
3. So any testing strategy must contain test planning, test case design, test execution, and
resultant data collection and evaluation. The characteristics of a testing strategy are:
1. Testing begins at the component level (module) and works "outward" toward the
integration of the entire computer-based system.
2. Different testing techniques are appropriate at different points in time.
3. Testing is conducted by the developer of the software and (for large projects) an
Independent Test group (ITG).
4. Testing and Debugging are different activities, but Debugging must be accommodated
in any testing Strategy.
WHAT IS THE OVERALL STRATEGY FOR SOFTWARE TESTING?
We can divide testing strategy into a series of 4 steps that are implemented sequentially:-

1. Unit Testing
2. Integration Testing
3. Validation Testing
4. System Testing

1. Initially, tests focus on each component (module) individually, ensuring that it functions
properly as a unit.
2. Unit testing makes heavy use of white box testing.
3. After all the components (modules) have been tested individually, the modules are slowly
integrated and tested at each level of integration.
4. So integration testing focuses on verification and program construction. Here black box
testing and white box testing are both applied.
5. After integration, validation testing is conducted.
6. Validation testing provides final assurance that the software meets all functional, behavioural,
and performance requirements.
7. Black box testing is used exclusively for validation testing.
8. Finally, the fully integrated system is tested as a whole; this is called system testing.
9. Here the software must be combined with other system elements (e.g., hardware, people,
databases).
10. System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.

1. UNIT TESTING(MODULE TESTING OR COMPONENT TESTING)

1. Unit testing applies to the smallest unit of software design, called a module or component.
2. Here, with the help of the detailed design description, important control paths are tested to
uncover errors within the boundary of the module.
3. Unit testing is white box oriented, and the step can be conducted in parallel for
multiple modules.
4. The tests that occur as part of unit testing cover:

1. Interface 2. Local data structures 3. Boundary conditions 4. Independent paths 5. Error handling paths

5. Among the more common errors in computation are:

(1) Misunderstood or incorrect arithmetic precedence
(2) Mixed mode operations
(3) Incorrect initialization
(4) Precision inaccuracy
(5) Incorrect symbolic representation of an expression
UNIT TEST PROCEDURES (HOW TO TEST A MODULE- STEPS)

1. In order to test a single module, we need a complete environment that provides all that is
necessary for the execution of the module.
2. As a module is not a stand-alone program, driver and stub software must be
developed for each unit test.
3. A driver is nothing more than a main program that accepts test case data, passes that
data to the module to be tested, and prints the relevant results.
4. Stubs serve to replace modules that are subordinate to the module being tested.
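A minimal sketch of this arrangement in C (the tax-rate example and every name in it are hypothetical): the stub stands in for a subordinate module that is not yet ready, and the driver feeds test data to the module under test and checks the result.

```c
#include <math.h>

/* Stub: replaces the subordinate tax-lookup module with a canned answer. */
double tax_rate_stub(const char *region)
{
    (void)region;              /* the real lookup is not implemented yet */
    return 0.10;
}

/* Module under test: depends on the subordinate tax-rate module, which
 * is supplied as a function pointer so the stub can be wired in. */
double price_with_tax(double price, double (*tax_rate)(const char *))
{
    return price * (1.0 + tax_rate("XX"));
}

/* Driver: in practice a throwaway main program; sketched here as a
 * function that passes a test case to the module and checks the result. */
int driver_ok(void)
{
    double result = price_with_tax(100.0, tax_rate_stub);
    return fabs(result - 110.0) < 1e-6;   /* 1 if the unit behaved as expected */
}
```

Once the real tax-lookup module exists, it replaces `tax_rate_stub` without changing the module under test.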
2. INTEGRATION TESTING

1. The primary objective of integration testing is to test the module interfaces in order
to ensure that there are no errors in parameter passing when one module calls another.

2. The integration plan specifies the steps & the order in which modules are combined to
realize the full software. After each integration step the partially integrated software is tested.

3. Approaches for integration testing :------------

1. Big Bang Integration Approach

2. Incremental Integration Approach

(i)Top down Integration

(ii)Bottom up Integration

3. Regression Testing

4. Mixed Mode Integration Testing.

5. Smoke Testing

1. Big Bang Approach: It is the simplest integration testing approach, in which all the modules
making up the software are integrated in a single step; that is, all the modules are simply put
together and tested. This technique is useful only for small software. The problem is that once
an error is found during integration testing, it is very difficult to localize it, as it may belong
to any of the modules being integrated.

2. Incremental Integration Testing Approach

The program is constructed and tested in small increments, where errors are easier to isolate
and correct, and interfaces are more likely to be tested completely.

It is of the following types:

[A] Top Down Integration

1. Modules are integrated by moving downward through the control hierarchy, beginning
with the main control module (main program). Modules subordinate (and ultimately
subordinate) to the main control module are inserted into the structure in either a depth-first
or breadth-first manner.

2. Depth-first integration would integrate all modules on a major control path of the
structure. Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left-hand path, components M1, M2, and M5 would be
integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated. Then, the central and right-hand control paths are built. (Figure given below.)

3. Breadth-first integration incorporates all components directly subordinate at each level,
moving across the structure horizontally. From the figure, components M2, M3, and M4 (a
replacement for stub S4) would be integrated first; the next control level brings in M5, M6, and so on.

Steps For Integration Testing:-

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for all modules
directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual modules.

3. Tests are conducted as each module is integrated.

4. On completion of each set of tests, another stub is replaced with the real module.

5. Regression testing may be conducted to ensure that new errors have not been introduced.

[B] Bottom Up Integration
Bottom-up integration testing begins construction and testing with atomic modules (i.e.,
modules at the lowest levels in the program structure).

Because modules are integrated from the bottom up, the processing required for modules
subordinate to a given level is always available, and the need for stubs is eliminated.

Steps: ---------

1. Low-level modules are combined into clusters (sometimes called builds) that perform a
specific software subfunction.

2. A driver (a control program) is written to coordinate test case input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined, moving upward in the program structure.

5. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block in the figure).
6. Modules in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed
and the clusters are interfaced directly to Ma.
7. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both
Ma and Mb will ultimately be integrated with module Mc, and so forth.
8. The chief advantage of bottom-up integration is that it eliminates the need for complex stubs.
[3]Regression Testing:-
1. Each time a new module is added as part of integration testing, the software changes.
New data flow paths are established, new I/O may occur, and new control logic is
invoked.
2. These changes may cause problems with functions that previously worked smoothly.
3. In an integration test strategy, regression testing is the re-execution of some subset of tests
that have already been conducted, to ensure that the changes have not propagated
unintended side effects.
4. The regression test suite contains three different classes of test cases:-
• A representative sample of tests that will exercise all software functions.

• Additional tests that focus on software functions that are likely to be affected by the
change.

• Tests that focus on the software components that have been changed.
[4] Mixed Mode Integration (Sandwich Testing)

It follows a combination of the top-down and bottom-up testing approaches.

In the mixed testing approach, testing can start as and when modules become available, so this
is the most commonly used integration testing approach.

3. SYSTEM TESTING

1. System tests are designed to validate fully developed software to assure that it meets
requirements.

There are 3 kinds of system testing:-

1. Alpha Testing: It is carried out by the test team within the developing organization.

2. Beta Testing: It is performed by a select group of friendly customers.

3. Acceptance Testing: It is performed by the customer to determine whether to accept or
reject the delivery of the system.

Broadly this test can be classified into two groups:-

1. Functionality Test
2. Performance Test
1. Functionality Test: It tests the functionality of the software to check whether it
satisfies the functional requirements as documented in the SRS document. The
functionality test cases are designed using a black box approach.
2. Performance Test: It tests the conformance of the system with the non-functional
requirements of the system. The performance tests are as follows:
3. Stress Testing:
1. It is also called endurance testing.
2. It evaluates system performance when it is stressed for short periods of time.
3. These are black box tests designed to impose a range of abnormal and
even illegal input conditions so as to stress the capabilities of the software.
4. Input data volume, input data rate, processing time, and utilization of memory are
tested beyond the designed capacity.
e.g., suppose an operating system is supposed to support 15 multiprogramming jobs;
the system is then stressed by attempting to run more than 15 jobs simultaneously.

Volume Testing: It is important to check whether the data structures (arrays, queues,
stacks, etc.) have been designed to successfully handle extraordinary situations.

Configuration Testing: This testing is used to analyse system behaviour in the various
hardware and software configurations specified in the requirements.

Compatibility Testing: This testing is required when the system interfaces with other
types of systems, e.g., when the system communicates with a large database system to
retrieve information. Compatibility testing is then required to test the speed and
accuracy of the data retrieval.

Regression Testing: This testing is required when the system being tested is an upgrade
of an already existing system, carried out to fix some bugs or to enhance functionality,
performance, etc.

Recovery Testing: This testing tests the response of the system to the presence of
faults or to the loss of power, devices, services, data, etc. The system is subjected to the
loss of the mentioned resources in order to check whether it recovers satisfactorily.

E.g., printers can be disconnected to check if the system hangs, or the power may be
shut down to check the extent of data loss and corruption.

Maintenance Testing: This testing addresses the diagnostic programs and other
procedures that are required to be developed to help implement the maintenance of
the system.

Documentation Testing: The documentation is checked to ensure that the required user
manuals, maintenance manuals, and technical manuals exist and are consistent.

Usability Testing: This testing checks the user interface to see whether it meets all the
user requirements. During usability testing, the display screens, messages, and report
formats are tested.

Security Testing:- Security Testing attempts to verify that protection mechanisms built
into a system will protect it from improper entry.

1. Any computer-based system that manages sensitive information or causes actions
that can improperly harm individuals is a target for illegal entry. Entry includes a
broad range of activities: hackers who attempt to penetrate systems for sport;
disloyal employees who attempt to penetrate for revenge; and dishonest individuals
who attempt to penetrate for illicit personal gain.
2. During security testing, the Tester plays the role(s) of the individual who desires
to penetrate the System.
3. Anything goes! The tester may attempt to acquire passwords through external
clerical means; may attack the system with custom software designed to break down
any defenses that have been constructed; may overwhelm the system, thereby denying
service to others; may purposely cause system errors, hoping to penetrate during
recovery; or may browse through insecure data, hoping to find the key to system entry.
4. Given enough time and resources, good security testing will ultimately penetrate
a system.
The role of the system designer is to make penetration cost more than the value of the
information that will be obtained.

SOFTWARE TESTING TECHNIQUES


What is it?

Once source code has been generated, software must be tested to detect (and correct) as many
errors as possible before delivery to your customer.

Your goal is to design a series of test cases that have a high likelihood of finding errors.
But how?

That’s where software testing techniques enter into the picture. These techniques provide
systematic guidance for designing tests that:-

(1) Exercise the internal logic of software modules.

(2) Exercise the input and output domains of the program to uncover errors in program
function, behaviour, and performance.

Who does it?

During early stages of testing, a software engineer performs all tests. However, as the testing
process progresses, testing specialists may become involved.

What are the steps?

Software is tested in two different ways:

(1) Internal program logic is exercised using “white box” test case design techniques.

(2) Software requirements are exercised using “black box” test case design techniques.

In both cases, the purpose is to find the maximum number of errors with the minimum
amount of effort and time.

Test Case Design


1. The design of test cases for software is as challenging as the design of the software product itself.
2. The test cases must be designed in such a way that the test suite is of reasonable size and
can uncover as many errors as possible in the software.
3. A large collection of input values (test cases), or randomly selected test cases, does not
guarantee that all errors are uncovered.
e.g., consider the following code:

if (x > y)
    max = x;
else
    max = x;    /* defect: should be max = y */

For the above code, the test suite {(x=3, y=2), (x=2, y=3)} can detect the error, whereas the
larger test suite {(x=3, y=2), (x=4, y=3), (x=5, y=1)} does not. This implies that a larger test
suite does not necessarily detect more errors unless it is carefully designed. A systematic
approach should be followed to design an optimal test suite.
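The fragment above can be made runnable by wrapping it in a function (the name `faulty_max` is mine); keeping the seeded defect shows why only a test case with y > x exposes it.

```c
/* The faulty fragment from the text: both branches assign x to max,
 * so the defect only shows up when y is the larger value. */
int faulty_max(int x, int y)
{
    int max;
    if (x > y)
        max = x;
    else
        max = x;   /* defect: should be max = y */
    return max;
}
```

For (x=3, y=2), (x=4, y=3), and (x=5, y=1) the function happens to return the right answer, so the larger suite passes silently; for (x=2, y=3) it returns 2 instead of 3, so the smaller suite exposes the defect.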

Many test case design methods have been developed to support a systematic approach to testing:

1. White Box Or Glass Box Or Structural Testing Approach

2. Black Box Or Functional Or Behavioral Testing Approach

WHITE BOX TESTING OR GLASS BOX TESTING OR STRUCTURAL TESTING

Using white box testing methods, the software engineer can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once,
(2) exercise all logical decisions on their true and false sides, (3) execute all loops at their
boundaries, and (4) exercise internal data structures to ensure their validity.

White box testing can be classified in the following ways:

1. Statement Coverage
2. Basis Path Testing:
   a. Control Flow Graph
   b. Path
   c. Linearly Independent Path
   d. Cyclomatic Complexity
3. Branch Coverage
4. Control Structure Testing:
   a. Condition Testing
   b. Data Flow Testing
   c. Loop Testing: simple, nested, concatenated, unstructured

1. Statement Coverage Testing

In statement coverage, test cases are designed in such a way that every statement in the
program is executed at least once. The test cases should be designed to cover all relevant
input values.

e.g.
if (x > y)
    x = x + 5;
else
    y = y + 5;

Test cases: {(4, 3), (3, 4), (3, 3)}
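A runnable form of the example above, wrapped in a hypothetical function of my own so each statement can be exercised and checked:

```c
/* Statement coverage: the suite {(4,3), (3,4), (3,3)} executes every
 * statement -- (4,3) takes the if branch, while (3,4) and (3,3) take
 * the else branch. */
int adjust(int x, int y)
{
    if (x > y)
        x = x + 5;
    else
        y = y + 5;
    return x + y;
}
```

Note that the single test case (4, 3) alone would leave the else branch unexecuted, so its statement would never be covered.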

2. Basis Path Testing

It is a white box testing technique proposed by McCabe. Here, every linearly independent
path in the program is tested at least once.

A. Control Flow Graph/ Flow Graph(CFG)

A control Flow Graph describes the sequence in which the different instructions of a program
get executed.

Steps for drawing a control flow graph:

1. First, number all the statements of the program.
2. The numbered statements serve as the nodes of the control flow graph.
3. An edge exists from one node to another if the execution of the first statement can be
immediately followed by the execution of the second.

To draw a control flow graph, we must know three basic constructs:

1. Sequence 2. Selection 3. Iteration


Sequence:
1. a = 5;
2. b = a * 2 - 1;

Selection:
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c * c;

Iteration:
1. while (a > b) {
2.     b = b - 1;
3.     b = b * a;
   }
4. c = a + b;

B. Path

A path is a sequence of nodes and edges from the start node to a terminal node of the
control flow graph of a program.

For the selection example above, the paths are 1-2-4 and 1-3-4.

C. Linearly Independent Path:-

1. A linearly independent path is any path through the program that introduces at least one
new edge not included in any other linearly independent path.
2. Any path having a new node automatically implies that it has a new edge.
3. Identifying the linearly independent paths of a simple program is easy, but for a
complicated program it is difficult.

McCabe’s cyclomatic complexity metric helps us determine an upper bound on the
number of linearly independent paths in a program. It tells how many such paths there
are, but does not identify them.

D. McCabe’s Cyclomatic Complexity Metric / Structural Complexity Metric

It defines an upper bound on the number of linearly independent paths in a program and
provides a quantitative measure of the logical complexity of the program.

How Cyclomatic Complexity Calculated

We can calculate number of linear independent path in 3 methods:-

Method 1

Given a control flow graph G of a program, the cyclomatic complexity V(G)


can be computed as:

V(G) = E – N + 2

Where N is the number of nodes of the control flow graph and E is the number of edges in
the control flow graph.

Here E = 4 and N = 4, so

V(G) = 4 - 4 + 2 = 2

Method 2

Another method of computing the cyclomatic complexity of a program from an inspection of its control flow graph is as follows:-

V (G) = Total number of bounded areas + 1

Bounded Area means any region enclosed by nodes and edges

Here the total bounded area = 1, so V(G) = 1 + 1 = 2

Method 3

The cyclomatic complexity of a program can also be computed easily by counting the number of decision statements in the program. If N is the number of decision statements of a program, then McCabe’s metric is equal to N + 1.

Here the number of decision statements = 1,

so V(G) = 1 + 1 = 2

Derivation of Test Cases From Cyclomatic Complexity

The following sequence of steps needs to be undertaken for deriving path coverage based test cases of a program:-
1. Draw the CFG (control flow graph).
2. Determine V(G), i.e., the maximum number of linearly independent paths.
3. Determine all the paths and describe them.
4. Prepare test cases that exercise each path.
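The three computation methods can be sketched together for the selection example used earlier (an illustrative sketch; the node/edge lists and the bounded-area count are read off the example CFG by hand):

```python
# Cyclomatic complexity of the example CFG from the notes, computed by
# all three methods. The graph is the selection example:
# node 1: if (a > b)   node 2: c = 3   node 3: c = 5   node 4: c = c * c
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {1, 2, 3, 4}

# Method 1: V(G) = E - N + 2
v1 = len(edges) - len(nodes) + 2

# Method 2: V(G) = bounded areas + 1 (this graph encloses one region)
bounded_areas = 1
v2 = bounded_areas + 1

# Method 3: V(G) = decision statements + 1 (one decision: a > b)
decisions = 1
v3 = decisions + 1

print(v1, v2, v3)  # 2 2 2 -- two linearly independent paths: 1-2-4 and 1-3-4
```

All three methods agree: V(G) = 2, so a basis set needs two test cases, one forcing a > b and one forcing the else branch.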
[3] Branch Coverage / Decision Coverage (DC)
It is also known as edge testing. Here each edge of a program’s control flow graph is
traversed at least once.

Example:-

if (a > b)
    c = 3;
else
    c = 5;
c = c * c;

Test cases: {(3, 4), (4, 3), (3, 3)}

[4]Control Structure Testing


It is an alternative to basis path testing that improves the quality of white-box testing. It is of the following types:-

a. Condition Testing

Condition testing is a test case design method that exercises the logical conditions contained
in a program module. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational expression takes the form

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<, ≤, =, ≠ (non equality), >, or ≥. The condition testing method focuses on testing each condition
in the program.
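Condition testing can be sketched for a hypothetical compound condition such as (a > 1) and (b == 0): test cases are chosen so that each simple condition takes both truth values at least once (the condition and the test set below are illustrative, not from the notes):

```python
# Condition testing sketch (illustrative). For the hypothetical compound
# condition (a > 1) and (b == 0), we pick test cases so that each simple
# condition evaluates to both True and False at least once.

def outcomes(a, b):
    # Return the truth value of each simple condition separately.
    return (a > 1, b == 0)

test_cases = [(2, 0), (0, 0), (2, 5)]
seen_c1 = {outcomes(a, b)[0] for a, b in test_cases}
seen_c2 = {outcomes(a, b)[1] for a, b in test_cases}

# Each simple condition has been exercised as both True and False.
print(seen_c1 == {True, False} and seen_c2 == {True, False})  # True
```

Note that exercising each simple condition both ways is a stronger requirement than just making the whole compound condition true and false once.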

b.Data Flow Testing

Data flow-based testing method selects test paths of a program according to the locations of
the definitions and uses of different variables in a program.

The data flow testing approach assumes that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables.
For a statement with S as its statement number,

DEF(S) = {X | statement S contains a definition of X}

USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the
condition of statement S. The definition of variable X at statement S is said to be live at
statement S' if there exists a path from statement S to statement S' that contains no other
definition of X.
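The DEF/USE definitions can be made concrete with a tiny hand-annotated program (the four-statement program and the `live` helper below are illustrative sketches, not from the notes):

```python
# DEF/USE sets for a small illustrative program (hand-annotated):
#   S1: x = 5          S2: y = x + 2
#   S3: if (y > 10)    S4:     x = y
DEF = {1: {"x"}, 2: {"y"}, 3: set(), 4: {"x"}}   # if-statements define nothing
USE = {1: set(), 2: {"x"}, 3: {"y"}, 4: {"y"}}

def live(var, s_def, s_use, path):
    """Is the definition of `var` at `s_def` live at `s_use` along `path`?
    True when the path contains no other definition of `var` in between."""
    interior = path[1:-1]  # statements strictly between definition and use
    return var in DEF[s_def] and var in USE[s_use] and \
        all(var not in DEF[s] for s in interior)

print(live("x", 1, 2, [1, 2]))      # True: x defined at S1 reaches S2
print(live("y", 2, 4, [2, 3, 4]))   # True: S3 does not redefine y
```

Data flow test paths are then chosen so that every such definition-use pair is exercised at least once.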
[5] Loop Testing
Loop testing is a white-box testing technique that focuses exclusively on the validity of loop
constructs. Four different classes of loops can be defined:
simple loops, concatenated loops, nested loops, and unstructured loops.

Simple Loops:- The following set of tests can be applied to simple loops, where n is the
maximum number of allowable passes through the loop.

1. Skip the loop entirely.

2. Only one pass through the loop.

3. Two passes through the loop.

4. m passes through the loop where m < n.

5. n -1, n , n + 1 passes through the loop.
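The pass counts listed above can be generated mechanically for a given loop bound (a sketch; the helper name and the choice of a "typical" m are assumptions for illustration):

```python
# Test-value selection for a simple loop (sketch). For a loop with at most
# n allowable passes, the classes above suggest the pass counts:
# 0 (skip), 1, 2, a typical m < n, and n-1, n, n+1.

def simple_loop_pass_counts(n, m):
    assert 2 < m < n - 1, "m should be a typical interior value"
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_pass_counts(10, 5))  # [0, 1, 2, 5, 9, 10, 11]
```

The n + 1 case deliberately probes whether the loop guard is off by one at the upper bound.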

Nested Loops: - A nested loop is a loop contained within another loop, so the number of possible tests grows rapidly. The following approach is useful for nested loops.

1. Start at the innermost loop. Set all other loops to minimum values.

2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or
excluded values.

3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to "typical" values.

4. Continue until all loops have been tested.

Concatenated loops. Concatenated loops can be tested using the approach defined for simple
loops, if each of the loops is independent of the others. However, if two loops are concatenated
and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not
independent. When the loops are not independent, the approach applied to nested loops is
recommended.

Unstructured loops. Whenever possible, this class of loops should be redesigned to reflect the
use of the structured programming constructs.
BLACK BOX TESTING OR FUNCTIONAL OR BEHAVIORAL TESTING
1. It is also called Behavioral testing. In BBT, test cases are designed from an
examination of the input/output values only and no knowledge of design or
code is required.
2. It focuses on functional requirements of the software.
3. BBT attempts to find errors in the following categories:-

1. Incorrect or Missing function


2. Interface Errors
3. Errors in Data structure or external data base access
4. Behavior or performance Errors

5. Initialization and Termination errors


BBT is applied during the later stages of testing because attention is focused on the
information domain.
The following are the approaches to BBT:-
1. Equivalence class partitioning
2. Boundary value analysis
3. Graph Based Testing
4. Comparison Testing or Back to Back testing
5. Orthogonal Array Testing

1. EQUIVALENCE CLASS PARTITIONING


In this method the input domain values are partitioned into equivalence classes such that any
value from a given class produces the same result. This helps reduce the number of test cases
for a program. The equivalence classes are designed by examining the input and output values
of the software.

Guidelines for defining equivalence class

1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4. If an input condition is Boolean, then one valid and one invalid class are defined.
e.g. for software that computes the square root of an input integer which can assume values
from 0 to 5000, there are 3 equivalence classes:-

1. The set of negative integers

2. The set of integers in the range of 0 to 5000

3. The set of integers larger than 5000


So the test cases must include representative values from each of the 3 equivalence classes
and a possible test set can therefore be

{-5, 500, 6000}
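The partitioning for this example can be sketched as a classifier, checking that the representative test set {-5, 500, 6000} covers all three classes (the function and class names are illustrative, not from the notes):

```python
# Equivalence classes for the square-root example: negative integers,
# integers in [0, 5000], and integers above 5000 (sketch).

def equivalence_class(x):
    if x < 0:
        return "negative"
    if x <= 5000:
        return "in-range"
    return "too-large"

# One representative per class, as in the notes:
test_set = [-5, 500, 6000]
print([equivalence_class(x) for x in test_set])
# ['negative', 'in-range', 'too-large'] -- all three classes covered
```

Any other representatives (say -1, 0, 5001) would do equally well; the point is one value per class, not any particular value.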

2.BOUNDARY VALUE ANALYSIS (BVA)

1. BVA leads to a selection of test cases that exercise boundary values.

2. A greater number of errors tend to occur at the boundaries of the input domain than in
the center.
3. For this reason BVA was developed as a technique. It leads to the selection of test cases at
the edges of the classes rather than focusing only on input conditions.
Guidelines for BVA

1. If an input condition specifies a range bounded by values a and b, test cases should
be designed with values a and b, and with values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers; values just above and just below the minimum
and maximum should also be tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be certain to design a
test case to exercise the data structure at its boundary.
E.g. for a function that computes the square root of integer values in the range 0
to 5000, the test cases must include the values {0, -1, 5000, 5001}.
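Guideline 1 can be sketched as a small generator of boundary test values for a range [a, b] (an illustrative helper, not from the notes):

```python
# Boundary value selection for an input range [a, b] (sketch): take the
# boundaries themselves plus a value just below a and just above b.

def boundary_values(a, b):
    return [a - 1, a, b, b + 1]

print(boundary_values(0, 5000))  # [-1, 0, 5000, 5001]
```

This reproduces exactly the test set given in the example above for the range 0 to 5000.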

DEBUGGING
1. Debugging is the process that results in the removal of errors. Once errors are identified,
it is necessary first to locate the program statements responsible for the errors and then to fix
them. Debugging is not testing, but always occurs as a consequence of testing. The
debugging process attempts to match symptoms with causes, thereby leading to error
correction.
2. The debugging process always has one of two outcomes:-
1. The cause will be found and corrected.
2. The cause will not be found.
In the second case, the software engineer designs a test case to help validate the suspicion and
works toward error correction in an iterative fashion.

The difficulty arises in debugging process due to the following reasons:-

1. The symptom may appear in one part of a program while the cause is located at a site that
is far removed.
2. The symptom may disappear temporarily when another error is corrected.
3. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application
in which input ordering is indeterminate).
7. The symptom may be due to causes that are distributed across a number of tasks running
on different processors.
[ART OF DEBUGGING]

DEBUGGING APPROACHES

There are several approaches available to identify error locations. Each is useful in
appropriate circumstances.

1. Brute Force Approach:- It is the most common and least efficient method for isolating
the cause of a software error. In this approach the program is loaded with print statements
to print the intermediate values, with the hope that some of the printed values will help to
identify the statement in error.
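The brute force approach can be sketched as follows (the function and the DEBUG prints are a hypothetical illustration, not from the notes):

```python
# Brute-force debugging sketch: print statements are inserted to expose
# intermediate values while hunting for a suspected fault in `average`.

def average(values):
    total = 0
    for v in values:
        total += v
        print("DEBUG: total =", total)   # instrumentation, removed after debugging
    print("DEBUG: count =", len(values))
    return total / len(values)

# Comparing the printed intermediate values against hand-computed ones
# narrows the fault down to the statement whose output first diverges.
print(average([2, 4, 6]))  # 4.0
```

The inefficiency the notes mention is visible here: every run floods the output with traces, and the programmer must sift through them by hand.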

2. Backtracking:- It is used successfully in small programs. In this approach, beginning
from the statement at which an error symptom is observed, the source code is traced
backwards until the error is discovered.

3. Cause elimination method:- In this approach a list of causes which could possibly have
contributed to the error symptoms is developed and tests are conducted to eliminate each
cause.

4. Program slicing:- This is similar to backtracking, but the search space is reduced by defining
slices. A slice of a program for a particular variable at a particular statement is the set of
source lines preceding this statement that can influence the value of that variable.
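Slicing can be sketched on a tiny hypothetical program: each line is annotated with the variable it defines and the variables it uses, and the backward slice collects only the lines that can influence the variable of interest (the program and helper below are illustrative assumptions):

```python
# Program slicing sketch. For the hypothetical program below, the slice
# for variable c at line 5 is the set of earlier lines that can influence
# c's value -- lines 2, 3, 4, but not line 1 (which only affects d).
program = {
    1: ("d", {"x"}),       # d = x + 1
    2: ("a", set()),       # a = 5
    3: ("b", {"a"}),       # b = a * 2
    4: ("c", {"a", "b"}),  # c = a + b
    5: (None, {"c"}),      # print(c)
}

def slice_for(var, line):
    """Backward slice: lines whose definitions can reach `var` at `line`."""
    wanted, result = {var}, set()
    for n in sorted((k for k in program if k < line), reverse=True):
        defined, used = program[n]
        if defined in wanted:
            result.add(n)
            wanted |= used   # now also track what this definition depends on
    return result

print(slice_for("c", 5))  # {2, 3, 4}
```

Debugging then focuses only on the lines in the slice, instead of the whole program.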

DEBUGGING GUIDELINES

Debugging is often carried out by programmers based on intuition and imagination. The
following are some guidelines for effective debugging:-

1. Debugging requires a thorough understanding of the program design. Debugging
should not be attempted with only a partial understanding of the design.

2. Debugging sometimes requires a full redesign of the system.

3. One must be aware that any error correction may introduce new errors. So after
every error fix, regression testing should be carried out.

PROGRAM ANALYSIS TOOLS


A program analysis tool usually is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to
programming standards, adequacy of testing, etc. We can classify various program analysis tools into
the following two broad categories
 Static analysis tools
 Dynamic analysis tools

1. Static Analysis Tools


Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyse the source code to compute certain metrics
characterising the source code (such as size, cyclomatic complexity, etc.) and also report certain
analytical conclusions. These also check the conformance of the code with the prescribed coding
standards. In this context, a static analysis tool displays the following analysis results:
 To what extent have the coding standards been adhered to?
 Do certain programming errors exist, such as uninitialised variables, mismatches between
actual and formal parameters, or variables that are declared but never used? A list of all such
errors is displayed.
2. Dynamic Analysis Tools
Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run time behaviour of a program. These tools usually record and analyse the actual
behaviour of a program while it is being executed. A dynamic program analysis tool (also called a
dynamic analyser) usually collects execution trace information by instrumenting the code. Code
instrumentation is usually achieved by inserting additional statements to print the values of certain
variables into a file to collect the execution trace of the program. The instrumented code when
executed, records the behaviour of the software for different test cases.
After a software has been tested with its full test suite and its behaviour recorded, the dynamic
analysis tool carries out a post execution analysis and produces reports which describe the coverage
that has been achieved by the complete test suite for the program.
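Code instrumentation, the core idea above, can be sketched as follows (the instrumented function, the `trace` list, and the branch tags are hypothetical illustrations of what a dynamic analyser's instrumented build records):

```python
# Code instrumentation sketch: extra statements record which branches
# execute, mimicking what a dynamic analyser's instrumented code does.
trace = []

def classify(n):
    if n < 0:
        trace.append("branch:negative")       # inserted instrumentation
        return "negative"
    trace.append("branch:non-negative")       # inserted instrumentation
    return "non-negative"

# Run the "test suite", then do a post-execution coverage analysis.
for case in [-3, 7]:
    classify(case)

covered = set(trace)
print(covered == {"branch:negative", "branch:non-negative"})  # True
```

A real dynamic analyser writes such trace records to a file and, after the full test suite has run, reports the achieved coverage from them.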
SOFTWARE MAINTENANCE

Software maintenance denotes any changes made to a software product after it has been
delivered to the customer. Software products need maintenance to correct errors, enhance
features, port to new platforms, etc.

Necessity of Software Maintenance / Characteristics of Software Maintenance

Software maintenance is becoming an important activity of a large number of software
organizations. When the hardware platform is changed and a software product performs some
low-level functions, maintenance is necessary. Also, whenever the support environment of a
software product changes, the software product requires rework to cope with the newer
interface. For instance, a software product may need to be maintained when the operating system
changes. Thus, every software product continues to evolve after its development through
maintenance efforts.

Types of software maintenance

There are basically three types of software maintenance. These are:

• Corrective: Corrective maintenance of a software product is necessary to rectify the bugs
observed while the system is in use.

• Adaptive: A software product might need maintenance when the customers need the product
to run on new platforms, on new operating systems, or when they need the product to interface
with new hardware or software.

• Perfective: A software product needs maintenance to support the new features that users
want it to support, to change different functionalities of the system according to customer
demands, or to enhance the performance of the system.

Characteristics of software maintenance: please go through it from R. Mall, page 546.

PROBLEMS ASSOCIATED WITH SOFTWARE MAINTENANCE


Software maintenance work typically is much more expensive than it should be and takes
more time than required. In software organizations, maintenance work is mostly carried out
using ad hoc techniques. The primary reason is that software maintenance is one of the most
neglected areas of software engineering. Even though software maintenance is fast becoming an
important area of work for many companies as the software products of yesteryear age,
software maintenance is still mostly carried out as a fire-fighting operation, rather than through
systematic and planned activities. Software maintenance has a very poor image in industry.
Therefore, an organization often cannot employ bright engineers to carry out maintenance work.
Even though maintenance suffers from a poor image, the work involved is often more challenging
than development work. During maintenance it is necessary to thoroughly understand someone
else’s work and then carry out the required modifications and extensions. Another problem
associated with maintenance work is that the majority of software products needing
maintenance are legacy products.

SOFTWARE REVERSE ENGINEERING


Software reverse engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. The purpose of reverse engineering is to
facilitate maintenance work by improving the understandability of a system and to produce the
necessary documents for a legacy system (old software, no longer actively developed, that is
hard to maintain). The first stage of reverse engineering usually focuses on carrying out
cosmetic changes to the code to improve its readability, structure, and understandability,
without changing its functionality.

After the cosmetic changes have been completed on a legacy software, the process of extracting
the code, design, and requirements specification can begin. In order to extract the design, a
full understanding of the code is needed. Some automatic tools can be used to derive the data flow
and control flow diagrams from the code. The structure chart (module call sequence and data
interchange among modules) should also be extracted. The SRS document can be written once
the full code has been thoroughly understood and the design extracted.
SOFTWARE MAINTENANCE PROCESS MODELS
First model

The first model is preferred for projects involving small reworks where the code is changed directly
and the changes are reflected in the relevant documents later. This maintenance process is graphically
presented in Figure 13.3. In this approach, the project starts by gathering the requirements for
changes. The requirements are next analysed to formulate the strategies to be adopted for code
change. At this stage, the association of at least a few members of the original development team
goes a long way in reducing the cycle time, especially for projects involving unstructured and
inadequately documented code. The availability of a working old system to the maintenance engineers
at the maintenance site greatly facilitates the task of the maintenance team as they get a good insight
into the working of the old system and also can compare the working of their modified system with
the old system. Also, debugging of the re-engineered system becomes easier as the program traces of
both the systems can be compared to localise the bugs.

Second model

The second model is preferred for projects where the amount of rework required is significant. This
approach can be represented by a reverse engineering cycle followed by a forward engineering cycle.
Such an approach is also known as software re-engineering. This process model is depicted in Figure
13.4. The reverse engineering cycle is required for legacy products. During the reverse engineering,
the old code is analysed (abstracted) to extract the module specifications. The module specifications
are then analysed to produce the design. The design is analysed (abstracted) to produce the original
requirements specification. The change requests are then applied to this requirements specification
to arrive at the new requirements specification. At this point a forward engineering is carried out to
produce the new code. At the design, module specification, and coding a substantial reuse is made
from the reverse engineered products. An important advantage of this approach is that it produces a
more structured design compared to what the original product had, produces good documentation,
and very often results in increased efficiency. The efficiency improvements are brought about by a
more efficient design.
SOFTWARE RE-ENGINEERING
The reconstruction of software during the maintenance phase through a reverse engineering
cycle followed by a forward engineering cycle, where the amount of rework required is
significant, is known as software re-engineering.

The reverse engineering cycle is required for legacy products. During the reverse engineering,
the old code is analyzed (abstracted) to extract the module specifications. The module
specifications are then analyzed to produce the design. The design is analyzed (abstracted) to
produce the original requirements specification.
Maintenance Process model-2

Advantages & Disadvantages

1. Amount of rework is more.

2. Reengineering might be preferable for products which exhibit a high failure rate.

3. Reengineering might also be preferable for legacy products having poor design and code
structure.

As described under the second maintenance process model, the change requests are applied to
the recovered requirements specification to arrive at the new requirements specification, and
substantial reuse is made of the reverse engineered products at the design, module
specification, and coding stages. The approach produces a more structured design than the
original product had, produces good documentation, and often results in increased efficiency.
However, this approach is more costly than the first model.

Estimation of Approximate Maintenance Cost


It is well known that maintenance efforts consume about 60% of the total life cycle cost for a typical
software product. However, maintenance costs vary widely from one application domain to
another. For embedded systems, the maintenance cost can be as much as 2 to 4 times the
development cost.

Boehm [1981] proposed a formula for estimating maintenance costs as part of his COCOMO cost
estimation model. Boehm’s maintenance cost estimation is made in terms of a quantity called the
Annual Change Traffic (ACT). Boehm defined ACT as the fraction of a software product’s source
instructions which undergo change during a typical year either through addition or deletion.
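Boehm's estimate can be sketched as: annual maintenance effort ≈ ACT × development effort. The figures in the example below (ACT = 0.15, development effort = 200 person-months) are illustrative assumptions, not from the notes:

```python
# Boehm's maintenance cost estimate (sketch): the annual maintenance
# effort is approximated as ACT times the original development effort.

def annual_maintenance_effort(act, development_effort_pm):
    # act: fraction of source instructions changed per year (0.0 - 1.0)
    # development_effort_pm: original development effort in person-months
    return act * development_effort_pm

# If 15% of the source instructions change per year (ACT = 0.15) and the
# product took 200 person-months to develop:
print(annual_maintenance_effort(0.15, 200))  # 30.0 person-months per year
```

Over a long maintenance life, such yearly figures accumulate, which is consistent with maintenance consuming the majority of the total life cycle cost mentioned above.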

*******
