Unit 4
CODING STANDARDS
1. Rules for limiting the use of globals: These rules list what types of data can be
declared global and what cannot.
2. Contents of the standard headers preceding the code for different modules: The
header of each module should carry standard information such as:
• Author’s name.
• Modification history.
3. Naming conventions for global variables, local variables, and constant identifiers.
4. Error return conventions and exception handling mechanisms: The way error
conditions are reported by the different functions in a program should be standard
within an organization. For example, all functions, on encountering an error
condition, should consistently return either a 0 or a 1.
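As an illustration, the following is a minimal C sketch of such a convention, assuming an organization-wide rule that every function returns 0 on success and -1 on failure (the function names and the 0/-1 choice are illustrative, not mandated by any particular standard):

#include <stdio.h>

/* Convention assumed here: 0 on success, -1 on failure, everywhere. */
int read_config(const char *path) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;      /* failure reported the same way in every function */
    /* ... process the file ... */
    fclose(fp);
    return 0;           /* success reported the same way in every function */
}

int write_log(const char *msg) {
    if (msg == NULL)
        return -1;      /* same convention as read_config */
    /* ... append msg to the log file ... */
    return 0;
}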
CODE REVIEW
Code review for a module (that is, a unit) is undertaken after the module successfully
compiles, that is, after all the syntax errors have been eliminated from it.
Code review is therefore not intended to detect syntax errors in a program, but is
designed to detect logical, algorithmic, and programming errors. Code review has
been recognised as an extremely cost-effective strategy for eliminating coding errors
and for producing high quality code. The two main types of code review are:
1. Code Walkthroughs
2. Code Inspection
1. CODE WALK THROUGHS
Code walkthrough is an informal code analysis technique. In this technique, after a
module has been coded and successfully compiled (i.e., all syntax errors eliminated),
a few members of the development team are given the code a few days before the
walkthrough meeting to read and understand it. Each member selects some test cases
and simulates execution of the code by hand (i.e., traces execution through each
statement and function call). The main objective of the walkthrough is to
discover the algorithmic and logical errors in the code. The members note down their
findings and discuss these in a walkthrough meeting where the coder of the module
is present.
Even though a code walk through is an informal analysis technique, several
guidelines have evolved over the years for making this naïve but useful analysis
technique more effective. Of course, these guidelines are based on personal
experience, common sense, and several subjective factors. Therefore, these
guidelines should be considered as examples rather than accepted as rules to be
applied dogmatically.
• The team performing the code walkthrough should be neither too big nor too small.
Ideally, it should consist of between three and seven members.
• Discussion should focus on discovery of errors and not on how to fix the discovered
errors.
• In order to foster cooperation and to avoid the feeling among engineers that they
are being evaluated in the code walk through meeting, managers should not attend
the walk through meetings.
2. CODE INSPECTION
The aim of code inspection is to discover some common types of errors caused by
oversight and improper programming. In addition, adherence to coding standards
is also checked during code inspection. Good software development companies
collect statistics to identify the types of errors most frequently committed by their
engineers. This list of commonly committed errors can then be used as a checklist
during code inspection to look out for possible errors.
The following are some classical programming errors that are checked during code
inspection:
• Use of uninitialised variables.
• Array indices out of bounds.
• Non-terminating loops.
• Mismatch between actual and formal parameters in procedure calls.
• Improper storage allocation and deallocation.
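As a hedged illustration (the function and array names are made up for this example), here is a small C fragment that deliberately contains two of these classical errors:

int total_of(const int a[10]) {
    int total;                  /* classical error: total is never initialised */
    int i;
    for (i = 0; i <= 10; i++)   /* classical error: index out of bounds; valid indices are 0..9 */
        total += a[i];
    return total;
}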
TESTING
The aim of the testing process is to identify all defects present in a software product. Testing
thus provides a practical way of reducing defects in a software product and increasing the users’
confidence in the developed software.
In a testing process, a set of test inputs (or test cases) is applied to the program and its
behaviour is observed. If the program fails to behave as expected, then the
conditions under which the failure occurs are noted for later debugging and correction.
Terminologies: In the following, we discuss a few important terminologies that have been
standardised by the IEEE Standard Glossary of Software Engineering Terminology [IEEE, 1990]:
• Failure: A failure is a manifestation of an error (or defect or bug). However, the mere
presence of an error may not necessarily lead to a failure.
• Test Case: This is the triplet [I, S, O], where I is the data input to the software, S is the state
of the software at which the data is input, and O is the expected output of the software.
• Test Suite: It is the set of all test cases with which a given software product is to be tested.
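For example, for a hypothetical function that returns the larger of two input integers, one test case would be the triplet I = (x = 3, y = 5), S = the initial state of the program, O = 5.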
A mistake is essentially any programmer action that later shows up as an incorrect result during
program execution. A programmer may commit a mistake in almost any of the development
activities.
An error is the result of a mistake committed by a developer in any of the development activities.
Mistakes can give rise to an extremely large variety of errors.
A failure of a program essentially denotes an incorrect behaviour exhibited by the program during
its execution. An incorrect behaviour is observed either as production of an incorrect result or as an
inappropriate activity carried out by the program.
A test scenario is an abstract test case in the sense that it only identifies the aspects of the program
that are to be tested without identifying the input, state, or output.
A test script is an encoding of a test case as a short program. Test scripts are developed for
automated execution of the test cases.
Testing Activities
Testing involves performing the following major activities:
Test suite design: The test suite is designed possibly using several test case design techniques. We
discuss a few important test case design techniques later in this Chapter.
Running test cases and checking the results to detect failures:
Each test case is run and the results are compared with the expected results. A mismatch between the
actual and expected results indicates a failure. The test cases for which the system fails are noted
down for later debugging.
Locate error:
In this activity, the failure symptoms are analysed to locate the errors. For each failure observed during
the previous activity, the statements that are in error are identified.
Error correction:
After the error is located during debugging, the code is appropriately changed to correct the error.
SOFTWARE TESTING STRATEGY / LEVELS OF SOFTWARE TESTING
STRATEGY means “an elaborate and systematic plan of action”.
1. A software testing strategy provides a road map that describes the steps to be followed
as part of testing.
2. When these steps are planned and then undertaken, they indicate how much effort, time, and
resources will be required.
3. So any testing strategy must contain test planning, test case design, test execution, and
resultant data collection and evaluation. The generic characteristics of a testing strategy are:
1. Testing begins at the component level (module) and works "outward" toward the
integration of the entire computer-based system.
2. Different testing techniques are appropriate at different points in time.
3. Testing is conducted by the developer of the software and (for large projects) an
Independent Test group (ITG).
4. Testing and Debugging are different activities, but Debugging must be accommodated
in any testing Strategy.
WHAT IS THE OVERALL STRATEGY FOR SOFTWARE TESTING?
We can divide testing strategy into a series of 4 steps that are implemented sequentially:-
1. Unit Testing
2. Integration Testing
3. Validation Testing
4. System Testing
1. Initially, tests focus on each component (module) individually, ensuring that it functions
properly as a unit.
2. Unit testing makes heavy use of white-box testing techniques.
3. After all the components (modules) have been tested individually, they are integrated
incrementally and tested at each level of integration.
4. Integration testing thus focuses on verification and program construction. Here both
black-box and white-box testing techniques are applied.
5. After integration, validation testing is conducted.
6. Validation testing provides final assurance that the software meets all functional, behavioural,
and performance requirements.
7. Black-box testing techniques are used exclusively during validation testing.
8. Finally, the fully integrated system is tested as a whole; this is called system testing.
9. Here the software must be combined with other system elements (e.g., hardware, people,
databases).
10. System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
1. UNIT TESTING
1. Unit testing applies to the smallest unit of software design, called a module or component.
2. Here, with the help of the detailed design description, important control paths are tested to
uncover errors within the boundary of the module.
3. Unit testing is white-box oriented, and the step can be conducted in parallel for
multiple modules.
4. The tests that occur as part of unit testing cover:
1. Interface
2. Local data structures
3. Boundary conditions
4. Independent paths
5. Error-handling paths
1. In order to test a single module, we need a complete environment that provides all that is
necessary for execution of the module.
2. As a module is not a stand-alone program, driver and stub software must be
developed for each unit test.
3. A driver is nothing more than a main program that accepts test case data, passes such
data to the module to be tested, and prints the relevant results.
4. Stubs serve to replace modules that are subordinate to (called by) the component to be
tested. A sketch of both is given below.
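The following is a minimal C sketch of a driver and a stub for unit testing a hypothetical compute_interest() module (all names and values are illustrative assumptions, not part of any standard):

#include <stdio.h>

/* Stub: stands in for a subordinate module that would fetch the
 * current rate from a database, which is not yet integrated. */
double get_current_rate(void) {
    return 5.0;   /* canned value instead of a real database call */
}

/* Module under test (calls the subordinate module). */
double compute_interest(double principal, int years) {
    return principal * get_current_rate() * years / 100.0;
}

/* Driver: a main program that feeds test case data to the module
 * under test and prints the relevant results. */
int main(void) {
    printf("interest(1000, 2) = %.2f (expected 100.00)\n",
           compute_interest(1000.0, 2));
    printf("interest(0, 5)    = %.2f (expected 0.00)\n",
           compute_interest(0.0, 5));
    return 0;
}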
2. INTEGRATION TESTING
1. The primary objective of integration testing is to test the module interfaces in order
to ensure that there are no errors in parameter passing when one module calls another
module.
2. The integration plan specifies the steps and the order in which modules are combined to
realize the full software. After each integration step, the partially integrated software is tested.
The important approaches to integration testing are:
1. Big Bang Integration
2. Incremental Integration: (i) Top-down Integration (ii) Bottom-up Integration
3. Regression Testing
4. Mixed Mode (Sandwich) Integration
5. Smoke Testing
1. Big Bang Approach: It is the simplest integration testing approach, where all the modules
making up the software are integrated in a single step, that is, all the modules are simply put
together and tested. This technique is useful only for small software. The problem is that once
an error is found during integration testing, it is very difficult to localize it, as the error may
belong to any of the modules being integrated.
2. Incremental Integration Approach: The program is constructed and tested in small
increments, where errors are easier to isolate and correct, and interfaces are more likely to
be tested completely. It is of the following types:
A] Top Down Integration
1. Modules are integrated by moving downward through the control hierarchy, beginning
with the main control module (main program). Modules subordinate (and ultimately
subordinate) to the main control module are inserted into the structure in either a depth-first
or breadth-first manner.
2. Depth-first Integration would integrate all modules on a major control path of the
structure. Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left hand path, components M1, M2 , M5 would be
integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated. Then, the central and right hand control paths are built. (Figure Given Below)
1. The main control module is used as a test driver, and stubs are substituted for all modules
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth- or breadth-first), subordinate
stubs are replaced one at a time with the actual modules.
B] Bottom Up Integration
Bottom-up integration testing, begins construction and testing with atomic modules (i.e.,
modules at the lowest levels in the program structure).
Because modules are integrated from the bottom up, processing required for modules
subordinate to a given level is always available and the need for stubs is eliminated.
Steps:
1. Low-level modules are combined into clusters (sometimes called builds) that perform a
specific software sub-function.
2. A driver (a control program) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
5. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block).
6. Modules in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed
and the clusters are interfaced directly to Ma.
7. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both
Ma and Mb will ultimately be integrated with modules Mc, and so forth.
8. An advantage of bottom-up integration is that it eliminates the need for complex stubs.
[3]Regression Testing:-
1. Each time a new module is added as part of integration testing, the software changes.
New data flow paths are established, new I/O may occur, and new control logic is
invoked.
2. These changes may cause problems with functions that previously worked smoothly.
3. In an integration test strategy, regression testing is the re-execution of some subset of tests
that have already been conducted, to ensure that the changes have not propagated
unintended side effects.
4. The regression test suite contains three different classes of test cases:-
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the
change.
• Tests that focus on the software components that have been changed.
[4] Mixed Mode Integration (Sandwich Testing)
The mixed (sandwich) testing approach combines top-down and bottom-up integration, so
testing can start as and when modules become available. Hence this is the most commonly
used integration testing approach.
3. SYSTEM TESTING
1. System tests are designed to validate the fully developed software, to assure that it meets
all its requirements. There are three main kinds of system testing:
1. Alpha Testing: It is carried out by the test team within the developing organization.
2. Beta Testing: It is carried out by a select group of friendly customers.
3. Acceptance Testing: It is carried out by the customer to determine whether the system
should be accepted.
The system test cases themselves are of two kinds:
1. Functionality Test
2. Performance Test
1. Functionality Test: It tests the functionality of the software, checking whether it
satisfies the functional requirements as documented in the SRS document. The
functionality test cases are designed using a black-box approach.
2. Performance Test: It tests the conformance of the system with the non-functional
requirements of the system. The main types of performance tests are as follows:
Stress Testing:-
1. It is also called endurance testing.
2. It evaluates system performance when it is stressed for short periods of time.
3. Stress tests are black-box tests which are designed to impose a range of abnormal and
even illegal input conditions, so as to stress the capabilities of the software.
4. Input data volume, input data rate, processing time, and utilization of memory are
tested beyond the designed capacity.
E.g., suppose an operating system is supposed to support 15 multiprogrammed jobs;
the system is then stressed by attempting to run 15 or more jobs simultaneously.
Volume Testing:- It is important to check whether the data structures (arrays, queues,
stacks, etc.) have been designed to successfully handle extraordinary situations.
Compatibility Testing:- This testing is required when the system interfaces with other
types of systems, e.g., when the system communicates with a large database system to
retrieve information. Compatibility testing is then required to test the speed and
accuracy of the data retrieval.
Regression Testing:- This testing is required when the system being tested is an
upgradation of an already existing system, carried out to fix some bugs or enhance
functionality, performance, etc.
Recovery Testing:- This testing tests the response of the system to the presence of
faults or to the loss of power, devices, services, data, etc. The system is subjected to the
loss of the mentioned resources in order to check whether it recovers satisfactorily.
E.g., printers can be disconnected to check if the system hangs, or the power may be
shut down to check the extent of data loss and corruption.
Maintenance Testing:- This testing addresses the diagnostic programs and other
procedures that are required to be developed to help implement the maintenance of
the system.
Usability Testing:- This testing checks the user interface to see whether it meets all the
user requirements. The display screens, messages, and report formats are tested
during usability testing.
Security Testing:- Security Testing attempts to verify that protection mechanisms built
into a system will protect it from improper entry.
TEST CASE DESIGN TECHNIQUES
Once source code has been generated, software must be tested to detect (and correct) as many
errors as possible before delivery to the customer.
Your goal is to design a series of test cases that have a high likelihood of finding errors—
but how?
That’s where software testing techniques enter the picture. These techniques provide
systematic guidance for designing tests in which:
(1) the internal program logic is exercised, using “white box” test case design techniques, and
(2) the software requirements are exercised, using “black box” test case design techniques.
During the early stages of testing, a software engineer performs all tests. However, as the
testing process progresses, testing specialists may become involved.
In both cases, the purpose is to find the maximum number of errors with the minimum
amount of effort and time.
Consider the following faulty code (the else branch should assign y, not x):
if (x > y)
    max = x;
else
    max = x;    /* defect: should be max = y; */
For the above code, the test suite {(x=3, y=2), (x=2, y=3)} can detect the error, whereas the
larger test suite {(x=3, y=2), (x=4, y=3), (x=5, y=1)} does not detect it, since x > y in every
case. So a larger test suite does not necessarily detect more errors unless it is carefully
designed. A systematic approach should be followed to design an optimal test suite.
Many test case design methods have been developed to provide a systematic approach to testing.
WHITE BOX TESTING OR STRUCTURAL TESTING
Using white-box testing methods, the software engineer can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once,
(2) exercise all logical decisions on their true and false sides, (3) execute all loops at their
boundaries, and (4) exercise internal data structures to ensure their validity. The important
white-box test case design strategies are:
1. Statement Coverage
2. Path Coverage:
a. Control Flow Graph (CFG)
b. Path
c. Linearly Independent Path
d. Cyclomatic Complexity
3. Branch Coverage
4. Control Structure Testing:
a. Condition Testing
b. Data Flow Testing
5. Loop Testing:
a. Simple
b. Nested
c. Concatenated
d. Unstructured
[1] Statement Coverage
The statement coverage strategy designs test cases in such a way that every statement in the
program is executed at least once. For example, for the code below, the test suite
{(x=4, y=3), (x=3, y=4), (x=3, y=3)} executes every statement at least once:
if (x > y)
    x = x + 5;
else
    y = y + 5;
[2] Path Coverage
Path coverage is a white-box testing technique proposed by McCabe. Here, test cases are
designed so that every linearly independent path in the program is executed at least once.
A. Control Flow Graph (CFG)
A control flow graph describes the sequence in which the different instructions of a program
get executed. Three example fragments (sequence, selection, and iteration), with numbered
statements that form the CFG nodes, are shown below:
Sequence:
1. a = 5;
2. b = a * 2 - 1;

Selection:
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c * c;

Iteration:
1. while (a > b) {
2.     b = b - 1;
3.     b = b * a; }
4. c = a + b;
B. Path
A path is a collection of node and edge sequences from the starting node to a terminal node
of the control flow graph of a program.
C. Linearly Independent Path
1. A linearly independent path is any path through the program that introduces at least one
new edge that is not included in any other linearly independent path.
2. So any path having a new node automatically implies that it has a new edge.
3. Identifying the linearly independent paths of a simple program is easy, but for a
complicated program it is difficult.
D. Cyclomatic Complexity
McCabe’s cyclomatic complexity metric V(G) gives an upper bound on the maximum number
of linearly independent paths in a program. It tells us how many such paths there are, but
does not identify them. It can be computed in any of the following three ways.
Method 1
V(G) = E – N + 2
Where N is the number of nodes of the control flow graph and E is the number of edges in
the control flow graph.
For the CFG of the selection example above, E = 4 and N = 4, so
V(G) = 4 - 4 + 2
= 0 + 2 = 2
Method 2
V(G) = Total number of bounded areas in the CFG + 1
For the selection example, the CFG has one bounded area, so V(G) = 1 + 1 = 2.
Method 3
The cyclomatic complexity of a program can also be easily computed from the
number of decision statements in the program. If N is the number of decision statements of a
program, then McCabe’s metric is equal to N + 1.
The selection example contains one decision statement, so V(G) = 1 + 1 = 2.
The following is the sequence of steps that need to be undertaken for deriving path coverage
based test cases for a program:
1. Draw the CFG (control flow graph).
2. Determine V(G), i.e., the maximum number of linearly independent paths.
3. Determine all the linearly independent paths and describe them.
4. Prepare test cases that exercise each path; a worked illustration follows.
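As a worked illustration, applying these steps to the selection example from the CFG discussion above (the test values are illustrative):
1. CFG: nodes 1 (if), 2 (c = 3), 3 (else c = 5), 4 (c = c * c); edges 1→2, 1→3, 2→4, 3→4.
2. V(G) = E - N + 2 = 4 - 4 + 2 = 2, so there are at most 2 linearly independent paths.
3. The paths are P1: 1→2→4 and P2: 1→3→4.
4. Test cases: (a = 4, b = 3) exercises P1, and (a = 3, b = 4) exercises P2.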
[3]Branch Coverage/ decision coverage (DC)
It is also known as Edge Testing. Here each edge of a program’s control flow graph is
traversed at least once.
Example: For the code below, branch coverage requires traversing both the true and the false
edge of the decision, e.g., with the test suite {(a=4, b=3), (a=3, b=4)}:
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c * c;
[4] Control Structure Testing
a. Condition Testing
Condition testing is a test case design method that exercises the logical conditions contained
in a program module. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational expression takes the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<, ≤, =, ≠ (non equality), >, or ≥. The condition testing method focuses on testing each condition
in the program.
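As a brief hedged illustration (the values are made up for this example), consider the compound condition C = (a > b) && (c < d). Condition testing requires test cases in which each simple condition takes both its true and its false value: for example, (a=2, b=1, c=1, d=2) makes both a > b and c < d true, while (a=1, b=2, c=2, d=1) makes both false.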
b. Data Flow Testing
The data flow based testing method selects test paths of a program according to the locations
of the definitions and uses of the different variables in the program.
The data flow testing approach assumes that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables.
For a statement with S as its statement number:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
For example, for the statement "a = b + c;", DEF = {a} and USE = {b, c}.
If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the
condition of statement S. The definition of variable X at statement S is said to be live at
statement S' if there exists a path from statement S to statement S' that contains no other
definition of X.
[5] Loop Testing
Loop testing is a white-box testing technique that focuses exclusively on the validity of loop
constructs. Four different classes of loops can be defined:
simple loops, concatenated loops, nested loops, and unstructured loops.
Simple Loops:- The following set of tests can be applied to simple loops, where n is the
maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n - 1, n, and n + 1 passes through the loop.
A sketch follows.
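A hedged C sketch of a simple loop and the corresponding test values (the function and the limit n = 100 are assumptions for illustration):

/* Simple loop under test: sums k items; suppose the design allows
 * at most n = 100 passes through the loop. */
int process_items(const int *items, int k) {
    int total = 0;
    int i;
    for (i = 0; i < k; i++)     /* the loop being exercised */
        total += items[i];
    return total;
}
/* Loop tests would call process_items() with k = 0 (skip the loop),
 * k = 1, k = 2, a typical m < 100, and k = 99, 100, 101. */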
Nested Loops: - A nested loop is a collection of simple loops, one defined within another. The
following approach is useful for testing nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or
excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.
Concatenated loops. Concatenated loops can be tested using the approach defined for simple
loops, if each of the loops is independent of the others. However, if two loops are concatenated
and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not
independent. When the loops are not independent, the approach applied to nested loops is
recommended.
Unstructured loops. Whenever possible, this class of loops should be redesigned to reflect the
use of the structured programming constructs.
BLACK BOX TESTING OR FUNCTIONAL OR BEHAVIORAL TESTING
1. It is also called behavioural testing. In BBT, test cases are designed from an
examination of the input/output values only; no knowledge of the design or
code is required.
2. It focuses on the functional requirements of the software.
3. BBT test cases are designed using the following two main approaches:
[1] Equivalence Class Partitioning
The input domain of the program is partitioned into equivalence classes, such that the
program behaves in a similar way for every input value belonging to the same class.
Equivalence classes are identified using the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4. If an input condition is Boolean, then one valid and one invalid class are defined.
E.g., for software that computes the square root of an input integer which can assume values
0 to 5000, there are three equivalence classes: (1) the set of negative integers (invalid), (2) the
set of integers in the range 0 to 5000 (valid), and (3) the set of integers larger than 5000
(invalid). A test suite picking one representative from each class is {-5, 500, 6000}.
[2] Boundary Value Analysis
Boundary value analysis selects test cases at the boundaries of the different equivalence
classes, since programmers often fail to properly handle these special values. Guidelines:
1. If an input condition specifies a range bounded by values a and b, test cases should
be designed with values a and b, and with values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed
that exercise the minimum and maximum numbers; values just above and below the
minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be certain to design a
test case to exercise the data structure at its boundary.
E.g., for a function that computes the square root of integer values in the range 0
to 5000, the test cases must include the values {0, -1, 5000, 5001}.
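The following is a minimal C sketch of such boundary value test cases, assuming a hypothetical function int_sqrt() that returns the integer (floor) square root for inputs in [0, 5000] and -1 for out-of-range inputs (the name, the -1 error value, and the floor semantics are all assumptions for illustration):

#include <assert.h>

int int_sqrt(int x);   /* hypothetical function under test */

void boundary_value_tests(void) {
    assert(int_sqrt(-1)   == -1);   /* just below the lower boundary */
    assert(int_sqrt(0)    ==  0);   /* lower boundary */
    assert(int_sqrt(5000) == 70);   /* upper boundary: 70*70 = 4900 <= 5000 < 71*71 */
    assert(int_sqrt(5001) == -1);   /* just above the upper boundary */
}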
DEBUGGING
1. Debugging is the process that results in the removal of errors. Once errors are identified,
it is necessary first to locate the program statements responsible for the errors and then to
fix them. Debugging is not testing, but always occurs as a consequence of testing. The
debugging process attempts to match symptoms with causes, thereby leading to error
correction.
2. The debugging process always has two outcomes:-
1. The cause will be found and corrected.
2. The cause will not be found.
In the second case, the software engineer suspects a cause, designs a test case to help validate
that suspicion, and works toward error correction in an iterative fashion.
Debugging is difficult because of the following characteristics of bugs:
1. The symptom may appear in one part of a program, while the cause is actually located at a
site that is far removed from it.
2. The symptoms may disappear temporarily when another error is corrected.
3. The symptoms may actually be caused by nonerrors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application
in which input ordering is indeterminate).
7. The symptom may be due to causes that are distributed across a number of tasks running
on different processors.
[ART OF DEBUGGING]
DEBUGGING APPROACHES
There are many important approaches that are available to identify error locations. Each
will be useful in appropriate circumstances.
1. Brute Force Approach:- It is the most common and least efficient method for isolating
the cause of a software error. In this approach, the program is loaded with print statements
to print the intermediate values, with the hope that some of the printed values will help to
identify the statement in error, as sketched below.
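For example, a hedged sketch of brute-force instrumentation in C (the function and the debug messages are illustrative):

#include <stdio.h>

int average(const int *a, int n) {
    int sum = 0;
    int i;
    for (i = 0; i < n; i++) {
        sum += a[i];
        /* brute-force debugging: print every intermediate value */
        printf("DEBUG: i=%d a[i]=%d sum=%d\n", i, a[i], sum);
    }
    printf("DEBUG: before divide, sum=%d n=%d\n", sum, n);
    return sum / n;     /* printed values would expose, e.g., a call with n == 0 */
}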
2. Backtracking:- In this approach, beginning from the statement at which an error symptom
has been observed, the source code is traced backwards until the error is discovered.
3. Cause Elimination Method:- In this approach, a list of causes which could possibly have
contributed to the error symptom is developed, and tests are conducted to eliminate each
cause.
DEBUGGING GUIDELINES
Debugging is often carried out by programmers based on their power of imagination. The
following are some general guidelines for effective debugging:
• Many a time, debugging requires a thorough understanding of the program design.
• Debugging may sometimes even require a full redesign of the system.
• One must be aware of the possibility that an error correction may itself introduce new errors.
SOFTWARE MAINTENANCE
Software maintenance denotes any changes made to a software product after it has been
delivered to the customer. Software products need maintenance to correct errors, enhance
features, port to new platforms, etc. The three types of maintenance are:
• Corrective: A software product needs corrective maintenance to rectify bugs observed while
the system is in use.
• Adaptive: A software product might need maintenance when the customers need the product
to run on new platforms, on new operating systems, or when they need the product to interface
with new hardware or software.
• Perfective: A software product needs maintenance to support the new features that users
want it to support, to change different functionalities of the system according to customer
demands, or to enhance the performance of the system.
After the cosmetic changes have been completed on a legacy software product, the process of
extracting the code, design, and requirements specification can begin. In order to extract the
design, a full understanding of the code is needed. Some automatic tools can be used to derive
the data flow and control flow diagrams from the code. The structure chart (module call
sequence and data interchange among modules) should also be extracted. The SRS document
can be written once the full code has been thoroughly understood and the design extracted.
SOFTWARE MAINTENANCE PROCESS MODELS
First model
The first model is preferred for projects involving small reworks where the code is changed directly
and the changes are reflected in the relevant documents later. This maintenance process is graphically
presented in Figure 13.3. In this approach, the project starts by gathering the requirements for
changes. The requirements are next analysed to formulate the strategies to be adopted for code
change. At this stage, the association of at least a few members of the original development team
goes a long way in reducing the cycle time, especially for projects involving unstructured and
inadequately documented code. The availability of a working old system to the maintenance engineers
at the maintenance site greatly facilitates the task of the maintenance team as they get a good insight
into the working of the old system and also can compare the working of their modified system with
the old system. Also, debugging of the re-engineered system becomes easier as the program traces of
both the systems can be compared to localise the bugs.
Second model
The second model is preferred for projects where the amount of rework required is significant. This
approach can be represented by a reverse engineering cycle followed by a forward engineering cycle.
Such an approach is also known as software re-engineering. This process model is depicted in Figure
13.4. The reverse engineering cycle is required for legacy products. During the reverse engineering,
the old code is analysed (abstracted) to extract the module specifications. The module specifications
are then analysed to produce the design. The design is analysed (abstracted) to produce the original
requirements specification. The change requests are then applied to this requirements specification
to arrive at the new requirements specification. At this point a forward engineering is carried out to
produce the new code. At the design, module specification, and coding a substantial reuse is made
from the reverse engineered products. An important advantage of this approach is that it produces a
more structured design compared to what the original product had, produces good documentation,
and very often results in increased efficiency. The efficiency improvements are brought about by a
more efficient design.
SOFTWARE RE-ENGINEERING
Software re-engineering is the reconstruction of software during the maintenance phase
through a reverse engineering cycle followed by a forward engineering cycle; it is used when
the amount of rework required is significant.
The reverse engineering cycle is required for legacy products. During the reverse engineering,
the old code is analyzed (abstracted) to extract the module specifications. The module
specifications are then analyzed to produce the design. The design is analyzed (abstracted) to
produce the original requirements specification.
(Figure: Maintenance process model 2)
• Re-engineering might be preferable for products which exhibit a high failure rate.
• Re-engineering might also be preferable for legacy products having poor design and code
structure.
As described under the second maintenance model above, the change requests are applied to
the recovered requirements specification to arrive at the new requirements specification, and
substantial reuse of the reverse engineered products is made at the design, module
specification, and coding stages. However, this approach is more costly than the first model.
Boehm [1981] proposed a formula for estimating maintenance costs as part of his COCOMO cost
estimation model. Boehm’s maintenance cost estimation is made in terms of a quantity called the
Annual Change Traffic (ACT). Boehm defined ACT as the fraction of a software product’s source
instructions which undergo change during a typical year either through addition or deletion.
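A commonly cited form of this estimate (presented here as a sketch; the exact constants and adjustment factors belong to the full COCOMO model) is:
ACT = (KLOC added + KLOC deleted) / KLOC total
Annual maintenance effort = ACT x original development effort
For example, if 20 KLOC of a 100 KLOC product are changed in a typical year (ACT = 0.2) and the original development took 50 person-months, the estimated annual maintenance effort is 0.2 x 50 = 10 person-months.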
*******