Software Engineering

SE -u5 1 of 10

Unit 5: CODING AND TESTING


 Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed.
 After all the modules of a system have been coded and unit tested, the integration and
system testing phase is undertaken.
 Integration and testing of modules is carried out according to an integration plan. The
integration plan, according to which different modules are integrated together, usually
envisages integration of modules through a number of steps.
 System testing is conducted on the full product. During system testing, the product is tested
against its requirements as recorded in the SRS document.
CODING
 The input to the coding phase is the design document produced at the end of the design
phase. Please recollect that the design document contains not only the high-level design of
the system in the form of a module structure, but also the detailed design.
 The detailed design is usually documented in the form of module specifications where the
data structures and algorithms for each module are specified.
 During the coding phase, different modules identified in the design document are coded
according to their respective module specifications.
 We can describe the overall objective of the coding phase to be the following.
 The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.
 The main advantages of adhering to a standard style of coding are the following:
 A coding standard gives a uniform appearance to the codes written by different
engineers.
 It facilitates code understanding and code reuse.
 It promotes good programming practices.
Coding Standards and Guidelines
 Good software development organisations develop their own coding standards and
guidelines depending on what suits their organisation and the specific types of software they develop.
Representative coding standards
 Rules for limiting the use of globals:
 These rules list what types of data can be declared global and what cannot, with a view
to limit the data that needs to be defined with global scope.
 Standard headers for different modules:
 The header of different modules should have standard format and information for ease of
understanding and maintenance.
 The following is an example of header format that is being used in some companies:
 Name of the module.
 Date on which the module was created.
 Author’s name.
 Modification history.
 Synopsis of the module. This is a small writeup about what the module does.
 Different functions supported in the module, along with their input/output
parameters.
 Global variables accessed/modified by the module.
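As a sketch, such a standard header could be written as a Python module docstring; the module name, dates, author, and function below are all hypothetical, chosen only to illustrate the format:

```python
"""Module       : payroll_calc  (hypothetical example module)
Created        : 2024-01-15
Author         : A. Developer
Modification history:
    2024-02-01  Fixed rounding in the tax computation.
Synopsis       : Computes net salary from gross salary and deductions.
Functions      : net_salary(gross, deductions) -> float
Globals        : TAX_RATE (read only)
"""

TAX_RATE = 0.2  # global constant accessed by this module


def net_salary(gross, deductions):
    """Return the salary remaining after tax and deductions."""
    return gross * (1 - TAX_RATE) - deductions
```

A header like this lets a maintainer see at a glance what the module does, who changed it and when, and which globals it touches, without reading the code itself.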
 Naming conventions for global variables, local variables, and constant identifiers:
 A popular naming convention is that variables are named using mixed case lettering.
 Global variable names would always start with a capital letter and local variable names
start with small letters.
 Constant names should be formed using capital letters only.
 Conventions regarding error return values and exception handling mechanisms:

 The way error conditions are reported by different functions in a program should be
standard within an organisation.
 For example, all functions, on encountering an error condition, could be required to
return either a 0 or a 1 according to a fixed convention.
 This facilitates reuse and debugging.
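A minimal sketch of such an organisation-wide convention; the function, file path, and the particular choice of return values are illustrative assumptions, not a prescribed standard:

```python
def read_config(path):
    """Return (status, data); by this convention, status is 0 on success, -1 on error."""
    try:
        with open(path) as f:
            return 0, f.read()
    except OSError:
        # Every function reports failure the same way, which makes
        # callers and debugging sessions uniform across the codebase.
        return -1, None
```

Because every function signals errors identically, a caller never has to look up how a particular routine reports failure, which is what makes reuse and debugging easier.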
 Representative coding guidelines:
 The following are some representative coding guidelines that are recommended by many
software development organisations.
 Wherever necessary, the rationale behind these guidelines is also mentioned.
 Do not use a coding style that is too clever or too difficult to understand:
 Code should be easy to understand.
 Many inexperienced engineers actually take pride in writing cryptic and
incomprehensible code.
 Clever coding can obscure meaning of the code and reduce code understandability;
thereby making maintenance and debugging difficult and expensive.
 Avoid obscure side effects:
 The side effects of a function call include modifications to the parameters passed by
reference, modification of global variables, and I/O operations.
 An obscure side effect is one that is not obvious from a casual examination of the
code.
 Obscure side effects make it difficult to understand a piece of code.
 For example, suppose the value of a global variable is changed, or some file I/O is
performed, obscurely inside a called module, in a way that is difficult to infer
from the function’s name and header information. Then it would be really hard to
understand the code.
 Do not use an identifier for multiple purposes:
 Programmers often use the same identifier to denote several temporary entities. For
example, some programmers make use of a temporary loop variable for also
computing and storing the final result.
 The rationale that they give for such multiple use of variables is memory efficiency,
e.g., three variables use up three memory locations, whereas when the same variable
is used for three different purposes, only one memory location is used. However,
there are several things wrong with this approach and hence it should be avoided.
 Some of the problems caused by the use of a variable for multiple purposes are as
follows:
 Each variable should be given a descriptive name indicating its purpose.
 This is not possible if an identifier is used for multiple purposes.
 Use of a variable for multiple purposes can lead to confusion and make it
difficult for somebody trying to read and understand the code.
 Use of variables for multiple purposes usually makes future enhancements more
difficult.
 For example, while changing the final computed result from integer to float type,
the programmer might subsequently notice that it has also been used as a
temporary loop variable that cannot be a float type.
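The problem can be sketched with a hypothetical pair of functions: in the bad version one identifier holds two different quantities over its lifetime, so no single descriptive name fits it, and changing the type of the final result would silently affect the loop as well:

```python
def average_bad(values):
    """Bad: `total` is reused both as the running sum and as the final average."""
    total = 0
    for v in values:
        total += v
    total = total / len(values)   # same identifier now holds a different quantity
    return total


def average_good(values):
    """Good: one descriptive name per purpose."""
    running_sum = 0
    for v in values:
        running_sum += v
    return running_sum / len(values)
```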
 Code should be well-documented:
 As a rule of thumb, there should be at least one comment line on the average for
every three source lines of code.
 Length of any function should not exceed 10 source lines: A lengthy function is
usually very difficult to understand as it probably has a large number of variables and
carries out many different types of computations.

 For the same reason, lengthy functions are likely to have disproportionately larger
number of bugs.
 Do not use GO TO statements: Use of GO TO statements makes a program
unstructured. This makes the program very difficult to understand, debug, and maintain.

CODE REVIEW
 Review is a very effective technique to remove defects from source code.
 Code review for a module is undertaken after the module successfully compiles.
 That is, all the syntax errors have been eliminated from the module.
 Obviously, code review does not target detection of syntax errors in a program, but is designed
to detect logical, algorithmic, and programming errors.
 Code review has been recognised as an extremely cost-effective technique for producing high
quality code.
 Normally, the following two types of reviews are carried out on the code of a module:
 Code walkthrough.
 Code inspection.
 Besides these, clean room testing, a review-intensive development practice, is also described below.
 Code walkthrough:
 Code walkthrough is an informal code analysis technique.
 A few members of the development team are given the code a couple of days before the
walkthrough meeting.
 Each member selects some test cases and simulates execution of the code by hand.
 The main objective of code walkthrough is to discover the algorithmic and logical errors
in the code.
 The members note down their findings of their walkthrough and discuss those in a
walkthrough meeting where the coder of the module is present.
 Even though code walkthrough is an informal analysis technique, several guidelines
have evolved over the years for making this naive but useful analysis technique more
effective.
 Some of these guidelines are following:
 The team performing code walkthrough should not be either too big or too small.
Ideally, it should consist of between three to seven members.
 Discussions should focus on discovery of errors and avoid deliberations on how to
fix the discovered errors.
 In order to foster co-operation and to avoid the feeling among the engineers that they
are being watched and evaluated in the code walkthrough meetings, managers should
not attend the walkthrough meetings.
 Code Inspection
 During code inspection, the code is examined for the presence of some common
programming errors.
 This is in contrast to the hand simulation of code execution carried out during code
walkthroughs.
 We can state the principal aim of the code inspection to be the following:
 Check for the presence of some common types of errors that usually creep into code
due to programmer mistakes and oversights and
 Check whether coding standards have been adhered to.
 Following is a list of some classical programming errors which can be checked during
code inspection:
 Use of uninitialised variables.
 Jumps into loops.
 Non-terminating loops.
 Incompatible assignments.

 Array indices out of bounds.


 Improper storage allocation and deallocation.
 Mismatch between actual and formal parameter in procedure calls.
 Use of incorrect logical operators or incorrect precedence among operators.
 Improper modification of loop variables.
 Comparison of equality of floating point values.
 Dangling reference caused when the referenced memory has not been allocated.
 Clean Room Testing
 Clean room testing was pioneered at IBM.
 This type of testing relies heavily on walkthroughs, inspection, and formal verification.
 The programmers are not allowed to test any of their code by executing the code other
than doing some syntax testing using a compiler.
SOFTWARE DOCUMENTATION
 When a software is developed, in addition to the executable files and the source code,
several kinds of documents such as users’ manual, software requirements specification
(SRS) document, design document, test document, installation manual, etc., are developed
as part of the software engineering process.
 Good documents are helpful in the following ways:
 They enhance the understandability of the code.
 They help the users to understand and effectively use the system.
 They help to effectively tackle the manpower turnover problem.
 They help the manager to effectively track the progress of the project.
 Classification of Software documents:
 Internal documentation: These are provided in the source code itself.
 External documentation: These are the supporting documents such as SRS document,
installation document, user manual, design document, and test document.
 Internal Documentation
 Internal documentation is the code comprehension features provided in the source code
itself.
 The important types of internal documentation are the following:
 Comments embedded in the source code.
 Use of meaningful variable names.
 Module and function headers.
 Code indentation.
 Code structuring (i.e., code decomposed into modules and functions).
 Use of enumerated types.
 Use of constant identifiers.
 Use of user-defined data types.
 External Documentation
 External documentation is provided through various types of supporting documents such
as users’ manual, software requirements specification document, design document, test
document, etc.
 A systematic software development style ensures that all these documents are of good
quality and are produced in an orderly fashion.
TESTING
 Testing a program involves executing the program with a set of test inputs and observing if
the program behaves as expected. If the program fails to behave as expected, then the input
data and the conditions under which it fails are noted for later debugging and error
correction.
 A highly simplified view of program testing is shown schematically in the figure below.

 Error detection techniques = Verification techniques + Validation techniques


 Testing Activities
 Testing involves performing the following main activities:
 Test suite design:
 The set of test cases using which a program is to be tested is designed possibly using
several test case design techniques.
 Running test cases and checking the results to detect failures:
 Each test case is run and the results are compared with the expected results.
 A mismatch between the actual result and expected results indicates a failure.
 The test cases for which the system fails are noted down for later debugging.
 Locate error:
 In this activity, the failure symptoms are analysed to locate the errors.
 For each failure observed during the previous activity, the statements that are in error
are identified.
 Error correction:
 After the error is located during debugging, the code is appropriately changed to
correct the error.
 The testing activities have been shown schematically in the following figure.

 As can be seen, the test cases are first designed and then run to detect
failures. The bugs causing the failures are identified through debugging, and the
identified errors are corrected. Of all the above-mentioned testing activities,
debugging often turns out to be the most time-consuming activity.
 Types of testing:
 Unit testing
 Black-box approach
 White-box (or glass-box) approach
 Integration testing
 System testing
 Unit testing

 During unit testing, the individual functions (or units) of a program are tested.
 Unit testing is undertaken after a module has been coded and reviewed.
 This activity is typically undertaken by the coder of the module himself in the coding
phase. Before carrying out unit testing, the unit test cases have to be designed and the
test environment for the unit under test has to be developed.
 Driver and stub modules
 In order to test a single module, we need a complete environment that provides all
that is necessary for execution of the module. Besides the module under test itself, this environment includes:
 The procedures belonging to other modules that the module under test calls.
 Non-local data structures that the module accesses.
 A procedure to call the functions of the module under test with appropriate
parameters.
 The role of stub and driver modules is pictorially shown in the figure below.

 A stub procedure is a dummy procedure that has the same I/O parameters as the
function called by the unit under test but has a highly simplified behaviour. For
example, a stub procedure may produce the expected behaviour using a simple table
look up mechanism.
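A sketch of a driver and a stub for a hypothetical unit under test; the module, rates, and interfaces are all invented for illustration:

```python
# Unit under test: computes a shipping cost using a rate-lookup routine
# that, in the real system, belongs to another (not yet integrated) module.
def shipping_cost(weight_kg, get_rate):
    return weight_kg * get_rate(weight_kg)


# Stub: same interface as the real rate-lookup function, but with a
# highly simplified behaviour based on a table look-up.
def rate_stub(weight_kg):
    flat_rates = {True: 5.0, False: 3.0}   # heavy vs. light parcels
    return flat_rates[weight_kg > 10]


# Driver: calls the unit under test with chosen parameters and checks
# the results, standing in for the callers that do not exist yet.
def driver():
    assert shipping_cost(2, rate_stub) == 6.0
    assert shipping_cost(20, rate_stub) == 100.0
    return "all unit tests passed"
```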
 BLACK-BOX TESTING
 In black-box testing, test cases are designed from an examination of the input/output
values only and no knowledge of design or code is required.
 The following are the two main approaches available to design black box test cases:
 Equivalence class partitioning
 Boundary value analysis
 Equivalence class partitioning
 In the equivalence class partitioning approach, the domain of input values to the
program under test is partitioned into a set of equivalence classes.
 The partitioning is done such that for every input data belonging to the same
equivalence class, the program behaves similarly.
 General guidelines for designing the equivalence classes:
 If the input data values to a system can be specified by a range of values, then one
valid and two invalid equivalence classes need to be defined.
 If the input data assumes values from a set of discrete members of some domain,
then one equivalence class for the valid input values and another equivalence class
for the invalid input values should be defined.
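As a sketch, consider a hypothetical unit that accepts integers in the range 1 to 100. Following the first guideline, the input domain splits into three equivalence classes: values below the range (invalid), values inside it (valid), and values above it (invalid); one representative per class suffices:

```python
def accepts(n):
    """Hypothetical unit under test: input is valid iff 1 <= n <= 100."""
    return 1 <= n <= 100


# One representative test input per equivalence class.
test_inputs = {"below_range": -5, "in_range": 50, "above_range": 250}
```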

 Boundary value analysis


 Boundary value analysis-based test suite design involves designing test cases using
the values at the boundaries of different equivalence classes.
 For example, programmers may improperly use < instead of <=, or conversely <=
for <, etc.
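Continuing the same hypothetical range example (valid inputs 1 to 100), boundary value analysis picks the values at and immediately around each boundary, which is exactly where an accidental `<` versus `<=` slip would show up:

```python
def accepts(n):
    """Hypothetical unit under test: input is valid iff 1 <= n <= 100."""
    return 1 <= n <= 100


# Test values at and just outside the boundaries of the class 1..100,
# paired with the expected verdicts.
boundary_tests = [(0, False), (1, True), (100, True), (101, False)]
```

Had the programmer mistakenly written `1 < n <= 100`, the test case for the boundary value 1 would expose the fault immediately.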
 WHITE-BOX TESTING
 White-box testing is an important type of unit testing. A large number of white-box
testing strategies exist. Each testing strategy essentially designs test cases based on
analysis of some aspect of source code and is based on some heuristic.
 Fault-based testing
 A fault-based testing strategy targets to detect certain types of faults.
 The faults that a test strategy focuses on constitute the fault model of the strategy.
An example of a fault-based strategy is mutation testing.
 Coverage-based testing
 A coverage-based testing strategy attempts to execute (or cover) certain elements of
a program. Popular examples of coverage-based testing strategies are statement
coverage, branch coverage, multiple condition coverage, and path coverage-based
testing.
 Testing criterion for coverage-based testing
 The set of specific program elements that a testing strategy targets to execute is
called the testing criterion of the strategy.
 Stronger versus weaker testing
 A white-box testing strategy is said to be stronger than another strategy, if the
stronger testing strategy covers all program elements covered by the weaker
testing strategy, and the stronger strategy additionally covers at least one program
element that is not covered by the weaker strategy.
 Statement Coverage
 The statement coverage strategy aims to design test cases so as to execute every
statement in a program at least once.
 Branch Coverage
 A test suite satisfies branch coverage, if it makes each branch condition in the
program to assume true and false values in turn. In other words, for branch
coverage each branch in the CFG representation of the program must be taken at
least once, when the test suite is executed.
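The difference between the two criteria can be sketched on a tiny illustrative function, showing why branch coverage is the stronger strategy:

```python
def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result


# The single input x = -1 executes every statement, so statement
# coverage is satisfied -- yet the false outcome of `x < 0` is never
# taken.  Branch coverage additionally requires an input such as
# x = 1, so that the branch condition assumes both truth values.
statement_suite = [-1]
branch_suite = [-1, 1]
```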
 Multiple Condition Coverage
 In the multiple condition (MC) coverage-based testing, test cases are designed to
make each component of a composite conditional expression to assume both true
and false values.
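For an illustrative composite condition such as `a > 0 and b > 0`, MC coverage needs test cases under which each component individually assumes both true and false values, which a sketch makes concrete:

```python
def both_positive(a, b):
    # Composite conditional with two components: `a > 0` and `b > 0`.
    return a > 0 and b > 0


# Across this suite, each component of the composite condition takes
# both the true and the false value.
mc_suite = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```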
 Path Coverage

A test suite achieves path coverage if it executes each linearly independent path
(or basis path) at least once. A linearly independent path can be defined in
terms of the control flow graph (CFG) of a program.
 Control flow graph (CFG)
 A control flow graph describes how the control flows through the program.

 Data Flow-based Testing


 Data flow based testing method selects test paths of a program according to the
definitions and uses of different variables in a program.
 Mutation Testing
 Mutation testing is a fault-based testing technique
 The idea behind mutation testing is to make a few arbitrary changes to a program at a
time. Each time the program is changed, it is called a mutated program and the
change effected is called a mutant.
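A hand-made sketch of the idea: one operator in an illustrative program is changed, and a good test suite should "kill" the resulting mutated program by producing a result that differs from the original's:

```python
def max_of(a, b):
    """Original program."""
    return a if a > b else b


def max_of_mutant(a, b):
    """Mutated program: the operator `>` has been changed to `<`."""
    return a if a < b else b


# A test case kills the mutant when the two versions disagree on it;
# a suite that cannot kill such mutants is considered inadequate.
killing_input = (2, 5)
```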
 INTEGRATION TESTING
 Integration testing is carried out after all (or at least some of ) the modules have been
unit tested.
 Successful completion of unit testing, to a large extent, ensures that the unit (or module)
as a whole works satisfactorily.
 In this context, the objective of integration testing is to detect the errors at the module
interfaces
 The objective of integration testing is to check whether the different modules of a
program interface with each other properly.
 During integration testing, different modules of a system are integrated in a planned
manner using an integration plan.
 Different approaches can be used to develop the test plan:
 Big-bang approach to integration testing
 Top-down approach to integration testing
 Bottom-up approach to integration testing
 Mixed (also called sandwiched ) approach to integration testing
 Big-bang approach to integration testing
 Big-bang testing is the most obvious approach to integration testing. In this
approach, all the modules making up a system are integrated in a single step. In
simple words, all the unit tested modules of the system are simply linked together
and tested.
 Bottom-up approach to integration testing

 Large software products are often made up of several subsystems. A subsystem
might consist of many modules which communicate among each other through well-
defined interfaces. In bottom-up integration testing, first the modules of each
subsystem are integrated. Thus, the subsystems can be integrated separately and
independently.
 Top-down approach to integration testing
 Top-down integration testing starts with the root module in the structure chart and
one or two subordinate modules of the root module. After the top-level ‘skeleton’ has
been tested, the modules that are at the immediately lower layer of the ‘skeleton’ are
combined with it and tested.
 Top-down integration testing approach requires the use of program stubs to simulate
the effect of lower-level routines that are called by the routines under test.
 Mixed approach to integration testing
 The mixed (also called sandwiched ) integration testing follows a combination of
top-down and bottom-up testing approaches.
 SYSTEM TESTING
 After all the units of a program have been integrated together and tested, system testing
is taken up.
 System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document.
 There are essentially three main kinds of system testing depending on who carries out
testing:
 Alpha Testing:
 Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
 Beta Testing:
 Beta testing is the system testing performed by a select group of friendly
customers.
 Acceptance Testing:
 Acceptance testing is the system testing performed by the customer to determine
whether to accept the delivery of the system.
 Smoke testing:
 Smoke testing is carried out before initiating system testing to ensure that system
testing would be meaningful.
 For smoke testing, a few test cases are designed to check whether the basic
functionalities are working.
 DEBUGGING
After a failure has been detected, it is necessary to first identify the program statement(s)
that are in error and are responsible for the failure; the error can then be fixed.
 Types of Debugging methods:
 Brute force method
 This is the most common method of debugging but is the least efficient method.
In this approach, print statements are inserted throughout the program to print the
intermediate values with the hope that some of the printed values will help to
identify the statement in error.
 Backtracking
 This is also a fairly common approach. In this approach, starting from the
statement at which an error symptom has been observed, the source code is
traced backwards until the error is discovered.
 Cause elimination method

 In this approach, once a failure is observed, the symptoms of the failure (i.e.,
certain variable is having a negative value though it should be positive, etc.) are
noted.
 Based on the failure symptoms, a list of causes which could possibly have contributed
to the symptom is developed, and tests are conducted to eliminate each.
 Program slicing
 This technique is similar to backtracking. In the backtracking approach, one
often has to examine a large number of statements. However, the search space is
reduced by defining slices. A slice of a program for a particular variable and at a
particular statement is the set of source lines preceding this statement that can
influence the value of that variable.
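A tiny illustrative sketch: for the variable `total` at the return statement below, the slice contains only the lines that can influence `total`; the lines computing `count` lie outside the slice and need not be examined while debugging a wrong `total`:

```python
def totals(values):
    total = 0            # in the slice for `total` at the return
    count = 0            # NOT in the slice for `total`
    for v in values:     # in the slice (controls the updates to `total`)
        total += v       # in the slice
        count += 1       # NOT in the slice
    return total, count
```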
 Debugging Guidelines
 Many times debugging requires a thorough understanding of the program design.
 Trying to debug based on a partial understanding of the program design may require an
inordinate amount of effort to be put into debugging even for simple problems.
 Debugging may sometimes even require a full redesign of the system. In such cases, a
common mistake that novice programmers often make is attempting to fix not the
error but its symptoms.
 One must beware of the possibility that an error correction may introduce new errors.
 Therefore after every round of error-fixing, regression testing must be carried out.
PROGRAM ANALYSIS TOOLS
 A program analysis tool usually is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting,
adherence to programming standards, adequacy of testing, etc.
 We can classify various program analysis tools into the following two broad
categories:
 Static analysis tools
 Dynamic analysis tools
 Static Analysis Tools
 Static program analysis tools assess and compute various characteristics of a program
without executing it. Typically, static analysis tools analyse the source code to compute
certain metrics characterising the source code (such as size, cyclomatic complexity, etc.)
and also report certain analytical conclusions.
 In this context, it displays the following analysis results:
 To what extent the coding standards have been adhered to.
 Whether certain programming errors, such as uninitialised variables, mismatch
between actual and formal parameters, and variables that are declared but never used,
exist.
 A list of all such errors is displayed.
 Dynamic Analysis Tools
 Dynamic program analysis tools can be used to evaluate several program characteristics
based on an analysis of the run time behaviour of a program.
 These tools usually record and analyse the actual behaviour of a program while it is
being executed.
 A dynamic program analysis tool (also called a dynamic analyser ) usually collects
execution trace information by instrumenting the code.
