
SOFTWARE ENGINEERING

(KDS 063)

Lecture Notes

UNIT IV – SOFTWARE TESTING

By
Ms. Pallavi Shukla
U.C.E.R.
SOFTWARE TESTING –
It is the process of executing a program with the intention of finding errors in the code.

It is the process of exercising or evaluating a system or system component, by manual or automatic means, to verify that it satisfies specified requirements or to identify differences between expected and actual results.

The objective of testing is to uncover errors; testing is considered to succeed when an error is detected.

TERMS USED IN TESTING :


Error – It refers to the discrepancy between a computed, observed or measured value and the true, specified or theoretically correct value, i.e. error refers to the difference between the actual output of the software and the correct output.

Error is also used to refer to human action that results in software containing a defect or fault.

Fault – It is a condition that causes a system to fail in performing its required function.

A fault is the basic reason for software malfunction and is synonymous with the commonly
used term Bug.

Failure – It is the inability of a system or component to perform a required function according to its specification.

A software failure occurs if the behavior of the software differs from the specified behavior.

A failure may be caused by functional or performance reasons.

Tester – A person whose job is to find fault in the product is called Tester.

Testware – A work product produced by software test engineers or testers, consisting of checklists, test plans, test cases, test reports, test procedures, etc.

Test Case – A test case is a set of inputs and expected outputs for a program under test.

Debugging – It is a systematic process of analyzing program text in order to find and fix bugs in the program.
TEST ORACLES:
To test any program, we need to have a description of its expected behavior and a method of determining whether the observed behavior conforms to the expected behavior. For this we need a Test Oracle.

A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases.

We can consider testing as a process in which the test cases are given to both the test oracle and the program under test; the outputs of the two are then compared to determine whether the program behaved correctly for the test cases.

Test oracles are necessary for testing.

To help the oracle determine the correct behavior, it is important that the behavior of the system or component be unambiguously specified and that the specification itself be error free.

Software Testing vs Debugging -


Software testing is the process of executing software in a controlled manner, in order to
answer the question:

• Does the software behave as specified?

• Software testing is used in association with verification and validation.

• Debugging is the process of analyzing and locating bugs when software does not
behave as expected.

• Debugging is an activity that supports testing, but cannot replace testing.

TESTING AND THE SOFTWARE LIFECYCLE –


• Testing should be thought of as an integral part of the software process and an activity
that must be carried out throughout the life cycle.
• Testing and fixing can be done at any stage in the life cycle. However , the cost of
finding and fixing errors increases dramatically as development progresses.

• Different types of testing are required during the several phases of the software life cycle:

• Requirements – Requirements must be reviewed with the clients; rapid prototyping can refine requirements and accommodate changing requirements.

• Specifications – The specifications document must be checked for feasibility, traceability, completeness, and absence of contradictions and ambiguities.

• Specification reviews are especially effective.

• Design – Design reviews are more technical.

• Design must be checked for logic faults, interface faults, lack of exception handling and nonconformance to the specification.

• Implementation – Code modules are informally tested by the programmers while they are being implemented. Formal testing can include non-execution-based and execution-based methods (Black Box Testing and White Box Testing).

• Integration – Integration testing is performed to ensure that the modules combine correctly to achieve a product that meets its specifications.

• An appropriate order of combination must be determined, such as top-down or bottom-up.

• Product Testing –

• Functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specifications document. The product is also tested for robustness.

• Acceptance Testing – The software is delivered to the client, who tests the software on the actual hardware, using actual data instead of test data.

• A product cannot be considered to satisfy its specification until it has passed an


acceptance test.

• Maintenance – Modified versions of the original product must be tested to ensure that
changes have been correctly implemented.

• Software Process Management – The SPM plan must also undergo scrutiny throughout the life cycle.
Objectives of Software Testing –
Software testing is usually performed for the following objectives :

• Software Quality Improvement

• Verification and Validation

• Software reliability Estimation

Software Quality Improvement –


• Software Quality means conformance to the specified software design requirements.
• Debugging is performed heavily by the programmer to find design defects.
• Finding problems and getting them fixed is the purpose of debugging during the programming phase.

Verification and Validation -


• Testing can serve as a metric. It is heavily used as a tool in the V&V process.
• Software Quality cannot be tested directly but the related factors to make Quality visible
can be tested.
• Quality has 3 sets of factors
a) Functionality
b) Engineering and
c) Adaptability
• Tests with the purpose of validating the product works are named clean test, or positive
test.
• The drawbacks are that it can only validate that the software works for the specified test
cases.
• A testable design is a design that can be easily validated, falsified and maintained.
• Because testing is a rigorous effort and requires significant time and cost, design for
testability is also an important design rule for software development.

Software Reliability Estimation –


• Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to.

• The objective of testing is to discover the residual design errors before delivery to the
customer.

• The failure data during the testing process are recorded in order to estimate the software reliability.
Bug Characteristics and Bug Types –
Characteristics of Software Bugs –

a) The symptom & the cause of a bug may exist geographically remote from each other.

b) The symptom may be caused by human error that is not easily traced.

c) The symptom may be a result of timing problems, rather than processing problems.

d) The symptom may disappear when another error is corrected.

e) The symptom may actually be caused by non-errors.

f) It may be difficult to accurately reproduce input conditions

g) The symptom may be due to causes that are distributed across a number of tasks
running on different processors.

TYPES OF ERRORS-
1) Syntax Errors – They are produced by writing wrong syntax. These are generally
caught by compiler.
2) Logic/ Algorithm Errors- These errors occur due to
a) Branching too soon
b) Branching too late
c) Testing the wrong condition
d) Initialization errors
e) Forgetting to test for a particular condition
f) Data type mismatch
g) Incorrect formula or computation
3) Documentation Errors –
• These occur due to mismatch between documentation and code.
• These errors lead to difficulties especially during maintenance
4) Capacity Errors – These errors are due to system performance degradation at capacity.
5) Timing / Coordination Errors –
• These errors are mainly found in real time systems.
• These errors deal with process coordination and are very difficult to find and correct.
6) Computation and Precision Error –
• These errors are caused by rounding and truncation issues while dealing with real numbers and conversions.
7) Stress / Overload Errors – These errors are caused when user or device capacities are exceeded.
8) Throughput / Performance Errors – These errors arise due to throughput or performance degradation, e.g. response time degradation.
9) Recovery Errors – These are error handling faults.
10) Standards / Procedures – These don't cause errors in and of themselves but rather create an environment where errors are introduced as the system is tested and modified.

SOFTWARE TESTING STRATEGIES –

A software testing strategy provides a road map for the software developer, the Quality Assurance organization and the customer.
A strategy must provide guidance for the practitioner and a set of milestones for the manager.
Common characteristics of software testing strategies include the following :
a) Testing begins at the module level & works outward toward the integration of the
entire system.
b) Different testing techniques are appropriate at different times.
c) Testing is conducted by the developer and, for large projects, by an independent test group.
d) Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

TYPES OF TESTING –

UNIT TESTING –

• Unit testing procedures utilize white box methods and concentrate on testing individual programming units.
• These units are sometimes referred to as modules or atomic modules, and they represent the smallest programming entity.
• Unit testing is essentially a set of path tests performed to examine the many different paths through the modules.
• These tests are conducted to prove that all paths in the program are solid and without error and will not cause abnormal termination of the program or undesirable results.

INTEGRATION TESTING –

It focuses on testing multiple modules working together. Two basic types of integration are usually used:
1) Top Down Integration –
• It starts at the top of the program hierarchy and travels down its branches.
• This can be done either depth first (shortest path down to the deepest level) or breadth first (across the hierarchy) before proceeding to the next level.
• The main advantage of this type of integration is that the basic skeleton of the program / system can be seen and tested early.
• The main disadvantage is the use of program stubs until the actual modules are written.
2) Bottom Up Integration –
• This type of integration has the lowest level modules built and tested first, on an individual basis and in clusters, using test drivers.
• This ensures each module is fully tested before it is utilized by its calling module.
• The main advantage is uncovering errors in critical modules early.
• The main disadvantage is the fact that most or many modules must be built before a working program can be presented.
Integration testing procedure can be performed in four ways:
• Top-Down Strategy
• Bottom up Strategy
• Big Bang Strategy
• Sandwiched Strategy

Top-Down Strategy – This integration is basically an approach where modules are developed and tested starting at the top level of the programming hierarchy and continuing with the lower levels.
It is an incremental approach because we proceed one level at a time. It can be done in either a "depth" or "breadth" manner.

Bottom Up Strategy – This Process starts with building and testing the low level modules
first, working its way up the hierarchy. Because the modules at the low levels are very specific,
we may need to combine several of them into what is sometimes called a cluster or build in
order to test them properly.

Big Bang Strategy – In this, all the modules or builds are constructed and tested independently of each other, and when they are finished, they are all put together at the same time.

Sandwiched Strategy – It is the most widely used integration strategy, as it aims at overcoming the limitations of both the top-down and bottom-up strategies. This strategy is a mixture of both top-down and bottom-up approaches.

FUNCTIONAL TESTING –
• In this each function implemented in the module is identified. From this test data are
devised to test each function separately.
• Functional testing verifies that an application does what it is supposed to do and doesn’t
do what it shouldn’t do.
• Functional testing includes testing of all the interfaces and should therefore involve the
clients in the process.
• Functional testing can be difficult for the following reasons:
o Functions within a module may consist of lower level functions, each of which must be tested first.
o Lower level functions may not be independent.
o Functionality may not coincide with module boundaries; this tends to blur the distinction between module testing and integration testing.

Functional testing falls into two categories:


Positive Functional Testing
Negative Functional Testing
Positive Functional Testing:
This testing entails exercising the application's functions with valid input and verifying that the outputs are correct.
Negative Functional Testing:
This testing involves exercising the application using a combination of invalid inputs, unexpected conditions and other "out of bounds" scenarios.

REGRESSION TESTING –
• This testing is the process of running a subset of previously executed integration and function tests to ensure that program changes have not degraded the system.
• The regression phase concerns the effect of newly introduced changes on all the previously integrated code.
• It may be conducted manually or using automated tools.
• The basic approach is to incorporate selected test cases into a regression bucket that is run periodically to find regression problems.

SYSTEMS TESTING –
A system test checks for unexpected interactions between the units & modules and
also evaluates the system for compliance with functional requirements.

ACCEPTANCE TESTING –
An acceptance test is the process of executing the test cases agreed with the
customer as being an adequate representation of user requirements .
BLACK BOX TESTING –
• It is also known as Functional Testing, Specification Testing, Behavioral Testing,
Data Driven Testing and input / output driven testing.
• In functional testing the structure of the program is not considered. Test cases are
decided solely on the basis of requirements or specifications of the program or
module and the internals of the module or the program are not considered for
selection of test cases.
• Basis for deciding test cases in functional testing is the requirements or specifications
of the system or module.
• Black Box Testing attempts to uncover the following :
a) Incorrect Functions
b) Data structure Errors
c) Missing Functions
d) Performance Errors
e) Initialization & Termination Errors
f) External Databases Access Errors
Advantages of Black Box Testing:
1) The test is unbiased because the designer and the tester are independent of each other.
2) The tester does not need knowledge of any specific programming language.
3) The test is done from the point of view of the user, not the designer.
4) Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing –


1) The test can be redundant if the software designer has already run a test case.
2) The test cases are difficult to design.
BLACK BOX TESTING TECHNIQUES –
a) Equivalence Class Partitioning
b) Boundary value Analysis
c) Cause Effect Graphs
d) Comparison Testing
Equivalence Class Partitioning:
The idea is to partition the input domain of the system into a finite number of equivalence classes such that each member of a class would behave in a similar fashion.

This technique increases the efficiency of software testing, as the number of input states is drastically reduced. This technique involves two steps:
a) Identification of equivalence classes
b) Generating the test cases

Identification of Equivalence Classes -


Following guidelines are used-
a) Partition any input domain into a minimum of two sets: valid values and invalid values.
b) If a range of valid values is specified as input, select one valid input within the range and two invalid inputs just outside each end of the range, e.g. if the required height is 160 cm – 170 cm, then for testing one valid input can be 165 cm and two invalid inputs can be 159 cm and 171 cm.
c) If a set of defined values is specified as input, define one valid input from within the set and one invalid input outside the set.
Ex – if the degree required is B.E., M.E. or Ph.D., then a valid input is M.E. and an invalid input is MBA.
d) If a specific number (N) of valid values or an enumeration of values (e.g. 10, 15, 16, 18, 20) is specified as the input space: valid input – 16, invalid inputs – 8 and 22.
e) If a mandatory value is defined in the input space, say the input must start with $, define one valid input where the first character is $ and one without $.
ii) Generating the test cases –
a) Assign a unique identification number to each valid and invalid class of input.
b) Write test cases covering all valid classes of inputs.
c) Write test cases covering all invalid classes of inputs, such that no test case contains more than one invalid input, so that each failure can be traced to a single invalid input.

Boundary Value Analysis –


It has been observed that boundaries are a very good place for errors to occur. Hence, if test cases are designed for the boundary values of the input domain, the efficiency of testing increases, thereby increasing the probability of detecting errors.

Guidelines for BVA –
a) For a given range of input values, say 10.0 to 20.0, identify valid inputs at the ends of the range (10.0 and 20.0) and invalid inputs just outside it (9.99 and 20.01), and write test cases for the same.
b) For a given set of input values, say (5, 7, 8, 10, 15), identify the minimum and maximum values as valid inputs (5 and 15) and the values just outside them as invalid inputs (4 and 16).

Cause Effect Graphs –


• It establishes relationships between logical input combinations called causes and the corresponding actions called effects.
• The causes and effects are represented using a Boolean graph. The left-hand column of the figure gives the various logical associations among causes and effects.
• The right-hand column indicates potential constraining associations that might apply to other causes or effects.

Guidelines for Cause Effect Graph –


- Causes and effects are listed for modules and an identifier is assigned to each.
- Cause effect graph is developed.
- The graph is converted to a decision table.
- Decision tables rules are converted to test cases.

Symbols used in the graph: IDENTITY, NOT, OR, AND.
Comparison Testing –
• For critical applications requiring fault tolerance, a number of independent versions of the software are developed from the same specifications.
• If the output from each version is the same, then it is assumed that all implementations are correct.
• If the outputs differ, each version is examined to see which is responsible for the differing output.
• It is not foolproof, because if the specification applied to all versions is incorrect, all versions will likely reflect the error and may produce the same output.
WHITE BOX TESTING –
• It is also known as Glass box Testing, Structural Testing, Clear Box Testing, Open Box
testing, Logic Driven Testing and Path Oriented Testing.
• White Box testing is used to test internals of the program. This is done by examining
the program structure and by deriving test cases from the program logic.
• Test cases are derived to ensure that
a) All independent paths in the program are executed at least once.
b) All logical decisions are tested i.e all possible combinations of true or false are tested.
c) All loops are tested.
d) All internal data structures are tested for their validity.

Advantages of White Box Testing –


a) Forces the test developer to reason carefully about the implementation.
b) Approximates the partitioning done by execution equivalence.
c) Reveals errors in hidden code.
d) Beneficial side effects.
e) Optimizations.
Disadvantages of White Box Testing –
a) Expensive
b) Misses cases omitted in the code.
Types of White Box Testing –
a) Basis Path Testing
b) Structural Testing
c) Logic based Testing
d) Fault Based Testing

BASIS PATH TESTING –

- It allows the design and definition of a basis set of execution paths.
- The test cases created from the basis set allow the program to be executed in such a way as to examine each possible path through the program by executing each statement at least once.
Following steps are followed –
a) Construction of flow graph from Source Code or flow charts.
b) Identification of independent paths.
c) Computation of cyclomatic complexity.
d) Test case design.
Construction of flow graph – A flow graph consists of a number of nodes, represented as circles, connected by directed arcs.

Computation of Cyclomatic Complexity –


Cyclomatic complexity is a metric to measure the logical complexity of a program.
This value defines the number of independent paths in the program that must be executed in order to ensure that all statements in the program are executed at least once.
It gives us the maximum number of test cases to be designed.

C(g) = E – N + 2

where E = number of edges in the flow graph and N = number of nodes.
ii) Identification of independent paths –
• An independent path in a program is a path consisting of at least one new condition or set of processing statements.
• In the case of a flow graph, it must contain at least one new edge which is not traversed or included in other paths.
• The number of independent paths is given by the value of the cyclomatic complexity.
For the example flow graph, C(g) = E – N + 2P = 15 – 12 + 2 = 5 (with P = 1 connected component).

The independent paths are:

Path 1 : a – b – d – e
Path 2 : a – b – d – f – n – b – d – e
Path 3 : a – b – c – g – j – k – m – n – b – d – e
Path 4 : a – b – c – g – j – l – m – n – b – d – e
Path 5 : a – b – c – g – h – i – n – b – d – e

Design of test cases :


• Test cases can now be designed for execution of independent paths as identified.
• This ensures that all statements are executed at least once.

STRUCTURAL TESTING –

• It examines the source code and analyses what is present in the code.
• Structural testing techniques are often dynamic, meaning that the code is executed during analysis.
• This implies a high test cost due to compilation or interpretation, linkage, file management and execution time.
• Structural testing cannot expose errors of code omission, but it can estimate test suite adequacy in terms of code coverage, that is, execution of components by the test suite, or its fault finding ability.
• Following are some important types of structural testing –
a) Statement Coverage Testing
b) Branch Coverage Testing
c) Condition Coverage Testing
d) Loop Coverage Testing
e) Path Coverage testing
f) Domain & Boundary Testing
g) Data flow Testing

Statement Coverage Testing –


- In this, a series of test cases is run such that each statement is executed at least once.
- A weakness of this approach is that there is no guarantee that all outcomes of branches are properly tested.
- Ex –
if (x > 50 && y < 10) {
    z = x + y;
    printf("%d\n", z);
}
x = x + 1;
• For this example, the test case values x = 60 and y = 5 are sufficient to execute all the statements.
• The main disadvantage of statement coverage is that it does not handle control structures fully.
• It does not report whether loops reach their termination condition or not, and it is insensitive to the logical operators.
• It only ensures that all statements are executed at least once.

Branch Coverage Testing :


• In this, a series of tests is run to ensure that all branches are tested at least once.
• It is also called decision coverage.
• Techniques such as statement or branch coverage are called structural tests.
• It requires sufficient test cases for each program decision or branch to be executed so that each possible outcome occurs at least once.
if ((x < 20) && (y > 50))
    total = total + x;
else
    total = total + y;
• This can be tested using one test case in which the decision (x < 20) && (y > 50) is true and one in which it is false.
• Disadvantage – this may ignore branches within a Boolean expression.
Ex – if (x && (y || add_digit()))
    printf("success\n");
else
    printf("failure\n");
• Here, branch coverage can completely exercise the control structure without ever calling the function add_digit():
• the expression is true when x and y are both true, and false when x is false, and in neither case is add_digit() evaluated.

Condition Testing :
• Condition testing is done to test all logical conditions in a program module.
• It differs from branch coverage only when multiple conditions must be evaluated to reach a decision.
• Multi-condition coverage requires sufficient test cases to exercise all possible combinations of conditions in a program decision.
• Test cases are designed so that each condition takes on every possible value at least once.
Eg. if ((x) && (y) && (!z))
    printf("valid\n");
else
    printf("invalid\n");
Hence, two test cases in which every condition takes each value are:
i) x = T, y = T, z = F
ii) x = F, y = F, z = T
In multi-condition coverage, all combinations of the conditions are tested.
Loop coverage Testing :

This requires sufficient test cases for all program loops to be executed for 0, 1, 2 and many iterations, covering initialization, typical running and termination conditions.

Path Coverage Testing :

• It is the most powerful form of white box testing: all paths are tested.
• This criterion requires sufficient test cases for each feasible path (basis path etc.), from the start to the exit of a defined program segment, to be executed at least once.

Domain & Boundary Testing :

• It is a form of path coverage.
• Path domains are subsets of the input that cause execution of unique paths.
• Input data can be derived from the program control graph. Test inputs are chosen to exercise each path and also the boundaries of each domain.

Data Flow Testing :


• Data flow testing focuses on the points at which variables receive values and the points at which those values are used.
• This technique requires sufficient test cases for each feasible data flow to be executed at least once.
• Data flow analysis studies the sequence of actions on variables along program paths.
• Terminology used during data flow testing:
• Def – a statement in the program where an initial value is assigned to a variable, e.g.
i = 1; sum = 0;
Basic Block – a set of consecutive statements that can be executed without branching, e.g.
sum = sum + next;
bill_value = bill_value + sum;
next++;

c_use – also called computation use; it occurs when a variable is used in a computation. A path can therefore be identified starting from the definition and ending at a statement where the variable is used in a computation, called a dc-path.
p_use – similar to c_use, except that the variable appears in a condition. A path can be identified starting from the definition of the variable and ending at the statement where the variable appears in the predicate, called a dp-path.
all_uses – in this, paths are identified starting from the definition of a variable to its every possible use.
du_use – in this, a path is identified starting from the definition of a variable and ending at a point where it is used but its value is not changed, called a du-path.
Logic Based Testing –

Logic based testing is used when the input domain and the resulting processing are amenable to a decision table representation.

Fault Based Testing-


• It attempts to show the absence of certain classes of faults in the code.
• The code is analyzed for uninitialized or unreferenced variables, parameter type checking, etc.

Mutation Testing –
• It is also a fault based testing technique.
• It comprises two steps.
• A set of program variants called mutants is generated by introducing known bugs representing typical errors.
• Possible changes that can be made are:
• Scalar variable replacement, i.e. each occurrence of a variable x is replaced with every other variable in scope.
• Arithmetic operator replacement, i.e. each occurrence of an arithmetic operator is replaced with all possibilities.
• A program with a mutated statement is called a mutant.
Eg. INPUT A
INPUT B
TOTAL = 0
FOR I = 1 TO A
IF (B>0)
TOTAL = TOTAL +B
INPUT B
NEXT I
PRINT TOTAL
• In this program, mutants can be generated by changing variable A to B or TOTAL.
• If we want to change an operator, then TOTAL = TOTAL + B becomes TOTAL = TOTAL – B.
• Test cases are designed to distinguish the mutants from the original program; the quality of the test data is determined by how many mutants it can distinguish.
• A test case differentiates two programs if different results are produced by the two.
• A mutant is said to be killed if it is detected by a test case; the objective is to find test cases which can kill the mutants.

DEFECT TESTING :

• It is aimed at discovering latent defects before the system is delivered.
• It demonstrates the presence of program faults, not their absence.
• A successful defect test is one that causes the system to perform incorrectly and thus exposes a defect.
• Testing of a system has two objectives:
1) It is intended to show that the system meets its specification.
2) It is intended to exercise the system in such a way that latent defects are exposed.
INTERFACE TESTING :

• It is intended to discover defects in the interfaces of objects or modules.
• Interface defects may arise because of errors made in reading the specifications, specification misunderstandings, or invalid timing assumptions.
• It is useful for object oriented software development.
• Types of interface errors include: parameter interface errors, shared memory errors, message passing interface errors, procedural interface errors, etc.

ALPHA TESTING:

• Acceptance testing is also called alpha testing.
• Custom made systems are developed for a single client. The alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system requirements.

BETA TESTING :

• When a system is to be marketed as a software product, a testing process called beta testing is often used.
• It involves delivering the system to a number of potential customers who agree to use it.
• They report problems to the system developers.
• This exposes the product to real use and detects errors that may not have been
anticipated by the system builders.

CODE REVIEWS AND WALKTHROUGHS
• A code review for a module is carried out after the module is successfully compiled and all syntax errors are eliminated.
• Code reviews are extremely cost effective strategies for reducing coding errors in order to produce high quality code.
• Two types of reviews are carried out on the code of a module – code walkthrough and code inspection.

CODE WALKTHROUGHS:
• It is an informal code analysis technique.
• In this technique, after a module has been coded, it is successfully compiled and all
syntax errors are eliminated.
• Some members of the development team are given the code a few days before the walk
through meeting to read & understand the code.
• Each member selects some test cases and simulates execution of the code by hand.
• The main objectives of the walkthrough are to discover algorithmic and logical errors in the code.
• Guidelines for the walkthrough are –
o The team performing the code walkthrough should be neither too big nor too small, ideally 3 to 7 members.
