
Software Engineering

Unit 3
Design concepts and principles
Software Design
1. Software design deals with transforming the customer requirements, as described by the SRS document, into a form that
is implementable using a programming language.
2. For a design to be easily implementable in a conventional programming language, the following items must be designed during
the design phase.
 Different modules required to implement the design solution.
 Control relationships among the identified modules. The relationship is also known as the call relationship or
invocation relationship among modules.
 Interface among different modules. The interface among different modules identifies the exact data items exchanged
among the modules.
 Data structures of the individual modules.
 Algorithms required to implement the individual modules.
3. Thus, the objective of the design phase is to take the SRS document as the input and produce the above-mentioned documents
before completion of the design phase.
4. We can broadly classify the design activities into two important parts:
a) Preliminary (or High level) Design
 High level design means identification of different modules, the control relationships and the definitions of
the interfaces among them.
 The outcome of the high level design is called the program structure or software architecture.
 Many different types of notations have been used to represent a high level design. A popular way is to use a tree-like
diagram called the structure chart to represent the control hierarchy in a high level design.
b) Detailed Design
 During detailed design, the data structure and the algorithms of different modules are designed.
 The outcome of the detailed design stage is usually known as the module specification document.

Difference between Good Design and a Bad Design


No.  Good Design                                        Bad Design
1.   Does not exhibit the ripple effect, i.e. a         Exhibits the ripple effect: a change in one
     change in one part of the system does not          part of the system breaks other parts.
     affect other parts of the system.
2.   It is simple.                                      It is complex.
3.   The system can be extended with a change in        A new function cannot be added without
     one place.                                         breaking an existing function.
4.   The logic is near the data it operates on.         We cannot remember where all the implicitly
                                                        linked changes have to take place.
5.   A good design costs less.                          A bad design costs more.
6.   No need for logic duplication.                     Logic has to be duplicated.
Design Model
The design principles and concepts establish a foundation for the creation of the design model that encompasses representation of data,
architecture, interface and components. Like the analysis model before it, each of these design representations is tied to the others, and all
can be traced back to software requirements.
The Entity-Relationship Diagrams (ERD), the Data Flow Diagrams (DFD), the State Transition
Diagrams (STD) and the Data Dictionaries (DD) that are constructed during the requirements phase are directly mapped onto the
corresponding design model as shown below.

1. Data Design – It transforms the information domain model created during analysis into the data structures that will be required to
implement the software. The data objects (or entities) and the relationships defined in ER diagram and the detailed data content
depicted in the Data Dictionary provide the basis for the data design activity. Detailed data design occurs as each software
component is designed.
2. Architectural Design – It defines the relationship between major structural elements of the software, the “design patterns” that can
be used to achieve the requirements that have been defined for the system. This design representation forms the framework of a
computer based system. It can be derived from the system specification, the analysis model and the interaction of subsystems
defined within the analysis model.
3. Interface Design – It describes how the software communicates within itself, with systems that interoperate with it and with
humans who use it. An interface implies a flow of information and a specific type of behavior. Therefore, data and control flow
diagrams provide much of the information required for interface design.
4. Component Level Design – It transforms structural elements of the software architecture into a procedural description of software
components. Information obtained from ER-Diagrams, Data Flow diagrams or STDs, serves as the basis for component design.
During design we make decisions that will ultimately affect the success of software construction and, as important, the ease with which
software can be maintained. But why is design so important?
The importance of software design can be stated with a single word: Quality. Design is the place where quality is fostered in
software engineering. Design provides us with representations of software that can be assessed for quality. Design is the only way that we
can accurately translate a customer’s requirements into a finished software product or system. Software design serves as the foundation for
all the software engineering and software support activities that follow. Without design, we risk building an unstable system: one that will
fail when small changes are made; one that may be difficult to test; one whose quality cannot be assessed until late in the software process,
when time is short and many dollars have already been spent.

Design Process
Software design is an iterative process through which requirements are translated into a “blueprint” for constructing the software. Initially,
the blueprint depicts a holistic view of software i.e., the design is represented at a high level of abstraction – a level that can be directly traced
to the specific system objectives. As design iterations occur, subsequent refinement leads to design representations at much lower levels of
abstraction.

McGlaughlin suggests 3 characteristics that serve as a guide for evaluation of a good design:
1. The design must implement all of the explicit requirements contained in the analysis model and it must accommodate all of the
implicit requirements desired by the customer.
2. The design must be a readable, understandable guide for those who generate code and for those who test and support the software.
3. The design should provide a complete picture of the software, addressing all data, functional and behavioral domains.
Each of these characteristics is actually a goal of the design process.

Quality Guidelines that lead to a good design


1. A design should exhibit an architecture that
a) Has been created using recognizable architectural styles or patterns;
b) Is composed of components that exhibit good design characteristics, and
c) Can be implemented in an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and are drawn from recognizable data
patterns.
5. A design should lead to the components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with the
external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.

Design Principles
Software design is both a process and a model. The ‘design process’ is a sequence of steps that enables the designer to describe all aspects of
the software to be built. The ‘design model’ is, however, the equivalent of an architect’s plan for a house. It begins by representing the totality
of the thing to be built (e.g. a 3D rendering of the house) and slowly refines it into more detail. Similarly, the design model that is created for software
provides a variety of different views of the computer software.
Davis – Design Principles
1. The design process should not suffer from “tunnel vision”. A good designer should consider alternative approaches, judging each
based on the requirements of the problem, and the resources available to do the job.
2. The design should be traceable to the analysis model. It is necessary to have a means for tracking how requirements have been
satisfied by the design model.
3. The designer should not reinvent the wheel, i.e., should use the set of design patterns already encountered, so that new patterns are not
reinvented. Time is short and resources are limited. Design time should be invested in representing truly new ideas and integrating
patterns that already exist.
4. The design should “minimize the intellectual distance” between the software and the problem as it exists in the real world i.e., the
structure of the software design should mimic the structure of the problem domain.
5. The design should exhibit uniformity and integration. A design is uniform if it appears that one person developed the entire thing.
Rules of style and format should be defined for a design team before design work begins. A design is integrated if care is taken in
defining interfaces between design components.
6. The design should be structured to accommodate change.
7. The design should be structured to degrade gently, even when aberrant data, events or operating conditions are encountered. Well
designed software should never “bomb”. It should be designed to accommodate unusual circumstances and if it must terminate
processing, do so in a graceful manner.
8. Design is not coding, coding is not design.
9. The design should be assessed for quality as it is being created, not after the fact.
10. The design should be reviewed to minimize conceptual (semantic) errors.
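Principle 7 (graceful degradation) can be sketched in a few lines. The function name and the fallback value below are illustrative, not from the text:

```python
def safe_parse_age(text):
    """Degrade gently: aberrant input produces a safe fallback value
    instead of an unhandled crash ("bombing")."""
    try:
        age = int(text)
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age
    except (ValueError, TypeError):
        return None  # caller checks for None and proceeds gracefully

# Usage: a caller receiving None can report the problem and, if it must
# terminate processing, do so in a graceful manner.
```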

Design Concepts
There are 9 design concepts that we must study:
1. Abstraction
2. Refinement
3. Modularity
4. Software Architecture
5. Control Hierarchy
6. Structural Partitioning
7. Data Structure
8. Software Procedure
9. Information Hiding

1. Abstraction
a) When we consider a modular solution to any problem, many levels of abstraction can be posed. At the highest level of abstraction, a
solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more procedural
orientation is taken. Problem-oriented terminology is coupled with implementation – oriented terminology in an effort to state a
solution. Finally, at the lowest level of abstraction, the solution is stated in a manner that can be directly implemented.
b) Each step in the software process is a refinement in the level of abstraction of the software solution. During system engineering,
software is allocated as an element of a computer-based system. During software requirements analysis, the software solution is
stated in terms "that are familiar in the problem environment." As we move through the design process, the level of abstraction is
reduced. Finally, the lowest level of abstraction is reached when source code is generated.
c) As we move through different levels of abstraction, we work to create procedural and data abstractions. A procedural abstraction is
a named sequence of instructions that has a specific and limited function. An example of a procedural abstraction would be the
word open for a door. Open implies a long sequence of procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob
and pull door, step away from moving door, etc.).
d) A data abstraction is a named collection of data that describes a data object. In the context of the procedural abstraction open, we
can define a data abstraction called door. Like any data object, the data abstraction for door would encompass a set of attributes that
describe the door (e.g., door type, swing direction, opening mechanism, weight, dimensions). It follows that the procedural
abstraction open would make use of information contained in the attributes of the data abstraction door.
e) Many modern programming languages provide mechanisms for creating abstract data types. For example, the Ada package is a
programming language mechanism that provides support for both data and procedural abstraction. The original abstract data type is
used as a template or generic data structure from which other data structures can be instantiated.
f) Control abstraction is the third form of abstraction used in software design. Like procedural and data abstraction, control
abstraction implies a program control mechanism without specifying internal details. An example of a control abstraction is
the synchronization semaphore used to coordinate activities in an operating system.
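The door example above can be rendered as a small Python sketch; the attribute names are illustrative:

```python
# Data abstraction: "door" names a collection of attributes that
# describe a door object.
class Door:
    def __init__(self, door_type, swing_direction, weight):
        self.door_type = door_type
        self.swing_direction = swing_direction
        self.weight = weight
        self.is_open = False

# Procedural abstraction: "open" names a specific, limited function
# without exposing the long sequence of steps it implies.
def open_door(door):
    # (walk to the door, grasp the knob, turn, pull, step away...)
    door.is_open = True
    return door
```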

2. Refinement
a) Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. A program is developed by successively
refining levels of procedural detail. A hierarchy is developed by decomposing a macroscopic statement of function (a procedural
abstraction) in a stepwise fashion until programming language statements are reached.
b) Refinement is actually a process of elaboration. We begin with a statement of function (or description of information) that is
defined at a high level of abstraction. That is, the statement describes function or information conceptually but provides no
information about the internal workings of the function or the internal structure of the information. Refinement causes the
designer to elaborate on the original statement, providing more and more detail as each successive refinement (elaboration)
occurs.
c) Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data and yet
suppress low-level details. Refinement helps the designer to reveal low-level details as design progresses. Both concepts aid the
designer in creating a complete design model as the design evolves.
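As a small illustration of stepwise refinement (the task is hypothetical, not from the text), a macroscopic statement of function is elaborated until programming language statements are reached:

```python
# Refinement 1 (highest abstraction):
#   "compute the average of a list of numbers"
# Refinement 2 (sub-steps):
#   sum the numbers; count them; divide the sum by the count
# Refinement 3 (programming language statements):
def average(numbers):
    total = 0
    count = 0
    for n in numbers:      # sum the numbers and count them
        total += n
        count += 1
    return total / count   # divide the sum by the count
```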

3. Modularity
a) It has been stated that "modularity is the single attribute of software that allows a program to be intellectually manageable".
Monolithic software (i.e., a large program composed of a single module) cannot be easily grasped. The number of control paths,
span of reference, number of variables, and overall complexity would make understanding close to impossible.
b) Let C(x) be a function that defines the perceived complexity of a problem x, and
E(x) be a function that defines the effort (in time) required to solve a problem x.
For two problems, p1 and p2, if C(p1) > C(p2)
it follows that E(p1) > E(p2)
i.e., it does take more time to solve a difficult problem.
Also, from experimentation it has been found that C(p1 + p2) > C(p1) + C(p2)
i.e., the perceived complexity of a problem that combines p1 and p2 is greater than the perceived complexity when each problem is
considered separately. So,
E(p1 + p2) > E(p1) + E(p2)
This leads to a "divide and conquer" conclusion—i.e., it is easier to solve a complex problem when you break it into manageable
pieces. Taken to the extreme, this would imply that if we subdivide software indefinitely, the effort required to develop it will become
negligibly small! Unfortunately, other forces come into play, causing this conclusion to be (sadly) invalid.
c) Consider the following graph

Fig: Total software cost and effort curves versus number of modules


i.e., the effort (cost) required to develop an individual software module does decrease as the total number of modules increases,
(cost/module decreases as the number of modules increases). Given the same set of requirements, more modules means smaller
individual size. However, as the number of modules grows, the effort (cost) associated with integrating the module also grows.
These characteristics lead to a total cost or effort curve as shown in the figure above. There is a number, M, of modules that would result in
minimum development cost, but we do not have the necessary sophistication to predict M with assurance. The curves do
provide useful guidance when modularity is considered: we should modularize, but care should be taken to stay in the vicinity of
M. Under-modularity and over-modularity should both be avoided.

4. Software Architecture
a) It covers the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. So,
architecture is the hierarchical structure of program components (modules), the manner in which these components interact and
the structure of data that are used by the components.
b) An architectural design can be represented using any of the five models given below:
 Structural Models – They represent architecture as an organized collection of program components.
 Framework Models – They increase the level of design abstraction by attempting to identify repeatable architectural
design frameworks.
 Dynamic Models – They address the behavioral aspects of the program architecture (states).
 Process Models – They focus on the design of the business or technical process that the system must accommodate.
 Functional Models – They can be used to represent the functional hierarchy of a system.
c) A number of different architectural description languages (ADLs) have been developed to represent these models.

5. Control Hierarchy
a) It is also called the program structure.
b) It represents the organization of program components (modules) and implies a hierarchy of control.
c) It does not represent procedural aspects of software such as sequence of processes, occurrence or order of decisions or repetitions
of operations nor is it necessarily applicable to all architectural styles.
d) The most commonly used notation to represent control hierarchy is the tree–like diagram that represents hierarchical control for call
and return architectures.

Fig Structural terminology for a call and return architectural style


e) ‘Depth’ and ‘Width’ provide an indication of the number of levels of control and the overall span of control, respectively.
f) ‘Fan-out’ is a measure of the number of modules that are directly controlled by another module. For e.g., the fan-out of M is 3.
g) ‘Fan-in’ indicates how many modules directly control a given module. For e.g., the fan-in of r is 4.
h) A module that controls another module is said to be superordinate to it and, conversely, a module controlled by another is said to be
subordinate to the controller. For e.g., module M is superordinate to modules a, b and c. Module h is subordinate to module e and is
ultimately subordinate to module M.
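Fan-out and fan-in can be computed directly from a call graph. The call relationships below are hypothetical stand-ins for the figure, chosen so that the quoted values (fan-out of M is 3, fan-in of r is 4) hold:

```python
# Hypothetical call (invocation) relationships: each key directly
# controls the modules in its list.
calls = {
    "M": ["a", "b", "c"],
    "a": ["r"],
    "b": ["r"],
    "c": ["r", "e"],
    "e": ["r"],
}

def fan_out(module):
    """Number of modules directly controlled by `module`."""
    return len(calls.get(module, []))

def fan_in(module):
    """Number of modules that directly control `module`."""
    return sum(module in callees for callees in calls.values())
```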

6. Structural Partitioning
If the architectural style of a system is hierarchical, then the program structure can be partitioned both – horizontally and vertically.
a) Horizontal Partitioning – It defines separate branches of the modular hierarchy for each major program function.
Control modules, represented in a darker shade (the hatched ones in the figure), are used to coordinate communication between and execution of
the functions. The simplest approach to horizontal partitioning defines 3 partitions: input, data transformation (or processing) and
output. It has many benefits:
i. Software that is easier to test.
ii. Software that is easier to maintain.
iii. Propagation of fewer side effects.
iv. Software that is easier to extend.

Its negative side (drawback) is that it causes data to be passed across module interfaces and can complicate the overall control of
program flow.
b) Vertical Partitioning – Also called factoring, it suggests that control and work should be distributed top-down in the program
structure. Top-level modules should perform control functions and do little actual processing work, whereas the modules that
reside low in the structure should be the workers, performing all input, computation and output tasks.

So, it can be seen that a change in a control module (high in the structure) will have a higher probability of propagating side effects
to the modules that are subordinate to it, whereas a change to a worker module (at a low level) is less likely to cause the propagation of
side effects. In general, changes to computer programs revolve around changes to input, computation (or transformation) and
output. The overall control structure of the program is far less likely to change. For this reason, vertically partitioned structures
are less likely to be susceptible to side effects when changes are made and will therefore be more maintainable, a key quality
factor.

7. Data Structure
a) Data structure is a representation of the logical relationship among individual elements of data.
b) Data structure dictates the organization, methods of access, degree of associativity and processing alternatives for information.
c) It may be a scalar item (or a variable), a sequential vector (array) or a linked list.
d) Note that data structures like program structures can be represented at different levels of abstraction.
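The three kinds of structure listed in (c) look like this in Python (a list stands in for the sequential vector):

```python
# Scalar item: a single named unit of data.
count = 0

# Sequential vector: contiguous elements accessed by index.
scores = [10, 20, 30]

# Linked list: each node holds a value and a reference to the next.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

head = Node(1, Node(2, Node(3)))
```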

8. Software Procedure
a) It focuses on the processing details of each module individually.
b) Procedures must provide a precise specification of processing, including sequence of events, exact decision points,
repetitive operations and even data organization and structure.
c) A procedural representation of software is layered i.e., we will have procedure for super ordinate modules first and then for
subordinate modules.
9. Information Hiding
a) It suggests that the modules should be specified and designed so that information (procedure and data) contained within a module is
not accessible to other modules that have no need for such information.
b) Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with
one another only the information necessary to achieve the software function.
c) Abstraction helps to define the procedural entities that make up the software. Hiding defines and enforces access constraints to both
procedural detail within a module and any local data structure used by the module.
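A minimal sketch of information hiding (the stack module is illustrative): callers see only push, pop and peek, while the list that actually stores the items is an internal detail.

```python
class Stack:
    def __init__(self):
        self._items = []          # local data structure: hidden detail

    # The three operations below are the module's interface; they are
    # the only information other modules need.
    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

# If the hidden list were later replaced by another structure, no
# calling module would need to change.
```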

Modular Design
Cohesion
Cohesion is the measure of strength of the association of elements within a module. Modules whose elements are strongly and genuinely
related to each other are desired. A module should be highly cohesive.

Types of Cohesion
There are 7 types of cohesion in a module:
1. Coincidental Cohesion – A module has coincidental cohesion if its elements have no meaningful relationship to one another. It
happens when a module is created by grouping unrelated instructions that appear repeatedly in other modules.
2. Logical Cohesion – A logically cohesive module is one whose elements perform similar activities and in which the activity to be
executed is chosen from outside the module; control parameters are passed in to select among the functions. For example,
instructions grouped together by category of activity, as in a switch statement, or a module that performs all input & output
operations.
3. Temporal Cohesion – A temporally cohesive module is one whose elements are functions that are related in time. It occurs when
the elements are grouped because they are all executed at a single point in time. For ex. A module performing
program initialization.
4. Procedural Cohesion – A procedurally cohesive module is one whose elements are involved in different activities, but the
activities are sequential. Procedural cohesion exists when processing elements of a module are related and must be executed in a
specified order. For example, Do-while loops.
5. Communication Cohesion – A communicationally cohesive module is one whose elements perform different functions, but each
function references the same input information or output. For example, Error handling modules.
6. Sequential Cohesion – A sequentially cohesive module is one whose functions are related such that output data from one function
serves as input data to the next function. For example, deleting a file and updating the master record or function calling another
function.
7. Functional Cohesion – A functionally cohesive module is one in which all of the elements contribute to a single, well-defined task.
Object-oriented languages tend to support this level of cohesion better than earlier languages do. For example, a module that
computes the square root of a number.
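The strongest level and one of the weakest can be contrasted in a short sketch; both functions are illustrative:

```python
# Functional cohesion: every statement contributes to one
# well-defined task.
def root_mean_square(a, b, c):
    return ((a * a + b * b + c * c) / 3) ** 0.5

# Logical cohesion: unrelated activities grouped into one module and
# selected by a control parameter passed in from outside.
def io_module(action, data):
    if action == "read":
        return data.strip()        # stand-in for a read operation
    elif action == "write":
        return len(data)           # stand-in for a write operation
```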

Coupling
Coupling is the measure of the interdependence of one module to another. Modules should have low coupling. Low coupling minimizes the
"ripple effect" where changes in one module cause errors in other modules.

Types of Coupling
There are 6 types of coupling between modules:
1. No direct Coupling – These are independent modules and so are not really components of a single system. For e.g., this occurs
between modules a and d.
2. Data Coupling – Two modules are data coupled if they communicate by passing parameters. This has been told to you as a "good
design principle" since day one of your programming instruction. For e.g., this occurs between module a and c.
3. Stamp Coupling – Two modules are stamp coupled if they communicate via a passed data structure that contains more information
than necessary for them to perform their functions. For e.g., this occurs between modules b and a.

4. Control Coupling – Two modules are control coupled if they communicate using at least one "control flag". For e.g., this occurs
between modules d and e.
5. Common Coupling – Two modules are common coupled if they both share the same global data area. Another design principle you
have been taught since day one: don't use global data. For e.g., this occurs between modules c, g and k.
6. Content Coupling – Two modules are content coupled if:
i. One module changes a statement in another (Lisp was famous for this ability).
ii. One module references or alters data contained inside another module.
iii. One module branches into another module.
For e.g., this occurs between modules b and f.

Difference between Cohesion and Coupling


No.  Cohesion                                          Coupling
1.   Cohesion is the measure of the strength of        Coupling is the measure of the interdependence
     association of elements within a module.          of one module on another.
2.   A module should be highly cohesive.               Modules should have low coupling.
3.   Cohesion is a property or characteristic of       Coupling is a property of a collection of
     an individual module.                             modules.
4.   High cohesion keeps related logic in one          High coupling spreads the effects of a change
     place, so a change is confined to one module.     across the modules that are coupled to it.
5.   A failure inside one cohesive module need not     Tightly coupled modules may cease to function
     bring down the other modules.                     if one or more of the modules they depend on
                                                       go down.

Design Notations
In software design the representation schemes are of fundamental importance. A good notation can clarify the interrelationships and actions of
interest, while a poor notation can complicate and interfere with good practice. At least three levels of design specifications exist: external
design specifications, which describe the external characteristics of a software system; architectural design specifications, which describe the
structure of the system; and detailed design specifications, which describe control flow, data representation, and other algorithmic details
within the modules. Some common design notations are as follows:

Data flow diagrams (Bubble chart)


1) These are directed graphs in which the nodes specify processing activities and arcs (lines with arrow heads) specify the data
items transmitted between the processing nodes.
2) Like flowcharts, data flow diagrams can be used at any desired level of abstraction.
3) Unlike flowcharts, data flow diagrams do not indicate decision logic or conditions under which various processing nodes in the diagram
might be activated.
4) They might represent data flow:
a) Between individual statements or blocks of statements in a routine,
b) Between sequential routines,
c) Between concurrent processes,
d) Between geographically remote processing units.
5) A DFD has two basic levels of development, as follows:

a) A Level 0 DFD, also known as the Fundamental System Model or Context Model, represents the entire system as a single bubble
with incoming arrowed lines as the input data and outgoing arrowed lines as output.

b) A Level 1 DFD might contain 5 or 6 bubbles with interconnecting arrows. Each process represented here is a detailed view
of a function shown in the Level 0 DFD.

 Here the rectangular boxes are used to represent the external entities that may act as input or output outside the system.

 Round circles are used to represent any kind of transformation process inside the system.

 Arrow-headed lines are used to represent the data objects and their direction of flow.

6) Following are some guidelines for developing a DFD:

a) The Level 0 DFD should depict the entire software system as a single bubble.

b) Primary input and output should be carefully noted.

c) For the next level of DFD, the candidate processes, data objects and data stores should be recognized distinctly.

d) All the arrows and bubbles should be labeled with meaningful names.

e) Information flow continuity should be maintained at all levels.

f) One bubble should be refined at a time.

Example: Let us consider a software system called the root mean square (RMS) calculating system, which reads 3 integers in the range
-1000 to +1000, calculates their RMS value and then displays it.

The context level diagram of RMS is shown below: the external entity "User" sends the data items to the single "RMS calculator" bubble, which returns the RMS value to the User.

Fig: Context level diagram of RMS software


And its level 1 DFD: the data items flow into "validate input" (0.1); valid data flows to "compute mean square" (0.2); the mean square (m-sq.) flows to "display result" (0.3), which outputs the RMS value.

Fig: Level 1 DFD of RMS software


And its level 2 DFD refines bubble 0.2: the inputs a, b and c are each squared by a "square" bubble (0.2.1, 0.2.2, 0.2.3); the squares a-sq., b-sq. and c-sq. feed "mean" (0.2.4); the mean square m-sq. feeds "root" (0.2.5), which produces the RMS.

Fig: Level 2 DFD of RMS software
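The transforms in the DFDs above map naturally onto functions. A minimal sketch, with function names mirroring the bubble labels:

```python
def validate_input(data_items):
    # Bubble 0.1: keep only values in the range -1000 to +1000.
    return [x for x in data_items if -1000 <= x <= 1000]

def compute_mean_square(valid_data):
    # Bubbles 0.2.1-0.2.4: square each item, then take the mean.
    squares = [x * x for x in valid_data]
    return sum(squares) / len(squares)

def root(m_sq):
    # Bubble 0.2.5: square root of the mean square.
    return m_sq ** 0.5

def rms_calculator(data_items):
    # Context-level bubble: data items in, RMS value out.
    return root(compute_mean_square(validate_input(data_items)))
```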


Structure Charts
 They are used during architectural design for documenting hierarchical structure, parameters and interconnections in a system.
 In a structure chart a module is represented by a box with the module name written in the box.
 An arrow from a module A to a module B represents that module A invokes module B. The arrow is labeled with the parameters
received by B as input and the parameters returned by B.
Repetitions and Selections:
 Repetitions can be represented by a looping arrow around the arrows joining the subordinate modules.
 If the invocation of modules depends on the outcome of some decision, it is represented by a small diamond with the arrows coming
out of the diamond toward the sub-modules.

 It differs from a flow chart in two ways –


a) A structure chart has no decision boxes and
b) The elements need not be shown in a sequential order.
 Structure charts are useful for representing a model of the system; however, they are not sufficient for representing the final design, as they do not
convey all the information needed about the design.

User Interface Design Principles


Theo Mandel defines three “golden rules” for user interface design:
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.

1. Place the user in control


Mandel defines a number of design principles that allow the user to maintain control:
a) Define interaction modes in a way that does not force a user into unnecessary or undesired actions.
b) Provide for flexible interaction.
c) Allow user interaction to be interruptible and undoable.
d) Streamline interaction as skill levels advance and allow the interaction to be customized.
e) Hide technical internals from the casual user.
f) Design for direct interaction with objects that appear on the screen.

2. Reduce the user’s memory load


The more a user has to remember, the more error-prone will be the interaction with the system. It is for this reason that a well-designed user
interface does not tax the user’s memory. Whenever possible, the system should “remember” pertinent information and assist the user with
an interaction scenario that assists recall. Mandel defines design principles that enable an interface to reduce the user’s memory load:
a) Reduce demand on short – term memory.
b) Establish meaningful defaults.
c) Define shortcuts that are intuitive.
d) The visual layout of the interface should be based on a real world metaphor.
e) Disclose information in a progressive fashion.
3. Make the interface consistent
The interface should present and acquire information in a consistent fashion. This implies that
(1) All visual information is organized according to a design standard that is maintained throughout all screen displays,
(2) Input mechanisms are constrained to a limited set that are used consistently throughout the application, and
(3) Mechanisms for navigating from task to task are consistently defined and implemented.
Mandel defines a set of design principles that help make the interface consistent:
a) Allow the user to put the current task into a meaningful context.
b) Maintain consistency across a family of applications.
c) If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.

The User Interface Design Process


The design process for user interfaces is iterative and can be represented using a spiral model. Referring to Figure below, the user interface
design process encompasses four distinct framework activities:
a) User, task, and environment analysis and modeling
b) Interface design
c) Interface construction
d) Interface validation
The spiral shown in Figure below implies that each of these tasks will occur more than once, with each pass around the spiral representing
additional elaboration of requirements and the resultant design. In most cases, the implementation activity involves prototyping—the only
practical way to validate what has been designed.

Fig User Interface Design Process


a) User, task, and environment analysis and modeling
The initial analysis activity focuses on the profile of the users who will interact with the system. Skill level, business understanding, and
general receptiveness to the new system are recorded; and different user categories are defined. For each user category, requirements
are elicited. In essence, the software engineer attempts to understand the system perception for each class of users.
Once general requirements have been defined, a more detailed task analysis is conducted. Those tasks that the user performs to
accomplish the goals of the system are identified, described, and elaborated (over a number of iterative passes through the spiral).

The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are
 Where will the interface be located physically?
 Will the user be sitting, standing, or performing other tasks unrelated to the interface?
 Does the interface hardware accommodate space, light, or noise constraints?
 Are there special human factors considerations driven by environmental factors?

b) Interface design
The information gathered as part of the analysis activity is used to create an analysis model for the interface. Using this model as a basis, the
design activity commences. The goal of interface design is to define a set of interface objects and actions (and their screen representations)
that enable a user to perform all defined tasks in a manner that meets every usability goal defined for the system.
c) Interface construction
The implementation activity normally begins with the creation of a prototype that enables usage scenarios to be evaluated. As the iterative
design process continues, a user interface tool kit may be used to complete the construction of the interface.
d) Interface validation
Validation focuses on
1. The ability of the interface to implement every user task correctly, to accommodate all task variations, and to achieve all
general user requirements;
2. The degree to which the interface is easy to use and easy to learn; and
3. The users’ acceptance of the interface as a useful tool in their work.

UML

UML (Unified Modeling Language) is a general-purpose, graphical modeling language in the field of Software Engineering. UML is
used to specify, visualize, construct, and document the artifacts (major elements) of the software system. It was initially developed by
Grady Booch, Ivar Jacobson, and James Rumbaugh in 1994-95 at Rational software, and its further development was carried out
through 1996. In 1997, it got adopted as a standard by the Object Management Group.


What is UML
The UML stands for Unified modeling language, is a standardized general-purpose visual modeling language in the field of Software
Engineering. It is used for specifying, visualizing, constructing, and documenting the primary artifacts of the software system. It helps
in designing and characterizing, especially those software systems that incorporate the concept of Object orientation. It describes the
working of both the software and hardware systems.
The UML was developed in 1994-95 by Grady Booch, Ivar Jacobson, and James Rumbaugh at Rational Software. In 1997, it was
adopted as a standard by the Object Management Group (OMG).

The Object Management Group (OMG) is an association of several companies that controls the open standard UML. The OMG was
established to build an open standard that mainly supports the interoperability of object-oriented systems. It is not restricted within
the boundaries, but it can also be utilized for modeling the non-software systems. The OMG is best recognized for the Common
Object Request Broker Architecture (CORBA) standards.

Goals of UML
o Since it is a general-purpose modeling language, it can be utilized by all the modelers.
o UML came into existence after the introduction of object-oriented concepts to systemize and consolidate the object-oriented
development, due to the absence of standard methods at that time.
o The UML diagrams are made for business users, developers, ordinary people, or anyone who wants to understand the
system, whether that system is software or non-software.
o Thus it can be concluded that the UML is a simple modeling approach that is used to model all the practical systems.

Characteristics of UML
The UML has the following features:

o It is a generalized modeling language.


o It is distinct from other programming languages like C++, Python, etc.
o It is interrelated to object-oriented analysis and design.
o It is used to visualize the workflow of the system.
o It is a pictorial language, used to generate powerful modeling artifacts.

Conceptual Modeling
Before moving ahead with the concept of UML, we should first understand the basics of the conceptual model.

A conceptual model is composed of several interrelated concepts. It makes it easy to understand the objects and how they interact
with each other. This is the first step before drawing UML diagrams.

Following are some object-oriented concepts that are needed to begin with UML:

o Object: An object is a real world entity. There are many objects present within a single system. It is a fundamental building
block of UML.
o Class: A class is a software blueprint for objects, which means that it defines the variables and methods common to all the
objects of a particular type.
o Abstraction: Abstraction is the process of portraying the essential characteristics of an object to the users while hiding the
irrelevant information. Basically, it is used to envision the functioning of an object.
o Inheritance: Inheritance is the process of deriving a new class from the existing ones.
o Polymorphism: It is a mechanism of representing objects having multiple forms used for different purposes.
o Encapsulation: It binds the data and the operations that work on that data together as a single unit, hiding the internal details from the outside world.
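The object-oriented concepts listed above can be illustrated with a short sketch. The class names and values here are purely illustrative and not drawn from the text.

```python
class Shape:                       # class: a blueprint; also an abstraction
    def __init__(self, name):
        self._name = name          # encapsulation: state kept inside the object

    def area(self):                # essential behavior exposed to users
        raise NotImplementedError  # details hidden; subclasses supply them

class Circle(Shape):               # inheritance: Circle is derived from Shape
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):                # polymorphism: same message, circle-specific form
        return 3.14159 * self._radius ** 2

class Square(Shape):               # another class derived from Shape
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

shapes = [Circle(1), Square(2)]    # objects: instances of the classes
for s in shapes:
    print(s.area())                # one interface, many forms
```

Each call to `area()` is dispatched to the object's own class, which is exactly the polymorphism and abstraction described above.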

OO Analysis and Design


OO analysis identifies the objects in a system, and OO design combines those identified objects into a design. So, the main purpose of OO analysis is identifying
the objects for designing a system. The analysis can also be done for an existing system, and it is more efficient when the objects are
identified correctly. Once we have identified the objects, their relationships are then identified, and the design is produced.
The purpose of OO is given below:

o To identify the objects of a system.


o To identify their relationships.
o To make a design that is executable when the concepts of OO are employed.

UML-Building Blocks
UML is composed of three main building blocks, i.e., things, relationships, and diagrams. Together, these building blocks combine to form a complete UML
model, and each plays an essential role in developing UML diagrams. The basic UML
building blocks are enlisted below:

1. Things
2. Relationships
3. Diagrams

Things
Anything that is a real-world entity or object is termed a thing. Things can be divided into several different categories:

o Structural things
o Behavioral things
o Grouping things
o Annotational things

Structural things

Structural things are the nouns of a model: they depict its static parts and display the physical and conceptual
components. They include class, object, interface, node, collaboration, component, and use case.

Class: A class is a set of objects that share the same attributes and operations; it outlines the functionality and properties of its objects. It can also represent an abstract class
whose functionalities are not defined.

Object: An instance of a class that describes the behavior and the functions of a system. The notation of an object is similar to that of a
class; the only difference is that the object name is always underlined.
Interface: A set of operations that describes the functionality of a class, which is implemented whenever an interface is implemented.

Collaboration: It represents the interaction between things that is done to meet the goal. It is symbolized as a dotted ellipse with its
name written inside it.

Use case: Use case is the core concept of object-oriented modeling. It portrays a set of actions executed by a system to achieve the
goal.

Actor: It comes under the use case diagrams. It is an object that interacts with the system, for example, a user.
Component: It represents the physical part of the system.

Node: A physical element that exists at run time.

Behavioral Things

Behavioral things are the verbs of a model: they encompass its dynamic parts and depict the behavior of a system. They include state
machines, activity diagrams, and interaction diagrams.

State Machine: It defines a sequence of states that an entity goes through in response to events during its lifetime. It keeps a record of
the distinct states of a system component.
Activity Diagram: It portrays all the activities accomplished by different entities of a system. It is represented the same as that of a
state machine diagram. It consists of an initial state, final state, a decision box, and an action notation.

Interaction Diagram: It is used to envision the flow of messages between several components in a system.
Diagrams
The diagrams are the graphical representation of models, incorporating symbols and text. Each symbol has a specific
meaning in the context of a UML diagram. UML 2.0 defines thirteen different types of diagrams, each
with its own set of symbols, and each diagram manifests a different dimension, perspective, and view of the
system.

UML diagrams are classified into three categories that are given below:

1. Structural Diagram
2. Behavioral Diagram
3. Interaction Diagram

Structural Diagram: It represents the static view of a system by portraying the structure of a system. It shows several objects residing
in the system. Following are the structural diagrams given below:

o Class diagram
o Object diagram
o Package diagram
o Component diagram
o Deployment diagram

Behavioral Diagram: It depicts the behavioral features of a system. It deals with dynamic parts of the system. It encompasses the
following diagrams:

o Activity diagram
o State machine diagram
o Use case diagram

Interaction diagram: It is a subset of behavioral diagrams. It depicts the interaction between two objects and the data flow between
them. Following are the several interaction diagrams in UML:

o Timing diagram
o Sequence diagram
o Collaboration diagram

UML-Diagrams
The UML diagrams are categorized into structural diagrams, behavioral diagrams, and interaction diagrams. The
diagrams are hierarchically classified as follows:
1. Structural Diagrams
Structural diagrams depict a static view or structure of a system. They are widely used in the documentation of software architecture. They
embrace class diagrams, composite structure diagrams, component diagrams, deployment diagrams, object diagrams, and package
diagrams. They present an outline for the system and emphasize the elements that must be present in the system being modeled.

o Class Diagram: Class diagrams are one of the most widely used diagrams. It is the backbone of all the object-oriented
software systems. It depicts the static structure of the system. It displays the system's class, attributes, and methods. It is
helpful in recognizing the relation between different objects as well as classes.
o Composite Structure Diagram: The composite structure diagrams show parts within the class. It displays the relationship
between the parts and their configuration that ascertain the behavior of the class. It makes full use of ports, parts, and
connectors to portray the internal structure of a structured classifier. It is similar to class diagrams, just the fact it represents
individual parts in a detailed manner when compared with class diagrams.
o Object Diagram: It describes the static structure of a system at a particular point in time. It can be used to test the accuracy
of class diagrams. It represents distinct instances of classes and the relationship between them at a time.
o Component Diagram: It portrays the organization of the physical components within the system. It is used for modeling
execution details. It determines whether the desired functional requirements have been considered by the planned
development or not, as it depicts the structural relationships between the elements of a software system.
o Deployment Diagram: It presents the system's software and its hardware by telling what the existing physical components
are and what software components are running on them. It produces information about system software. It is incorporated
whenever software is used, distributed, or deployed across multiple machines with dissimilar configurations.
o Package Diagram: It is used to illustrate how the packages and their elements are organized. It shows the dependencies
between distinct packages. It manages UML diagrams by making it easily understandable. It is used for organizing the class
and use case diagrams.

2. Behavioral Diagrams:
Behavioral diagrams portray a dynamic view of a system or the behavior of a system, which describes the functioning of the system. It
includes use case diagrams, state diagrams, and activity diagrams. It defines the interaction within the system.

o State Machine Diagram: It is a behavioral diagram that portrays the system's behavior utilizing finite state transitions. It is also
known as a state-chart diagram. It models the dynamic behavior of a class in response to external stimuli.
o Activity Diagram: It models the flow of control from one activity to the other. With the help of an activity diagram, we can
model sequential and concurrent activities. It visually depicts the workflow as well as what causes an event to occur.
o Use Case Diagram: It represents the functionality of a system by utilizing actors and use cases. It encapsulates the functional
requirement of a system and its association with actors. It portrays the use case view of a system.

3. Interaction Diagrams
Interaction diagrams are a subclass of behavioral diagrams that give emphasis to object interactions and also depicts the flow
between various use case elements of a system. In simple words, it shows how objects interact with each other and how the data
flows within them. It consists of communication, interaction overview, sequence, and timing diagrams.

o Sequence Diagram: It shows the interactions between the objects in terms of messages exchanged over time. It delineates in
what order and how the object functions are in a system.
o Communication Diagram: It shows the interchange of sequence messages between the objects. It focuses on objects and
their relations. It describes the static and dynamic behavior of a system.
o Timing Diagram: It is a special kind of sequence diagram used to depict the object's behavior over a specific period of time.
It governs the change in state and object behavior by showing the time and duration constraints.
o Interaction Overview diagram: It is a mixture of activity and sequence diagram that depicts a sequence of actions to
simplify the complex interactions into simple interactions.

UML Association vs. Aggregation vs. Composition


In UML diagrams, relationships are used to link several things. It is a connection between structural, behavioral, or grouping things.
Following are the standard UML relationships enlisted below:

o Association
o Dependency
o Generalization
o Realization

Association
Association is a structural relationship in which different objects are linked within the system. It exhibits a binary
relationship between the objects representing an activity; for example, a teacher can be associated with multiple
students.
It is represented by a line between the classes, optionally followed by an arrow that indicates the direction of navigation; when the arrow is on both sides,
it is called a bidirectional association. We can specify the multiplicity of an association by adding adornments on the line
that denotes the association.

Example:

1) A single teacher has multiple students.

2) A single student can associate with many teachers.

Composition and aggregation are two subsets of association. In both cases, objects of one class appear as parts of objects
of another class; the difference is that in composition, the child does not exist independently of its parent, whereas in
aggregation, the child is not dependent on its parent, i.e., it is standalone. An aggregation is a special form of association, and
composition is a special form of aggregation.
Aggregation
Aggregation is a subset of association and represents a collection of different things. It represents a has-a relationship and is more specific than an
association: it describes a part-whole or part-of relationship. It is a binary association, i.e., it involves only two classes, and it is a kind of
relationship in which the child is independent of its parent.

For example:

Here we are considering a car and a wheel. A car cannot move without a wheel, but a wheel can be used independently
with a bike, scooter, cycle, or any other vehicle. The wheel object can exist without the car object, which makes this an
aggregation relationship.

Composition
Composition is a stronger form of aggregation, and it portrays the whole-part relationship. It depicts dependency between a composite
(parent) and its parts (children), which means that if the composite is discarded, its parts get deleted with it.

For example, a composition association relationship connects the Person class with the Brain class,
Heart class, and Legs class. If the Person is destroyed, the Brain, Heart, and Legs are also discarded.
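The two examples above can be contrasted in a short sketch. This is an illustrative mapping of the car/wheel and person/brain examples into code, assuming the common object-lifetime interpretation; the class and attribute names are my own.

```python
class Wheel:
    """Created independently of any vehicle: suitable for aggregation."""

class Car:
    def __init__(self, wheels):
        # Aggregation: the Car holds Wheels it did not create and does not
        # own; the Wheel objects keep existing even if the Car is discarded.
        self.wheels = wheels

class Person:
    class Brain:
        """Defined inside Person: it has no meaning apart from a Person."""

    class Heart:
        """Likewise owned entirely by its enclosing Person."""

    def __init__(self):
        # Composition: the parts are created by, and live inside, the whole;
        # discarding the Person discards its Brain and Heart with it.
        self.brain = Person.Brain()
        self.heart = Person.Heart()

spare = Wheel()                                   # a standalone wheel
car = Car([Wheel(), Wheel(), Wheel(), Wheel()])   # four aggregated wheels
person = Person()                                 # brain and heart composed inside
```

The design choice is about lifetime and ownership: pass in externally created parts for aggregation, create and own the parts internally for composition.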

Object-Oriented Software Development


In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The state is distributed among the
objects, and each object handles its own state data. For example, in a Library Automation Software, each library member may be a
separate object with its own data and functions that operate on that data. The functions defined for one object cannot refer to or change the data
of other objects. Objects have their own internal data, which represents their state. Similar objects form a class; in other words, each object
is a member of some class. Classes may inherit features from a superclass.

The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects. For example, person, banks, company, and users
are considered as objects. Every entity has some attributes associated with it and has some methods to perform on the
attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes,
which an object can have and methods, which represents the functionality of the object.
3. Messages: Objects communicate by message passing. A message consists of the identity of the target object, the name of the
requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or
function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant
and the amplification of the essential.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and operations are linked to a single
unit. Encapsulation not only bundles essential information of an object together but also restricts access to the data and
methods from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower or sub-classes can import,
implement, and re-use allowed variables and functions from their immediate superclasses. This property of OOD is called
inheritance. It makes it easier to define a specific class and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism by which methods performing similar tasks but varying in arguments can
be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for
different types. Depending upon how the service is invoked, the respective portion of the code gets executed.

Expected Benefits of OOD

Many benefits are cited for OOD, often to an unrealistic degree. Some of these potential benefits are:

 Faster Development: OOD has long been touted as leading to faster development. Many of the claims of potentially
reduced development time are correct in principle, if a bit overstated.
 Reuse of Previous work: This is the benefit cited most commonly in literature, particularly in business periodicals. OOD
produces software modules that can be plugged into one another, which allows creation of new programs. However, such
reuse does not come easily. It takes planning and investment.
 Increased Quality: Increases in quality are largely a by-product of this program reuse. If 90% of a new application consists of
proven, existing components, then only the remaining 10% of the code has to be tested from scratch. That observation
implies an order-of-magnitude reduction in defects.
 Modular Architecture: Object-oriented systems have a natural structure for modular design: objects, subsystems,
framework, and so on. Thus, OOD systems are easier to modify. OOD systems can be altered in fundamental ways without
ever breaking up since changes are neatly encapsulated. However, nothing in OOD guarantees or requires that the code
produced will be modular. The same level of care in design and implementation is required to produce a modular structure in
OOD, as it is for any form of software development.
 Client/Server Applications: By their very nature, client/server applications involve transmission of messages back and forth
over a network, and the object-message paradigm of OOD meshes well with the physical and conceptual architecture of
client/server applications.
 Better Mapping to the Problem Domain: This is a clear winner for OOD, particularly when the project maps to the real
world. Whether objects represent customers, machinery, banks, sensors or pieces of paper, they can provide a clean, self-
contained implementation which fits naturally into human thought processes.

Therefore, OOD offers significant benefits in many domains, but those benefits must be considered realistically. There are many
pitfalls that await those who venture into OOD development. These pitfalls threaten to undermine the acceptance and use of object-
oriented development before its promise can be achieved. Due to the excitement surrounding OOD, expectations are high and delays
and failures, when they come, will have a greater negative impact.

The purpose of this section is to summarize the different types of pitfalls that developers should try to avoid when developing applications
in an OOD environment. In addition, suggested approaches for avoiding these pitfalls can enable system developers to truly capitalize on
the benefits of OOD.

Potential Pitfalls of OOD

When new software development technologies are adopted, it is often the case that, during the early stages of adoption, they are
misused, abused or create totally unrealistic expectations. Lack of understanding and unrealistic expectations of new technologies
seem to be the common denominators for the delay of their proper use. These types of problems can be avoided or minimized by a
clear view of the pitfalls that developers face when adopting new technologies.

Many of the pitfalls described below are common to any software development, but they are especially significant in the context of OOD. Some
of the pitfalls are commonly believed to be eliminated by OOD when they are not. Further, they may threaten to derail OOD projects
in spite of the benefits of OOD.

The next section analyzes such potential pitfalls from the viewpoint of the various entities involved in the OOD process
and recommends tactics that can help prevent or avoid these problems.

Coding Standards and Guidelines


Different modules specified in the design document are coded in the coding phase according to the module specifications. The main goal of the
coding phase is to translate the design document, prepared after the design phase, into code in a high-level language and then to unit test this code.
Good software development organizations want their programmers to adhere to a well-defined, standard style of coding called coding
standards. They usually make their own coding standards and guidelines depending on what suits their organization best and based on the types of
software they develop. It is very important for programmers to follow the coding standards; otherwise the code may be rejected during code
review.
Purpose of Having Coding Standards:
 A coding standard gives a uniform appearance to code written by different engineers.
 It improves the readability and maintainability of the code and also reduces its complexity.
 It helps in code reuse and makes errors easier to detect.
 It promotes sound programming practices and increases the efficiency of programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.

2. Standard headers for different modules:


For better understanding and maintenance of the code, the headers of different modules should follow a standard format. A typical header
format, as used in various companies, contains:
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output parameters
 Global variables accessed or modified by the module
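A standard header of the kind listed above might look as follows. This is a hypothetical sketch: the module name, author, dates, and function are all invented for illustration.

```python
"""
Module   : payroll_calc                       (hypothetical module name)
Created  : 2023-01-15
Author   : A. Developer
History  : 2023-02-01  initial release
Synopsis : Computes net pay from hours worked and an hourly rate.
Functions: compute_net_pay(hours, rate) -> float
             hours : hours worked (input)
             rate  : hourly pay rate (input)
             returns the pay after tax (output)
Globals  : TAX_RATE -- read, never modified, by this module
"""

TAX_RATE = 0.2  # flat tax rate; the only global this module accesses

def compute_net_pay(hours, rate):
    """Return the pay remaining after deducting the flat tax."""
    gross = hours * rate
    return gross * (1 - TAX_RATE)
```

The header tells a maintainer what the module does, what it exposes, and which globals it touches, before a single line of logic is read.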

3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
 Meaningful and understandable variables name helps anyone to understand the reason of using it.
 Local variables should be named using camel case lettering starting with small letter (e.g. localData) whereas Global variables
names should start with a capital letter (e.g. GlobalData). Constant names should be formed using capital letters only
(e.g. CONSDATA).
 It is better to avoid the use of digits in variable names.
 The names of the function should be written in camel case starting with small letters.
 The name of the function must describe the reason of using the function clearly and briefly.
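The naming rules above can be illustrated in a short fragment. Note that these conventions follow the text (camel case locals and functions, capitalized globals, all-capital constants) rather than any particular language's official style guide, and every name here is invented for illustration.

```python
MAX_RETRIES = 3        # constant: capital letters only

GlobalCounter = 0      # global variable: name starts with a capital letter

def computeTotalPrice(unitPrice, quantity):
    """Function name in camel case, starting with a small letter,
    and describing clearly and briefly what the function does."""
    localTotal = unitPrice * quantity   # local variable: camel case, small first letter
    return localTotal

print(computeTotalPrice(2.5, 4))
```

A reader can now tell a name's scope and role (constant, global, local, function) at a glance, which is the whole point of the convention.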

4. Indentation:
Proper indentation is very important for increasing the readability of the code. To make the code readable, programmers should use
white space properly. Some of the spacing conventions are given below:
 There must be a space after giving a comma between two function arguments.
 Each nested block should be properly indented and spaced.
 Proper Indentation should be there at the beginning and at the end of each block in the program.
 All braces should start from a new line and the code following the end of braces also start from a new line.
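A small sketch of these spacing conventions, using a made-up function for illustration:

```c
/* Braces start on a new line; each nested block is indented one
   level; a space follows each comma between function arguments. */
int maxOfThree(int a, int b, int c)
{
    int best = a;
    if (b > best)
    {
        best = b;
    }
    if (c > best)
    {
        best = c;
    }
    return best;
}
```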

5. Error return values and exception handling conventions:


All functions that encounter an error condition should either return 0 or 1, by a fixed convention, to simplify debugging.
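One way to apply this convention is sketched below, with a hypothetical function that returns 1 on success and 0 on failure:

```c
/* Returns 1 on success and 0 on failure, so every call site can be
   checked the same way, which simplifies debugging. */
int safeDivide(int numerator, int denominator, int *result)
{
    if (denominator == 0 || result == 0)
    {
        return 0;   /* error: cannot divide or nowhere to store result */
    }
    *result = numerator / denominator;
    return 1;       /* success */
}
```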
Coding guidelines, on the other hand, give general suggestions about the coding style to be followed to improve the
understandability and readability of the code. Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:


Code should be easily understandable. Complex code makes maintenance and debugging difficult and expensive.

7. Avoid using an identifier for multiple purposes:


Each variable should be given a descriptive and meaningful name indicating its purpose. This is not possible if an
identifier is used for multiple purposes, which can confuse the reader. Moreover, it leads to more difficulty during future
enhancements.
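The point can be shown with a small made-up example: instead of reusing one identifier as both a loop counter and a running total, each purpose gets its own descriptive name:

```c
/* Each identifier serves exactly one purpose: 'index' is only the
   loop counter, 'total' is only the running sum. Reusing one name
   for both would confuse the reader. */
int sumUpTo(int n)
{
    int index;      /* one purpose: loop counter */
    int total = 0;  /* one purpose: running sum  */
    for (index = 1; index <= n; index++)
    {
        total += index;
    }
    return total;
}
```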

8. Code should be well documented:


The code should be properly commented so that it is easy to understand. Comments on statements increase the understandability of
the code.

9. Length of functions should not be very large:


Lengthy functions are very difficult to understand. That is why each function should be small enough to carry out a single small task, and
lengthy functions should be broken into smaller ones.
10. Try not to use GOTO statements:
GOTO statements make a program unstructured, which reduces the understandability of the program and also makes debugging
difficult.
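As a sketch of the structured alternative, a loop with an early return does the job that a GOTO-based search would otherwise need unconditional jumps for (the function and data here are illustrative):

```c
/* A structured for-loop with an early return replaces the
   unconditional jumps a GOTO-based search would use, keeping
   the control flow easy to follow and debug. */
int findValue(const int *items, int count, int wanted)
{
    int i;
    for (i = 0; i < count; i++)
    {
        if (items[i] == wanted)
        {
            return i;   /* found: index of the value */
        }
    }
    return -1;          /* not found */
}
```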

Advantages of Coding Guidelines:


• Coding guidelines increase the efficiency of the software and reduce development time.
• Coding guidelines help in detecting errors in the early phases, which reduces the extra cost incurred by the software project.
• If coding guidelines are maintained properly, the code becomes more readable and understandable, which reduces its
complexity.
• They reduce the hidden cost of developing the software.

What is Code Review?


Code Review is a systematic examination that can find and remove vulnerabilities in the code, such as memory leaks and buffer
overflows.
 Technical reviews are well documented and use a well-defined defect detection process that includes peers and technical experts.
 It is ideally led by a trained moderator, who is NOT the author.
 This kind of review is usually performed as a peer review without management participation.
 Reviewers prepare for the review meeting and prepare a review report with a list of findings.
• Technical reviews may be quite informal or very formal and can serve a number of purposes, including but not limited to discussion, decision
making, evaluation of alternatives, finding defects, and solving technical problems.

Where does Code Review fit in?

Milestones, Walkthroughs and Inspections


These activities help expose errors, increase project communication, keep the project on schedule, and verify that the
design satisfies the requirements.

Milestones
1) These are a set of occasions in project design where the proper progress of the project can be assessed, in such a way that
corrective measures can be taken if necessary.
a) The two major milestones are:
i. Preliminary Design Review (PDR) :- It is normally held near the end of architectural design and prior to detailed design.
ii. Critical Design Review (CDR) :- It is normally held at the end of detailed design and prior to implementation.
b) The major goal of the PDR is to demonstrate that the externally observable characteristics and architectural structure of the product
will satisfy the customer's requirements. Functional characteristics, performance attributes, external interfaces, user dialogs, report
formats, exception conditions and exception handling, and future enhancements are reviewed during the PDR.
c) The CDR provides a final management decision point: build or cancel the system.
Phases :-      Analysis                             Design                                          Implementation

Activities :-  Planning & Requirement definition    External, Architectural and Detailed design     Coding, Debugging and Testing

Reviews :-     SRR                                  PDR                                             CDR

SRR :- Software Requirements Review.
PDR :- Preliminary Design Review.
CDR :- Critical Design Review.

Walkthroughs
1) A structured walkthrough is an in-depth, technical review of some aspect of a software system. Walkthroughs can be held at any time, during
any phase of a software project.
2) A walkthrough team consists of 4 to 6 people. The person whose material is being reviewed is responsible for providing copies of the
review materials to the members of the walkthrough group in advance of the session, and the team members are
responsible for understanding the material before the session.
3) During the walkthrough, the reviewee "walks through" the material while the reviewers look for errors, request clarification, and
explore problem areas in the material under review.
4) High-level managers should not attend walkthrough sessions, as the aim of a walkthrough is error detection, not corrective action. It is
important to note that the material is reviewed, not the person whose material it is.

Inspections
1) Design inspections are conducted by teams of trained inspectors who have a check list of items to be examined.
2) Special forms are used to record problems encountered.
3) A typical inspection team consists of a Moderator or Secretary, a Designer, an Implementor and a Tester. The Designer, Implementor
and Tester may or may not be the people responsible for the actual design, implementation and testing of the product being inspected.

The team members are trained for their specific roles and typically conduct two 2-hour sessions per day.
