Software Eng Unit 3

The document discusses software design including its definition, objectives, concepts, levels, architecture styles, modularization, design structure charts and pseudocode. Software design transforms user requirements into a suitable form for programming and implementation. It considers a system as a set of components or modules with defined behaviors and boundaries.

Software Design:

• Software design is a mechanism to transform user requirements into
a suitable form, which helps the programmer in software coding and
implementation.

• Software design is both a process and a model. The design process
is a sequence of steps that enables the designer to describe all
aspects of the software to be built. Creative skill, past experience, a
sense of what makes "good" software, and an overall commitment to
quality are examples of critical success factors for a competent
design. Design deals with representing the client's requirements, as
described in the SRS (Software Requirement Specification) document,
in a form that is easily implementable using a programming language.

• The software design phase is the first step in the SDLC (Software
Development Life Cycle) that moves the concentration from the problem
domain to the solution domain. In software design, we consider the
system to be a set of components or modules with clearly defined
behaviors and boundaries.

Objectives:
1. Correctness: The design should be correct as per the requirements.
It should correctly implement all the functionalities of the system.
2. Completeness: The design should include all components such as data
structures, modules, and external interfaces.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: The design should be able to accommodate changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be
easily maintained by other designers.

Basic Concepts of Software Design:
The software design concept simply means the idea or principle behind the
design. It describes how you plan to solve the problem of designing
software, that is, the logic or thinking behind how you will design the
software. It allows the software engineer to create the model of the
system, software, or product that is to be developed or built.
Software Design Levels
Software design yields three levels of results:

• Architectural Design:
➢ The process of defining a collection of hardware and
software components and their interfaces to establish the
framework for the development of a computer system.
➢ The architectural design is the highest abstract version of the
system. It identifies the software as a system with many
components interacting with each other. At this level, the
designers get an idea of the proposed solution domain.
• High-level Design: The high-level design breaks the ‘single entity,
multiple components’ concept of architectural design into a less-abstracted
view of sub-systems and modules and depicts their interaction with each
other. High-level design focuses on how the system, along with all of its
components, can be implemented in the form of modules. It recognizes the
modular structure of each sub-system and the relations and interactions
among them.
• Detailed Design (Low Level Design): Detailed design deals with the
implementation part of what is seen as a system and its sub-systems in
the previous two designs.

➢ It is more detailed towards modules and their
implementations.
➢ It defines the logical structure of each module and its
interfaces to communicate with other modules.
➢ Low Level Design, in short LLD, is like detailing the HLD; it
refers to the component-level design process.
➢ It is a detailed description of each and every module: it
includes the actual logic for every system component and it
goes deep into each module's specification. It is also
known as micro-level/detailed design.
➢ It is created by designers and developers. It converts the
High Level Solution into a Detailed Solution. It is created as
the second step, after High Level Design.
➢ Low-level design (LLD) is a component-level design process
that follows a step-by-step refinement process.
➢ This process can be used for designing data structures, the
required software architecture, source code and, ultimately,
performance algorithms.
➢ The LLD phase is the stage where the actual software
components are designed.

Layered architecture:

• A number of different layers are defined, with each layer performing a
well-defined set of operations.
• Each layer performs operations that progressively become closer to the
machine instruction set.
• The layered architecture style is the most common architecture style.
Modules or components with similar functionalities are organized into
horizontal layers; therefore, each layer performs a specific role within
the application.
• The layered architecture style does not define how many layers must be
in the application.
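
As a minimal sketch (hypothetical class and function names, not from the
original notes), the idea of horizontal layers can be expressed in Python,
with each layer only calling the layer directly below it:

# Persistence layer: closest to the machine (here, just an in-memory dict).
class UserRepository:
    def __init__(self):
        self._rows = {}
    def save(self, user_id, name):
        self._rows[user_id] = name
    def find(self, user_id):
        return self._rows.get(user_id)

# Business layer: application rules; talks only to the persistence layer.
class UserService:
    def __init__(self, repository):
        self._repository = repository
    def register(self, user_id, name):
        if not name:
            raise ValueError("name must not be empty")
        self._repository.save(user_id, name)
    def lookup(self, user_id):
        return self._repository.find(user_id)

# Presentation layer: formats output; talks only to the business layer.
def show_user(service, user_id):
    return "User %s: %s" % (user_id, service.lookup(user_id))

service = UserService(UserRepository())
service.register(1, "Asha")
print(show_user(service, 1))   # prints: User 1: Asha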

Modularization:
• Modularization is a technique to divide a software system into
multiple discrete and independent modules, which are
expected to be capable of carrying out task(s) independently.
These modules may work as basic constructs for the entire
software. Designers tend to design modules such that they can
be executed and/or compiled separately and independently.
• Modular design naturally follows the ‘divide and conquer’
problem-solving strategy, and there are many other benefits
attached to the modular design of software.
Modularity has several key benefits:

• Testing & Debugging
Since each component is self-contained, you mitigate dependency
issues. It becomes easy to test each component in isolation by using
a mocking or isolation framework.

• Reusability
If you discover you need the same functionality in a new project, you
can package the existing functionality into something reusable by
multiple projects without copying and pasting the code.

• Extensibility
Your software now runs as a set of independent components
connected by an abstraction layer.
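
For example, a minimal Python sketch (hypothetical names, not from the
notes) of the testing benefit: because the module depends only on an
abstraction, it can be tested in isolation with a stand-in object:

# A payment module that depends on an abstraction, not a concrete gateway.
class PaymentModule:
    def __init__(self, gateway):
        self._gateway = gateway        # any object with a charge() method
    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._gateway.charge(amount)

# A fake gateway used only for testing; no real dependency is needed.
class FakeGateway:
    def __init__(self):
        self.charged = []
    def charge(self, amount):
        self.charged.append(amount)
        return True

# The module is exercised in isolation, mitigating dependency issues.
module = PaymentModule(FakeGateway())
assert module.pay(100) is True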
Advantages of modularization:

• Smaller components are easier to maintain
• The program can be divided based on functional aspects
• The desired level of abstraction can be brought into the program
• Components with high cohesion can be reused

Design Structure Charts:

• A Structure Chart represents the hierarchical structure of modules.
• It breaks down the entire system into the lowest functional modules and
describes the functions and sub-functions of each module of the system in
greater detail.
• A Structure Chart (SC) in software engineering and organizational
theory is a chart which shows the breakdown of a system to its
lowest manageable levels.
• A Structure Chart partitions the system into black boxes (the
functionality of the system is known to the users, but the inner details
are unknown). Inputs are given to the black boxes and appropriate
outputs are generated.
• Modules at the top level call the modules at the low level. Components
are read from top to bottom and left to right. When a module calls
another, it views the called module as a black box, passing the required
parameters and receiving the results.
• They are used in structured programming to arrange program
modules into a tree. Each module is represented by a box, which
contains the module's name.
• Structure Charts are an example of top-down design, where a
problem (the program) is broken into its components.
• The tree shows the relationships between modules and the data
transferred between them, i.e., the data being passed from module to
module that needs to be processed.
• A structure chart is derived from a Data Flow Diagram. It
represents the system in more detail than a DFD. It breaks down the
entire system into the lowest functional modules and describes the
functions and sub-functions of each module of the system in greater
detail than a DFD.

Symbols used in the construction of a structure chart:

1. Module
It represents a process or task of the system. It is of three types:

a. Control Module
A control module branches to more than one sub-module.
b. Sub Module
A sub-module is a module which is part (a child) of another
module.
c. Library Module
Library modules are reusable and can be invoked from any module.

2. Conditional Call
It represents that a control module can select any of its sub-modules
on the basis of some condition.
3. Loop (Repetitive call of module)
It represents the repetitive execution of a sub-module by the calling
module. A curved arrow represents a loop in the module.

4. Data Flow
It represents the flow of data between the modules. It is
represented by a directed arrow with an empty circle at the end.

5. Control Flow
It represents the flow of control between the modules. It is
represented by a directed arrow with a filled circle at the end.
6. Physical Storage
Physical storage is where all the information is stored.

Pseudo Codes:

1. Pseudocode is a newer tool and has features that make it more
reflective of structured concepts. The drawback is that the
narrative presentation is not as easy to understand and/or follow.
2. Rules for Pseudocode:
1. Write only one statement per line
2. Capitalise the initial keyword
3. Indent to show hierarchy and structures
4. End multi-line structures
5. Keep statements language independent
Remember: you are describing a logic plan to develop a
program; you are not programming!
Examples:
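A minimal illustrative example (not from the original notes) that follows
the rules above — one statement per line, capitalised keywords,
indentation, an explicit end to the multi-line structure, and no
language-specific syntax:

READ num1, num2
IF num1 > num2 THEN
    PRINT num1
ELSE
    PRINT num2
ENDIF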
Advantages:

• Easily modified
• Implements structured concepts
• Done easily on Word Processor
Disadvantages:

• Not visual
• No accepted standard; it varies from company to company

Flow Charts:
• A graphical representation of an algorithm.
• The first design tool to be widely used, but unfortunately flowcharts do
not reflect some of the concepts of structured programming very well.
• A traditional graphical tool with standardized symbols.
Rules for Flowcharts:

• The flowchart should flow from top to bottom
• If the chart becomes complex, utilize connecting blocks
• Avoid intersecting flow lines
• Use meaningful descriptions in the symbols
Example:

Advantages:

• Standardized
• Visual
Disadvantages:

• Hard to Modify
• Structured design elements not implemented
• Special software required

Coupling and Cohesion Measures:

Cohesion is a measure that defines the degree of intra-dependability
within the elements of a module. The greater the cohesion, the better the
program design.
Cohesion: Cohesion is a measure of the degree to which the elements of
a module are functionally related. It is the degree to which all elements
directed towards performing a single task are contained in the
component. Basically, cohesion is the internal glue that keeps the module
together. A good software design will have high cohesion.

There are seven types of cohesion, namely:
• Functional Cohesion: Every essential element for a single
computation is contained in the component. A functionally cohesive
component performs one task or function. It is the ideal situation.
• Sequential Cohesion: An element outputs some data that becomes
the input for another element, i.e., data flows between the parts. It
occurs naturally in functional programming languages.
• Communicational Cohesion: Two elements operate on the same
input data or contribute towards the same output data. Example:
update a record in the database and send it to the printer.
• Procedural Cohesion: Elements of procedural cohesion ensure the
order of execution. Actions are still weakly connected and unlikely to
be reusable. Example: calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
• Temporal Cohesion: The elements are related by the timing
involved. In a module with temporal cohesion, all the tasks must be
executed in the same time span. This cohesion contains the
code for initializing all the parts of the system. Lots of different
activities occur, all at unit time.
• Logical Cohesion: The elements are logically related, not
functionally related. Example: a component reads inputs from tape,
disk, and network. All the code for these functions is in the same
component. The operations are related, but the functions are
significantly different.
• Coincidental Cohesion: The elements are not related (unrelated).
The elements have no conceptual relationship other than their location
in the source code. It is accidental and the worst form of cohesion.
Example: print the next line and reverse the characters of a string in a
single component.
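
To make the extremes concrete, here is a small Python sketch (hypothetical
functions, not from the notes): the first module is functionally cohesive,
the second is coincidentally cohesive, echoing the print/reverse example
above.

# Functional cohesion: every statement serves one computation (a GPA).
def compute_gpa(grades, credits):
    total_points = sum(g * c for g, c in zip(grades, credits))
    return total_points / sum(credits)

# Coincidental cohesion: unrelated tasks grouped only by location in code.
def misc_utilities(line, text):
    print(line)                 # print the next line
    return text[::-1]           # ...and also reverse a string

print(compute_gpa([9, 8], [4, 3]))   # 8.57...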

Coupling is a measure that defines the level of inter-dependability
among the modules of a program.
Coupling: Coupling is the measure of the degree of interdependence
between modules. Good software will have low coupling.

• It tells at what level the modules interfere and interact with each
other. The lower the coupling, the better the program.
• Modules should have low coupling.
• Low coupling minimizes the "ripple effect", where changes in one
module cause errors in other modules.

There are five levels of coupling, namely:

• Content coupling - When a module can directly access, modify, or
refer to the content of another module, it is called content-level
coupling.
• Common coupling - When multiple modules have read and write
access to some global data, it is called common or global coupling.
• Control coupling - Two modules are called control-coupled if one of
them decides the function of the other module or changes its flow of
execution.
• Stamp coupling - When multiple modules share a common data
structure and work on different parts of it, it is called stamp coupling.
• Data coupling - Data coupling is when two modules interact with
each other by means of passing data (as parameters). If a module
passes a data structure as a parameter, then the receiving module
should use all its components. Ideally, no coupling is considered to be
the best.
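
As a small illustration (hypothetical functions, not from the notes), the
difference between control coupling and data coupling in Python:

# Control coupling: the caller passes a flag that decides what the callee does.
def format_report(data, as_html):
    if as_html:
        return "<p>" + ", ".join(data) + "</p>"
    return ", ".join(data)

# Data coupling: the modules only exchange the data they actually need.
def format_plain(data):
    return ", ".join(data)

def format_html(data):
    return "<p>" + ", ".join(data) + "</p>"

print(format_report(["a", "b"], as_html=False))  # control-coupled call
print(format_plain(["a", "b"]))                  # data-coupled call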

Design Strategies:

Function Oriented Design:

• Function Oriented Design is an approach to software design where
the design is decomposed into a set of interacting units, where each
unit has a clearly defined function.
• It relies on identifying functions which transform inputs to outputs.
• It has been practised informally since programming began.
• Thousands of systems have been developed using this approach.
• It is supported directly by most programming languages.
• Most design methods are functional in their approach.
• CASE tools are available for design support.

Techniques used by Function Oriented Design are:

• Data Flow Diagrams (a data flow diagram (DFD) maps out the flow of
information for any process or system)
• Data Dictionaries (data dictionaries are simply repositories to store
information about all data items defined in DFDs; at the requirements
stage, data dictionaries contain data items)
• Structure Charts (components are read from top to bottom and left
to right; when a module calls another, it views the called module as a
black box, passing required parameters and receiving results)
• Pseudo Codes (they use keywords and indentation; pseudo codes are
used as a replacement for flow charts and decrease the amount of
documentation required)
Object Oriented Design:

• Object-oriented design is the process of planning a system of
interacting objects for the purpose of solving a software problem. It is
one approach to software design.
• In the object-oriented design method, the system is viewed as a
collection of objects (i.e., entities).

• The state is distributed among the objects, and each object handles
its own state data. For example, in a Library Automation Software, each
library representative may be a separate object with its data and
functions that operate on these data.
• The tasks defined for one purpose cannot refer to or change the data
of other objects. Objects have their own internal data which represent
their state. Similar objects create a class.
• In the object-oriented approach, the focus is on capturing the
structure and behavior of information systems in small modules that
combine both data and process. The main aim of Object Oriented
Design (OOD) is to improve the quality and productivity of system
analysis and design by making them more usable.
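
A minimal Python sketch of the library example above (hypothetical class
and attribute names): the object keeps its own state and exposes functions
that operate on that data.

class LibraryRepresentative:
    """Each representative is a separate object with its own data."""
    def __init__(self, name):
        self.name = name
        self._issued_books = []      # internal state, owned by this object

    def issue_book(self, title):
        # Only this object's methods may change its state.
        self._issued_books.append(title)

    def issued_count(self):
        return len(self._issued_books)

rep = LibraryRepresentative("Asha")
rep.issue_book("Software Engineering")
print(rep.name, rep.issued_count())   # Asha 1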
Top-Down and Bottom-Up Design:
Top Down Design
• We know that a system is composed of more than one subsystem
and contains a number of components. Further, these subsystems
and components may have their own sets of sub-systems and
components, creating a hierarchical structure in the system.
• Top-down design takes the whole software system as one entity and
then decomposes it to achieve more than one sub-system or
component based on some characteristics. Each sub-system or
component is then treated as a system and decomposed further.
• This process keeps on running until the lowest level of the system in
the top-down hierarchy is achieved.
• They allow us to quickly and efficiently specify, design, synthesize
and verify designs ready for fabrication. The key to these
methodologies is synthesis, which relies on a mapping between the
logical functions we use in a design and the physical circuits that
realize the functions.

Advantages:
• The main advantage of the top-down approach is that its strong focus
on requirements helps to make the design responsive to its
requirements.
Disadvantages:
• Project and system boundaries tend to be application specification-
oriented. Thus it is more likely that the advantages of component reuse
will be missed.
• The system is likely to miss the benefits of a well-structured, simple
architecture.

Bottom-up Design

• The bottom-up design model starts with the most specific and basic
components. It proceeds by composing higher-level components using
these basic or lower-level components.
• It keeps creating higher-level components until the desired system
evolves as one single component. With each higher level, the
amount of abstraction increases.
• It is any design method in which the most primitive operations are
specified first and then combined into progressively larger units
until the whole problem can be solved; it is the converse of TOP-DOWN
DESIGN.
• For example, a communications program might be built by first
writing a routine to fetch a single byte from the communications port
and working up from that.
• Bottom-up strategy is more suitable when a system needs to be
created from some existing system, where the basic primitives can
be used in the newer system.
• Neither the top-down nor the bottom-up approach is practical on its
own; instead, a good combination of both is used.
Top Down vs Bottom Up:

• Top Down: A module cannot be tested in isolation because it invokes
other modules. To allow the modules to be tested before their
subordinates have been coded, stubs simulate the behavior of the
subordinates.
  Bottom Up: Testing starts from the bottom of the hierarchy. First the
modules at the very bottom, which have no subordinates, are tested. Then
these modules are combined with higher-level modules for testing. At any
stage of testing, all the subordinate modules exist and have been tested
earlier.

• Top Down: We start by testing the top of the hierarchy and incrementally
add the modules that it calls, then test the new combined system. This
approach to testing requires stubs to be written. A stub is a dummy
routine that simulates a module.
  Bottom Up: Drivers are needed to set up the appropriate environment and
invoke the module. It is the job of the driver to invoke the module under
test with the different sets of test cases.

• Top Down: Executives set the direction and define the mission.
  Bottom Up: Targets are given by executives.

• Top Down: Long execution time.
  Bottom Up: Short execution time.

• Top Down: More stable and accurate at the aggregate level.
  Bottom Up: Higher accuracy at the granular level.
Software Measurement and Metrics
Software Measurement: A measurement is a manifestation of the size,
quantity, amount, or dimension of a particular attribute of a product or
process. Software measurement is a quantified attribute of a characteristic
of a software product or the software process. It is a discipline within
software engineering. The software measurement process is defined and
governed by an ISO standard.

Software Measurement Principles:

The software measurement process can be characterized by five
activities:
1. Formulation: The derivation of software measures and metrics
appropriate for the representation of the software that is being
considered.
2. Collection: The mechanism used to accumulate data required to
derive the formulated metrics.
3. Analysis: The computation of metrics and the application of
mathematical tools.
4. Interpretation: The evaluation of metrics resulting in insight into the
quality of the representation.
5. Feedback: Recommendation derived from the interpretation of
product metrics transmitted to the software team.

Need for Software Measurement:

Software is measured to:

• Assess the quality of the current product or process.
• Anticipate future qualities of the product or process.
• Enhance the quality of the product or process.
• Regulate the state of the project in relation to budget and schedule.

Classification of Software Measurement:

There are two types of software measurement:

1. Direct Measurement: In direct measurement, the product, process, or
thing is measured directly using a standard scale.
2. Indirect Measurement: In indirect measurement, the quantity or
quality to be measured is measured using related parameters, i.e., by
use of a reference.

Metrics:

A metric is a measurement of the level to which any attribute belongs to a
system, product, or process.
Software metrics will be useful only if they are characterized effectively
and validated so that their worth is proven. There are four functions
related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of Software Metrics:

1. Quantitative: Metrics must be quantitative in nature, meaning
metrics can be expressed in values.
2. Understandable: Metric computation should be easily understood,
and the method of computing metrics should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of the
development of the software.
4. Repeatable: The metric values should be the same when measured
repeatedly and consistent in nature.
5. Economical: The computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any
programming language.

Classification of Software Metrics:

There are three types of software metrics:

1. Product Metrics: Product metrics are used to evaluate the state of
the product, tracing risks and uncovering prospective problem areas.
The ability of the team to control quality is evaluated.
2. Process Metrics: Process metrics pay particular attention to
enhancing the long-term process of the team or organization.
3. Project Metrics: Project metrics describe the project
characteristics and execution process, for example:
• Number of software developers
• Staffing patterns over the life cycle of the software
• Cost and schedule
• Productivity

Advantages of Software Metrics:

1. Reduction in cost or budget.
2. It helps to identify particular areas for improvement.
3. It helps to increase product quality.
4. Managing the workloads and teams.
5. Reduction in overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code
with resources.
7. It helps in providing effective planning, controlling and managing of
the entire product.

Disadvantages of Software Metrics:

1. It is expensive and difficult to implement the metrics in some cases.
2. The performance of the entire team or an individual from the team
can't be determined; only the performance of the product is determined.
3. Sometimes the quality of the product does not meet expectations.
4. It may lead to measuring unwanted data, which is a waste of time.
5. Measuring incorrect data leads to wrong decision-making.

Various Size-Oriented Measures:

These are derived by normalizing quality and productivity measures by
considering the size of the software that has been produced.
If we choose LOC (lines of code) as the normalization value, we can
develop a set of simple size-oriented metrics:

• Errors per KLOC
• Defects per KLOC
• Pages of documentation per KLOC
• Errors per person-month
• LOC per person-month
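
A quick sketch (made-up project numbers, only for illustration) of how
these normalized values are computed in Python:

# Hypothetical project data.
loc = 25000           # lines of code
errors = 120
defects = 40
doc_pages = 300
person_months = 20

kloc = loc / 1000
print("Errors per KLOC:", errors / kloc)                      # 4.8
print("Defects per KLOC:", defects / kloc)                    # 1.6
print("Pages of documentation per KLOC:", doc_pages / kloc)   # 12.0
print("LOC per person-month:", loc / person_months)           # 1250.0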
Function-Oriented Metrics:
• These use a measure of the functionality delivered by the application
as a normalization value.
• Functionality cannot be measured directly; it must be derived
indirectly using other direct measures. One such measure is called a
Function Point.

Functional Point (FP) Analysis

Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM,
and it has been further modified by the International Function Point Users
Group (IFPUG). FPA is used to make an estimate of the software project,
including its testing, in terms of the functionality or function size of the
software product. Functional point analysis may also be used for test
estimation of the product. The functional size of the product is measured in
terms of function points, which are a standard unit of measurement for a
software application.

Objectives of FPA
The basic and primary purpose of the functional point analysis is to measure and
provide the software application functional size to the client, customer, and the
stakeholder on their request. Further, it is used to measure the software project
development along with its maintenance, consistently throughout the project
irrespective of the tools and the technologies.

Following are the points regarding FPs:

1. The FPs of an application are found by counting the number and types of
functions used in the application. The various functions used in an
application can be put under five types, as shown below:

Types of FP Attributes (Measurement Parameters) and Examples:

1. Number of External Inputs (EI) - Input screens and tables
2. Number of External Outputs (EO) - Output screens and reports
3. Number of External Inquiries (EQ) - Prompts and interrupts
4. Number of Internal Files (ILF) - Databases and directories
5. Number of External Interfaces (EIF) - Shared databases and shared routines

All these parameters are then individually assessed for complexity.


2. FP characterizes the complexity of the software system and hence can be used to depict the
project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. The FP method is used for data processing systems and business systems like information systems.

6. The five parameters mentioned above are also known as information domain
characteristics.

7. All the parameters mentioned above are assigned some weights that have
been experimentally determined, as shown in the table below.

Weights of the 5 FP Attributes:

Measurement Parameter                   Low   Average   High

1. Number of external inputs (EI)        3       4       6
2. Number of external outputs (EO)       4       5       7
3. Number of external inquiries (EQ)     3       4       6
4. Number of internal files (ILF)        7      10      15
5. Number of external interfaces (EIF)   5       7      10

The functional complexities are multiplied by the corresponding weights for
each function, and the values are added up to determine the UFP (Unadjusted
Function Point) count of the subsystem.

Here the weighting factor will be low (simple), average, or high (complex)
for a measurement parameter type.

Calculation of Function Point (FP)

A Function Point (FP) is an element of software development which helps to
approximate the cost of development early in the process. It measures
functionality from the user's point of view.
Counting Function Points (FP):
• Step-1: Compute F, the total degree of influence of the 14 general
system characteristics (complexity adjustment factors). Each characteristic
is rated on a scale of 0 to 5, and F is the sum of the 14 ratings (so if
every characteristic gets the same rating, F = 14 * scale). The scale is:
0 - No Influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
• Step-2: Calculate the Complexity Adjustment Factor (CAF).
CAF = 0.65 + ( 0.01 * F )
• Step-3: Calculate the Unadjusted Function Point (UFP) count using the
following weights:
Function Units Low Avg High

EI 3 4 6

EO 4 5 7

EQ 3 4 6

ILF 7 10 15

EIF 5 7 10

Multiply the count of each function unit by the corresponding weight in
the table and sum the results to get the UFP.
• Step-4: Calculate the Function Point.
FP = UFP * CAF
Example:
Given the following values, compute function point when all complexity
adjustment factor (CAF) and weighting factors are average.
User Input = 50
User Output = 40
User Inquiries = 35
User Files = 6
External Interface = 4
Explanation:
• Step-1: As the complexity adjustment factors are all average (given in
the question), scale = 3 for each of the 14 characteristics.
F = 14 * 3 = 42
• Step-2:
CAF = 0.65 + ( 0.01 * 42 ) = 1.07
• Step-3: As the weighting factors are also average (given in the
question), we multiply each count by the corresponding average weight in
the table.
UFP = (50*4) + (40*5) + (35*4) + (6*10) + (4*7) = 628
• Step-4:
Function Point = 628 * 1.07 = 671.96
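
The same calculation, written as a small Python sketch (the counts are the
ones from the example above; the weights are the average values from the
table):

# Average weights for EI, EO, EQ, ILF, EIF (from the table above).
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts = {"EI": 50, "EO": 40, "EQ": 35, "ILF": 6, "EIF": 4}

# Step 1-2: all 14 general system characteristics rated "average" (3).
F = 14 * 3                       # = 42
caf = 0.65 + 0.01 * F            # = 1.07

# Step 3: Unadjusted Function Points.
ufp = sum(counts[k] * AVG_WEIGHTS[k] for k in counts)   # = 628

# Step 4: adjusted Function Points.
fp = ufp * caf
print(ufp, round(fp, 2))         # 628 671.96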

Cyclomatic Complexity
Cyclomatic complexity of a code section is the quantitative measure of
the number of linearly independent paths in it. It is a software metric
used to indicate the complexity of a program. It is computed using the
Control Flow Graph of the program. The nodes in the graph represent the
smallest groups of commands of a program, and a directed edge connects
two nodes if the second command might immediately follow the first
command.
For example, if the source code contains no control flow statement then
its cyclomatic complexity will be 1, since the source code contains a
single path. Similarly, if the source code contains one if condition then
the cyclomatic complexity will be 2, because there will be two paths: one
for true and the other for false.
Mathematically, for a structured program, the control flow graph is a
directed graph in which an edge joins two basic blocks of the program if
control may pass from the first to the second.
So, cyclomatic complexity M is defined as:

M = E - N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
Steps that should be followed in calculating cyclomatic complexity and
designing test cases are:
• Construction of the graph, with nodes and edges, from the code
• Identification of independent paths
• Cyclomatic complexity calculation
• Design of test cases
Consider the following section of code:

    A = 10
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
    Print A
    Print B
    Print C
Control Flow Graph of the above code:

The cyclomatic complexity for the above code is calculated from its control
flow graph. The graph has seven shapes (nodes) and seven lines (edges),
hence the cyclomatic complexity is 7 - 7 + 2 = 2.
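
A tiny sketch of the formula itself, using the counts from the example (the
helper name is hypothetical):

def cyclomatic_complexity(edges, nodes, components=1):
    """M = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# Seven edges, seven nodes, one connected component, as in the example.
print(cyclomatic_complexity(7, 7))   # 2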
Uses of Cyclomatic Complexity:
• Determining the independent path executions, which has proven to be
very helpful for developers and testers.
• It can help make sure that every path has been tested at least once.
• Thus it helps to focus more on the uncovered paths.
• Code coverage can be improved.
• The risk associated with the program can be evaluated.
• Using this metric early in the program helps in reducing risks.
Advantages of Cyclomatic Complexity:
• It can be used as a quality metric; it gives the relative complexity of
various designs.
• It can be computed faster than Halstead's metrics.
• It is used to measure the minimum effort and the best areas of
concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
Disadvantages of Cyclomatic Complexity:
• It is a measure of the program's control complexity and not its data
complexity.
• Nested conditional structures are harder to understand than
non-nested structures, yet this metric does not distinguish between them.
• In the case of simple comparisons and decision structures, it may give
a misleading figure.

Halstead’s Software Science:

Statement:
A computer program is an implementation of an algorithm considered to be
a collection of tokens which can be classified as either operators or
operands.

• All software science metrics can be defined in terms of these basic
symbols. These symbols are called tokens.
• The basic measures are:
o n1 = count of unique operators.
o n2 = count of unique operands.
o N1 = count of total occurrences of operators.
o N2 = count of total occurrences of operands.
o The size of the program can be expressed as N = N1 + N2.

• Program Volume (V)
The unit of measurement of volume is the standard unit for size,
"bits". It is the actual size of a program if a uniform binary encoding
for the vocabulary is used.
V = N * log2(n)

• Program Level (L)
The value of L ranges between zero and one, with L = 1 representing
a program written at the highest possible level (i.e., with minimum
size).
L = V* / V

• Program Difficulty (D)
The difficulty level or error-proneness (D) of the program is
proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)

• Programming Effort (E)
The unit of measurement of E is elementary mental discriminations.
E = V / L = D * V

• Estimated Program Length (N^)
According to Halstead, the first hypothesis of software science is
that the length of a well-structured program is a function only of the
number of unique operators and operands. The actual length is
N = N1 + N2, and the estimated program length is denoted by N^:
N^ = n1 * log2(n1) + n2 * log2(n2)

• Potential Minimum Volume (V*)
The potential minimum volume V* is defined as the volume of the
shortest program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters.

• Size of Vocabulary (n)
The size of the vocabulary of a program, which consists of the
number of unique tokens used to build the program, is defined as:
n = n1 + n2
where n = vocabulary of the program,
n1 = number of unique operators,
n2 = number of unique operands.

• Language Level (lambda)
Shows the level of the programming language used to implement the
algorithm. The same algorithm demands additional effort if it is
written in a low-level programming language. For example, it is easier
to program in Pascal than in Assembler.
lambda = L * V* = L^2 * V (equivalently, V / D^2, since L = 1 / D)
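
A short Python sketch (made-up token counts, only for illustration) that
plugs the counts into the formulas above; it uses the standard relation
L = 1 / D for the program level:

import math

# Hypothetical counts for a small program.
n1, n2 = 10, 7          # unique operators / unique operands
N1, N2 = 30, 22         # total operator / operand occurrences

n = n1 + n2                                        # vocabulary
N = N1 + N2                                        # program length
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
V = N * math.log2(n)                               # program volume (bits)
D = (n1 / 2) * (N2 / n2)                           # difficulty
L = 1 / D                                          # program level
E = D * V                                          # effort

print(round(N_hat, 2), round(V, 2), round(D, 2), round(E, 2))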
Advantages:

• Predicts the error rate
• Predicts maintenance effort
• Simple to calculate
• Measures overall quality
• Can be used for any language

Disadvantages:

• Depends on the complete code
• Complexity increases as the program level decreases
• Difficult to compute
