Unit 3

Software Design

Objectives of Software Design


1. Correctness: A good design should be correct, i.e., it should correctly implement all the functionalities of the system.
2. Efficiency: A good software design should address the resources, time, and cost optimization issues.
3. Flexibility: A good software design should have the ability to adapt and accommodate changes
easily. It includes designing the software in a way, that allows for modifications, enhancements, and
scalability without requiring significant rework or causing major disruptions to the existing
functionality.
4. Understandability: A good design should be easy to understand; it should be modular, with all the modules arranged in layers.
5. Completeness: The design should have all the components like data structures, modules, external
interfaces, etc.
6. Maintainability: A good software design aims to create a system that is easy to understand, modify, and maintain over time. This involves using modular and well-structured design principles (e.g., employing appropriate naming conventions and providing clear documentation). Maintainability in software design also enables developers to fix bugs, enhance features, and adapt the software to changing requirements without excessive effort or introducing new issues.

Software Design Concepts


A concept is a principal idea or notion formed in the mind to understand something.
The software design concept simply means the idea or principle behind the design.
It describes how you plan to solve the problem of designing software, and the logic, or thinking, behind how you will design the software.
It allows the software engineer to create a model of the system, software, or product that is to be developed or built.
The software design concept provides a supporting and essential structure or model for developing the right software.


Software Design Process – Software Engineering


The design phase of software development deals with transforming the customer requirements, as described in the SRS document, into a form implementable using a programming language. The software design process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and views
of a system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination
of the modules makes up the system.
3. Components: This provides a particular function or group of related functions. They are
made up of modules.
4. Interfaces: This is the shared boundary across which the components of a system
exchange information and relate.
5. Data: This is the management of the information and data flow.

Software Design Process

Interface Design
Interface design is the specification of the interaction between a system and its environment. This
phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design the internals of the system are completely ignored and the system is treated
as a black box. Attention is focused on the dialogue between the target system and the users, devices,
and other systems with which it interacts. The design problem statement produced during the
problem analysis step should identify the people, other systems, and devices which are collectively
called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which the system
must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages,
and outgoing events or outputs.
Architectural Design
Architectural design is the specification of the major components of a system, their responsibilities,
properties, interfaces, and the relationships and interactions between them. In architectural design,
the overall structure of the system is chosen, but the internal details of major components are
ignored. Issues in architectural design include:
1. Gross decomposition of the systems into major components.
2. Allocation of functional responsibilities to components.
3. Component Interfaces.
4. Component scaling and performance properties, resource consumption properties,
reliability properties, and so forth.
5. Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the
internals of the major components is ignored until the last phase of the design.
Detailed Design
Detailed design is the specification of the internal elements of all major system components, their
properties, relationships, processing, and often their algorithms and the data structures. The detailed
design may include:
1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of program
elements.
7. Algorithms and data structures.

Design Structure Charts

What is a Structure Chart?


Structure Chart partitions the system into black boxes (functionality of the system is known to the users,
but inner details are unknown).
1. Inputs are given to the black boxes and appropriate outputs are generated.
2. Modules at the top level call the modules at the lower level.
3. Components are read from top to bottom and left to right.
4. When a module calls another, it views the called module as a black box, passing the required
parameters and receiving results.
Symbols in Structured Chart
1. Module
It represents the process or task of the system. It is of three types:
• Control Module: A control module branches to more than one submodule.
• Sub Module: Sub Module is a module which is the part (Child) of another module.
• Library Module: Library modules are reusable and can be invoked from any module.

2. Conditional Call: It represents that a control module can select any of the submodules on the basis of some condition.

3. Loop (Repetitive call of module)


It represents the repetitive execution of a module by the submodule. A curved arrow represents a loop in the module; all the submodules covered by the loop are executed repeatedly.
4. Data Flow
It represents the flow of data between the modules. It is represented by a directed arrow with an empty circle at the end.


5. Control Flow
It represents the flow of control between the modules. It is represented by a directed arrow with a filled
circle at the end.

6. Physical Storage

It is the place where all the information is stored.


Example
Structure chart for an Email server

Types of Structure Chart


1. Transform Centered Structure: This type of structure chart is designed for systems that receive an input which is transformed by a sequence of operations carried out by one module.
2. Transaction Centered Structure: This structure describes a system that processes a number of different types of transactions.

Coupling and Cohesion


Coupling refers to the degree of interdependence between software modules. High coupling means that
modules are closely connected and changes in one module may affect other modules. Low coupling means
that modules are independent, and changes in one module have little impact on other modules.

Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-
defined purpose. High cohesion means that elements are closely related and focused on a single purpose,
while low cohesion means that elements are loosely related and serve multiple purposes.


Both coupling and cohesion are important factors in determining the maintainability, scalability, and
reliability of a software system. High coupling and low cohesion can make a system difficult to change and
test, while low coupling and high cohesion make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the customer
what the system will do. Second is Technical Design which allows the system builders to understand the
actual hardware and software needed to solve a customer’s problem.
Conceptual design of the system:
• Written in simple language i.e. customer understandable language.
• Detailed explanation about system characteristics.
• Describes the functionality of the system.
• It is independent of implementation.
• Linked with requirement document.
Technical Design of the System:
• Hardware component and design.
• Functionality and hierarchy of software components.
• Software architecture
• Network architecture
• Data structure and flow of data.
• I/O component of the system.
• Shows interface.
Modularization is the process of dividing a software system into multiple independent modules where each
module works independently. There are many advantages of Modularization in software engineering. Some
of these are given below:
• Easy to understand the system.
• System maintenance is easy.
• A module can be reused many times as per requirements; there is no need to write it again and again.
Types of Coupling
Coupling is the measure of the degree of interdependence between the modules. A good software will have
low coupling.


Following are the types of coupling (a small C sketch contrasting data and control coupling appears after this list):


• Data Coupling: If the dependency between the modules is based on the fact that they communicate
by passing only data, then the modules are said to be data coupled. In data coupling, the components
are independent of each other and communicate through data. Module communications don’t contain
tramp data. Example-customer billing system.
• Stamp Coupling In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors- this
choice was made by the insightful designer, not a lazy programmer.
• Control Coupling: If the modules communicate by passing control information, then they are said
to be control coupled. It can be bad if parameters indicate completely different behavior and good if
parameters allow factoring and reuse of functionality. Example- sort function that takes comparison
function as an argument.
• External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
• Common Coupling: The modules have shared data such as global data structures. The changes in
global data mean tracing back to all modules which access that data to evaluate the effect of the
change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control data
accesses, and reduced maintainability.
• Content Coupling: In a content coupling, one module can modify the data of another module, or
control flow is passed from one module to the other module. This is the worst form of coupling and
should be avoided.
• Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or order
of events, such as one module needing to execute before another. This type of coupling can result in
design issues and difficulties in testing and maintenance.
• Sequential Coupling: Sequential coupling occurs when the output of one module is used as the
input of another module, creating a chain or sequence of dependencies. This type of coupling can be
difficult to maintain and modify.
• Communicational Coupling: Communicational coupling occurs when two or more modules share
a common communication mechanism, such as a shared message queue or database. This type of
coupling can lead to performance issues and difficulty in debugging.
• Functional Coupling: Functional coupling occurs when two modules depend on each other’s
functionality, such as one module calling a function from another module. This type of coupling can
result in tightly-coupled code that is difficult to modify and maintain.
• Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a
common data structure, such as a database table or data file. This type of coupling can lead to
difficulty in maintaining the integrity of the data structure and can result in performance issues.
• Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking methods
of other classes. Like with functions, the worst form of coupling here is if methods directly access
internal parts of other methods. Coupling is lowest if methods communicate directly through
parameters.
• Component Coupling: Component coupling refers to the interaction between two classes where a
class has variables of the other class. Three clear situations exist as to how this can happen. A class
C can be component coupled with another class C1, if C has an instance variable of type C1, or C
has a method whose parameter is of type C1,or if C has a method which has a local variable of type
C1. It should be clear that whenever there is component coupling, there is likely to be interaction
coupling.
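As an illustration of the first few categories, here is a minimal C sketch (the function names and the interest example are made up for illustration) contrasting data coupling, where a callee receives only the data it needs, with control coupling, where a flag passed by the caller dictates the callee's behavior:

#include <stdio.h>

/* Data coupling: the callee receives only the elementary data it needs. */
double compute_interest(double principal, double rate, int years)
{
    return principal * rate * years;
}

/* Control coupling: the 'mode' flag tells the callee which behavior to use,
   so the caller must know about the callee's internal logic. */
void print_amount(double amount, int mode)
{
    if (mode == 0)
        printf("%.2f\n", amount);              /* plain output */
    else
        printf("Total due: %.2f\n", amount);   /* labelled output */
}

int main(void)
{
    double interest = compute_interest(1000.0, 0.05, 3);   /* data-coupled call */
    print_amount(interest, 1);                              /* control-coupled call */
    return 0;
}

Splitting print_amount into two single-purpose functions would remove the control flag and lower the coupling between caller and callee.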
Types of Cohesion
Cohesion is a measure of the degree to which the elements of the module are functionally related. It is the
degree to which all elements directed towards performing a single task are contained in the component.
Basically, cohesion is the internal glue that keeps the module together. A good software design will have
high cohesion.

Following are the types of cohesion (a small C sketch contrasting functional and coincidental cohesion appears after this list):


• Functional Cohesion: Every essential element for a single computation is contained in the component. A functionally cohesive module performs exactly one task or function. It is the ideal situation.
• Sequential Cohesion: An element outputs some data that becomes the input for other element, i.e.,
data flow between the parts. It occurs naturally in functional programming languages.
• Communicational Cohesion: Two elements operate on the same input data or contribute towards the
same output data. Example- update record in the database and send it to the printer.
• Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions are
still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
• Temporal Cohesion: The elements are related by the timing involved. In a module with temporal cohesion, all the tasks must be executed in the same time span. This cohesion contains the code for initializing all the parts of the system; many different activities occur, all within the same time span.
• Logical Cohesion: The elements are logically related and not functionally. Ex- A component reads
inputs from tape, disk, and network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
• Coincidental Cohesion: The elements are not related (unrelated). The elements have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion. Example: printing the next line and reversing the characters of a string in a single component.
• Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together in a
module based on their sequence of execution, such as a module that performs a set of related procedures
in a specific order. Procedural cohesion can be found in structured programming languages.
• Communicational Cohesion: Communicational cohesion occurs when elements or tasks are grouped
together in a module based on their interactions with each other, such as a module that handles all
interactions with a specific external system or module. This type of cohesion can be found in object-
oriented programming languages.
• Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a
module based on their timing or frequency of execution, such as a module that handles all periodic or
scheduled tasks in a system. Temporal cohesion is commonly used in real-time and embedded systems.
• Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped together
in a module based on their relationship to a specific data structure or object, such as a module that
operates on a specific data type or object. Informational cohesion is commonly used in object-oriented
programming.
• Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module contribute
to a single well-defined function or purpose, and there is little or no coupling between the elements.
Functional cohesion is considered the most desirable type of cohesion as it leads to more maintainable
and reusable code.
• Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together
based on their level of abstraction or responsibility, such as a module that handles only low-level
hardware interactions or a module that handles only high-level business logic. Layer cohesion is
commonly used in large-scale software systems to organize code into manageable layers.
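The difference between the strongest and the weakest kinds of cohesion can be seen in a short C sketch (the function names and tasks are hypothetical): average() is functionally cohesive because every statement serves one computation, while misc_utilities() is coincidentally cohesive because its tasks have nothing in common beyond sharing a routine.

#include <stdio.h>
#include <string.h>

/* Functional cohesion: every statement contributes to one task,
   computing the average of an array. */
double average(const double values[], int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += values[i];
    return count > 0 ? sum / count : 0.0;
}

/* Coincidental cohesion: unrelated actions grouped only because
   they happen to live in the same routine. */
void misc_utilities(char *text, double values[], int count)
{
    /* reverse a string in place ... */
    for (int i = 0, j = (int)strlen(text) - 1; i < j; i++, j--) {
        char tmp = text[i];
        text[i] = text[j];
        text[j] = tmp;
    }
    /* ... and, unrelatedly, print the average of an array */
    printf("average = %.2f\n", average(values, count));
}

int main(void)
{
    char word[] = "design";
    double marks[] = {70.0, 80.0, 90.0};
    misc_utilities(word, marks, 3);
    printf("reversed: %s\n", word);
    return 0;
}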
Advantages of low coupling
• Improved maintainability: Low coupling reduces the impact of changes in one module on other
modules, making it easier to modify or replace individual components without affecting the entire
system.
• Enhanced modularity: Low coupling allows modules to be developed and tested in isolation,
improving the modularity and reusability of code.
• Better scalability: Low coupling facilitates the addition of new modules and the removal of existing
ones, making it easier to scale the system as needed.
Advantages of high cohesion
• Improved readability and understandability: High cohesion results in clear, focused modules with a
single, well-defined purpose, making it easier for developers to understand the code and make changes.
• Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to isolate and fix errors.
• Improved reliability: High cohesion leads to modules that are less prone to errors and that function more consistently, leading to an overall improvement in the reliability of the system.
Disadvantages of high coupling
• Increased complexity: High coupling increases the interdependence between modules, making the
system more complex and difficult to understand.
• Reduced flexibility: High coupling makes it more difficult to modify or replace individual
components without affecting the entire system.
• Decreased modularity: High coupling makes it more difficult to develop and test modules in isolation,
reducing the modularity and reusability of code.
Disadvantages of low cohesion
• Increased code duplication: Low cohesion can lead to the duplication of code, as elements that belong
together are split into separate modules.
• Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain
elements that don’t belong together, reducing their functionality and making them harder to maintain.
• Difficulty in understanding the module: Low cohesion can make it harder for developers to understand
the purpose and behavior of a module, leading to errors and a lack of clarity.

Software Measurement Principles


The software measurement process can be characterized by five activities-
1. Formulation: The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
2. Collection: The mechanism used to accumulate data required to derive the formulated metrics.
3. Analysis: The computation of metrics and the application of mathematical tools.
4. Interpretation: The evaluation of metrics results in insight into the quality of the
representation.
5. Feedback: Recommendation derived from the interpretation of product metrics transmitted to
the software team.
Need for Software Measurement
Software is measured to:
• Determine the quality of the current product or process.
• Anticipate future qualities of the product or process.
• Enhance the quality of a product or process.
• Regulate the state of the project concerning budget and schedule.
• Enable data-driven decision-making in project planning and control.
• Identify bottlenecks and areas for improvement to drive process improvement activities.
• Ensure that industry standards and regulations are followed.
• Give software products and processes a quantitative basis for evaluation.
• Enable the ongoing improvement of software development practices.
Classification of Software Measurement
There are 2 types of software measurement:
1. Direct Measurement: In direct measurement, the product, process, or thing is measured
directly using a standard scale.
2. Indirect Measurement: In indirect measurement, the quantity or quality to be measured is
measured using related parameters i.e. by use of reference.
Software Metrics
A metric is a measurement of the degree to which any attribute belongs to a system, product, or process.
Software metrics are a quantifiable or countable assessment of the attributes of a software product. There
are 4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
Characteristics of software Metrics
1. Quantitative: Metrics must possess a quantitative nature. It means metrics can be expressed in
numerical values.
2. Understandable: Metric computation should be easily understood, and the method of
computing metrics should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of the development of the
software.
4. Repeatable: When measured repeatedly, the metric values should be the same and consistent.
5. Economical: The computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any programming language.
Types of Software Metrics
1. Product Metrics: Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect density, and code maintainability index.
2. Process Metrics: Process metrics pay particular attention to enhancing the long-term process
of the team or organization. These metrics are used to optimize the development process and
maintenance activities of software. Examples include effort variance, schedule variance, defect
injection rate, and lead time.
3. Project Metrics: Project metrics describe the characteristics and execution of a project. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity. They usually measure:
• Number of software developers
• Staffing patterns over the life cycle of software
• Cost and schedule
• Productivity
Advantages of Software Metrics
1. Reduction in cost or budget.
2. It helps to identify the particular area for improvising.
3. It helps to increase the product quality.
4. Managing the workloads and teams.
5. Reduction in overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code with resources.
7. It helps in providing effective planning, controlling and managing of the entire product.

Halstead's Software Science

Program length (N): This is the total number of operator and operand occurrences in the program.
Vocabulary size (n): This is the total number of distinct operators and operands in the program.
Program volume (V): This is the product of program length (N) and the logarithm of vocabulary size (n),
i.e., V = N*log2(n)
Program level (L): This is the ratio of the potential minimum volume (V*) to the actual program volume,
i.e., L = V*/V
where V* is the volume of the most compact program that could solve the problem.
Program difficulty (D): This is the inverse of the program level; it can also be computed directly from the token counts,
i.e., D = (n1/2) * (N2/n2)
where n1 is the number of distinct operators, n2 is the number of distinct operands, and N2 is the total number of operand occurrences.
Program effort (E): This is the product of program volume (V) and program difficulty
(D), i.e., E = V*D
Time to implement (T): This is the estimated time required to implement the program, based on the
program effort (E) and a constant value that depends on the programming language and development
environment.
Halstead’s software metrics can be used to estimate the size, complexity, and effort required to develop and
maintain a software program. However, they have some limitations, such as the assumption that all operators
and operands are equally important, and the assumption that the same set of metrics can be used for different
programming languages and development environments.
Overall, Halstead's software metrics can be a useful tool for software developers and project managers to
estimate the effort required to develop and maintain software programs.
Token Count
n1 = Number of distinct operators.
n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.
Halstead Metrics
Halstead metrics are:
Halstead Program Length:
The total number of operator occurrences and the total number of operand occurrences.
N = N1 + N2
And estimated program length is,
N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program length:
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2(n2) + n2 * log2(n1)
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2(n)) / 2
Halstead Vocabulary:
The total number of unique (distinct) operators and unique operands.
n = n1 + n2
Program Volume: Proportional to program size, represents the size, in bits, of space necessary for storing
the program. This parameter is dependent on specific algorithm implementation. The properties V, N, and
the number of lines in the code are shown to be linearly connected and equally valid for measuring relative
program size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size, "bits". It is the actual size of a program if
a uniform binary encoding for the vocabulary is used. The estimated number of delivered errors is
B = Volume / 3000
Potential Minimum Volume: The potential minimum volume V* is defined as the volume of the most
succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
Program Level: To rank the programming languages, the level of abstraction provided by the programming
language, Program Level (L) is considered. The higher the level of a language, the less effort it takes to
develop a program using that language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program written at the highest
possible level (i.e., with minimum size).
And estimated program level is
L^ = (2 * n2) / (n1 * N2)
Program Difficulty: This parameter shows how difficult to handle the program is.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the program level decreases and the
difficulty increases. Thus, programming practices such as redundant usage of operands, or the failure to
use higher-level control constructs will tend to increase the volume as well as the difficulty.
Programming Effort: Measures the amount of mental activity needed to translate the existing algorithm
into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume
Language Level:
Shows the algorithm implementation program language level. The same algorithm demands additional
effort if it is written in a low-level program language. For example, it is easier to program in Pascal than
in Assembler.
lambda = L * V* = (L^2) * V = V / (D^2)
Intelligence Content: Determines the amount of intelligence presented (stated) in the program. This
parameter provides a measurement of program complexity, independently of the programming language in
which it was implemented.
I=V/D
Programming Time: Shows time (in minutes) needed to translate the existing algorithm into
implementation in the specified program language.
T = E / (f * S)
The concept of the processing rate of the human brain, developed by psychologist John Stroud, is also
used. Stroud defined a moment as the time required by the human brain to carry out the most elementary
decision. The Stroud number S is therefore the number of Stroud moments per second, with
5 <= S <= 20. Halstead uses 18. The value of S has been empirically developed from psychological
reasoning, and its recommended value for programming applications is 18.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
Counting Rules for C Language
1. Comments are not considered.
2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Function calls are considered operators.
7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( ) {…}, all control statements
e.g., if ( ) {…}, if ( ) {…} else {…}, etc. are considered as operators.
8. In control construct switch ( ) {case:…}, switch as well as all the case statements are considered as
operators.
9. The reserved words like return, default, continue, break, sizeof, etc., are considered operators.
10. All the brackets, commas, and terminators are considered operators.
11. GOTO is counted as an operator and the label is counted as an operand.
12. The unary and binary occurrences of “+” and “-” are dealt with separately. Similarly “*”
(multiplication operator) is dealt with separately.
13. In the array variables such as “array-name [index]” “array-name” and “index” are considered as
operands and [ ] is considered as operator.
14. In the structure variables such as "struct-name.member-name" or "struct-name -> member-name", struct-name and member-name are taken as operands, and '.', '->' are taken as operators. Member elements with the same name in different structure variables are counted as unique operands.
15. All the hash directives are ignored.
Example: List the operators and operands and calculate the values of the software science measures for the following function.
int sort (int x[ ], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}
Explanation:
Operators   Occurrences      Operands   Occurrences
int              4           sort            1
()               5           x               7
,                4           n               3
[]               7           i               8
if               2           j               7
<                2           save            3
;               11           im1             3
for              2           2               2
=                6           1               3
-                1           0               1
<=               2
++               2
return           2
{}               3
n1 = 14     N1 = 53          n2 = 10    N2 = 38
Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x: the array holding the integers to be sorted; it is used both as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds
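The figures above can be reproduced, up to rounding, with a short C program. Only the token counts from the table (n1, n2, N1, N2 and n2*) are used as input; the Stroud number S = 18 is the constant quoted earlier.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* token counts taken from the sort() example above */
    double n1 = 14, n2 = 10;     /* distinct operators / operands */
    double N1 = 53, N2 = 38;     /* total operator / operand occurrences */
    double n2_star = 3;          /* unique input and output parameters */

    double N  = N1 + N2;                               /* program length */
    double n  = n1 + n2;                               /* vocabulary */
    double V  = N * log2(n);                           /* volume, in bits */
    double Nh = n1 * log2(n1) + n2 * log2(n2);         /* estimated length N^ */
    double Vs = (2 + n2_star) * log2(2 + n2_star);     /* potential minimum volume V* */
    double L  = Vs / V;                                /* program level */
    double D  = 1.0 / L;                               /* difficulty */
    double Lh = (2.0 * n2) / (n1 * N2);                /* estimated level L^ */
    double E  = V / Lh;                                /* effort, using L^ */
    double T  = E / 18.0;                              /* time in seconds, S = 18 */

    printf("N = %.0f, n = %.0f, V = %.2f, N^ = %.2f, V* = %.2f\n", N, n, V, Nh, Vs);
    printf("L = %.3f, D = %.2f, L^ = %.3f, E = %.0f, T = %.0f seconds\n", L, D, Lh, E, T);
    return 0;
}

Compiled with the math library (e.g., gcc halstead.c -lm), this prints values that agree with the worked example above; the small differences in D and T come from the rounding of L and L^ used in the example.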
Advantages of Halstead Metrics
• It is simple to calculate.
• It measures the overall quality of the programs.
• It predicts the rate of error.
• It predicts maintenance effort.
• It does not require a full analysis of the programming structure.
• It is useful in scheduling and reporting projects.
• It can be used for any programming language.
• Easy to use: The metrics are simple and easy to understand and can be calculated quickly using
automated tools.
• Quantitative measure: The metrics provide a quantitative measure of the complexity and effort
required to develop and maintain a software program, which can be useful for project planning and
estimation.
• Language independent: The metrics can be used for different programming languages and
development environments.
• Standardization: The metrics provide a standardized way to compare and evaluate different software
programs.

Functional Point Analysis

Functional Point Analysis (FPA) gives a dimensionless number, measured in function points, which has been found to be an effective relative measure of the function value delivered to the customer.
Objectives of Functional Point Analysis
1. Encourage Approximation: FPA helps in the estimation of the work, time, and materials needed to
develop a software project. Organizations can plan and manage projects more accurately when a
common measure of functionality is available.
2. To assist with project management: Project managers can monitor and manage software
development projects with the help of FPA. Managers can evaluate productivity, monitor progress, and
make well-informed decisions about resource allocation and project timeframes by measuring the
software’s functional points.
3. Comparative analysis: By enabling benchmarking, it gives businesses the ability to assess how their
software projects measure up to industry standards or best practices in terms of size and complexity.
This can be useful for determining where improvements might be made and for evaluating how well
development procedures are working.
4. Improve Your Cost-Benefit Analysis: It offers a foundation for assessing the value provided by the
program concerning its size and complexity, which helps with cost-benefit analysis. Making educated
judgements about project investments and resource allocations can benefit from having access to this
information.
5. Comply with Business Objectives: It assists in coordinating software development activities with an
organization’s business objectives. It guarantees that software development efforts are directed toward
providing value to end users by concentrating on user-oriented functionality.
Types of Functional Point Analysis
There are two types of Functional Point Analysis:
1. Transactional Functional Type
1. External Input (EI): EI processes data or control information that comes from outside the
application’s boundary. The EI is an elementary process.
2. External Output (EO): EO is an elementary process that generates data or control information sent
outside the application’s boundary.
3. External Inquiries (EQ): EQ is an elementary process made up of an input-output combination that
results in data retrieval.
2. Data Functional Type
1. Internal Logical File (ILF): A user-identifiable group of logically related data or control
information maintained within the boundary of the application.
2. External Interface File (EIF): A user-identifiable group of logically related data that is referenced by the application but maintained within the boundary of another application.

Benefits of Functional Point Analysis


1. Technological Independence: It calculates a software system’s functional size independent of the
underlying technology or programming language used to implement it. As a result, it is a technology-
neutral metric that makes it easier to compare projects created with various technologies.
2. Better Accurate Project Estimation: It helps to improve project estimation accuracy by measuring
user interactions and functional needs. Project managers can improve planning and budgeting by using
the results of the FPA to estimate the time, effort and resources required for development.
3. Improved Interaction: It provides a common language for business analysts, developers, project managers, and other stakeholders to communicate with one another. By communicating the size and complexity of software in a way that both technical and non-technical audiences can easily understand, it helps close the communication gap.
4. Making Well-Informed Decisions: FPA assists in making well-informed decisions at every stage of
the software development life cycle. Based on the functional requirements, organizations can use the
results of the FPA to make decisions about resource allocation, project prioritization, and technology
selection.
5. Early Recognition of Changes in Scope: Early detection of changes in project scope is made easier
with the help of FPA. Better scope change management is made possible by the measurement of
functional requirements, which makes it possible to evaluate additions or changes for their effect on
the project’s overall size.
Characteristics of Functional Point Analysis
We calculate the functional point with the help of the number of functions and types of functions used in
applications. These are classified into five types:

Measurement Parameters Examples


Number of External Inputs (EI) Input screen and tables
Number of External Output (EO) Output screens and reports
Number of external inquiries (EQ) Prompts and interrupts
Number of internal files (ILF) Databases and directories
Number of external interfaces (EIF) Shared databases and shared routines
Function points help in describing system complexity and also indicate project timelines. They are mainly used for business systems like information systems.
Weights of 5 Functional Point Attributes

Measurement Parameter                  Low   Average   High
Number of external inputs (EI)          3       4        6
Number of external outputs (EO)         4       5        7
Number of external inquiries (EQ)       3       4        6
Number of internal files (ILF)          7      10       15
Number of external interfaces (EIF)     5       7       10

The functional complexities help us find the corresponding weights, which yields the Unadjusted Function Point (UFP) count of the subsystem. Consider the complexity as average for all cases. The computation of FP proceeds as shown below.

Measurement Parameter                  Count   Weighing Factor (Average)   Count * Weight
Number of external inputs (EI)           32               4                    128
Number of external outputs (EO)          60               5                    300
Number of external inquiries (EQ)        24               4                     96
Number of internal files (ILF)            8              10                     80
Number of external interfaces (EIF)       2               7                     14
Count-total                                                                     618

From the above table, the Function Point is calculated with the following formula:
FP = Count-total * [0.65 + 0.01 * Σ(fi)]
   = Count-total * CAF
Here, the count-total is taken from the table above, and
CAF = [0.65 + 0.01 * Σ(fi)]
1. Σ(fi) is the sum of the ratings of the 14 general system characteristics, each rated from 0 to 5; it determines the Complexity Adjustment Factor (CAF).
2. CAF varies from 0.65 to 1.35 and Σ(fi) ranges from 0 to 70.
3. When Σ(fi) = 0, CAF = 0.65, and when Σ(fi) = 70, CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35.
A sketch of this computation in C follows.
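The count-total of 618 from the table above can be turned into an adjusted function-point value with a few lines of C. This is only a minimal sketch: the Σ(fi) value of 42 is an assumed example rating, since the fourteen general system characteristic scores are not given in this text.

#include <stdio.h>

int main(void)
{
    /* counts and average weights taken from the table above */
    int counts[5]  = {32, 60, 24, 8, 2};   /* EI, EO, EQ, ILF, EIF */
    int weights[5] = { 4,  5,  4, 10, 7};  /* average complexity weights */

    int count_total = 0;
    for (int i = 0; i < 5; i++)
        count_total += counts[i] * weights[i];       /* gives 618 */

    int sum_fi = 42;                      /* assumed sum of the 14 ratings (range 0..70) */
    double caf = 0.65 + 0.01 * sum_fi;    /* complexity adjustment factor */
    double fp  = count_total * caf;       /* adjusted function points */

    printf("count-total = %d, CAF = %.2f, FP = %.2f\n", count_total, caf, fp);
    return 0;
}

With the assumed Σ(fi) of 42, this prints a count-total of 618, a CAF of 1.07, and an FP value of about 661.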

Cyclomatic Complexity

The cyclomatic complexity of a code section is the quantitative measure of the number of linearly independent
paths in it. It is a software metric used to indicate the complexity of a program. It is computed using the Control
Flow Graph of the program. The nodes in the graph indicate the smallest group of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first command.
For example, if the source code contains no control flow statement then its cyclomatic complexity will be 1,
and the source code contains a single path in it. Similarly, if the source code contains one if condition then
cyclomatic complexity will be 2 because there will be two paths one for true and the other for false.
Mathematically, for a structured program, the control flow graph is a directed graph in which an edge joins two basic blocks of the program if control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,
M = E – N + 2P where E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case when the exit point is directly connected back to the entry point, the graph is strongly connected, and the cyclomatic complexity is defined as
M = E – N + P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case of a single method, P is equal to 1. So, for a single subroutine, the formula can be defined as
M=E–N+2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and designing test cases are:
• Construction of the graph with nodes and edges from the code.
• Identification of independent paths.
• Calculation of the cyclomatic complexity.
• Design of test cases.
Consider the following section of code:
A = 10
IF B > C THEN
A=B
ELSE
A=C
ENDIF
Print A
Print B
Print C
Control Flow Graph of the above code

The cyclomatic complexity for the above code is calculated from the control flow graph. The graph has seven shapes (nodes) and seven lines (edges); hence the cyclomatic complexity is 7 - 7 + 2 = 2.
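As a minimal check of the formula, the counts read from the graph above can be plugged into a small C helper (the node, edge, and component values are hard-coded here; the function name is only illustrative):

#include <stdio.h>

/* M = E - N + 2P, the general cyclomatic complexity formula */
int cyclomatic_complexity(int edges, int nodes, int components)
{
    return edges - nodes + 2 * components;
}

int main(void)
{
    /* 7 edges, 7 nodes, and 1 connected component, as in the graph above */
    printf("M = %d\n", cyclomatic_complexity(7, 7, 1));   /* prints M = 2 */
    return 0;
}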
Use of Cyclomatic Complexity
• Determining the independent path executions has proven to be very helpful for developers and testers.
• It can make sure that every path has been tested at least once.
• It thus helps to focus more on the uncovered paths.
• Code coverage can be improved.
• Risks associated with the program can be evaluated.
• Using this metric early in the development process helps in reducing risks.
Advantages of Cyclomatic Complexity
• It can be used as a quality metric, given the relative complexity of various designs.
• It is able to compute faster than Halstead’s metrics.
• It is used to measure the minimum effort and best areas of concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
Disadvantages of Cyclomatic Complexity
• It is the measure of the program’s control complexity and not the data complexity.
• It treats nested and non-nested conditional structures alike, even though nested structures are harder to understand.
• In the case of simple comparisons and decision structures, it may give a misleading figure.
