Unit 3
Interface Design
Interface design is the specification of the interaction between a system and its environment. This
phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e.,
during interface design the internals of the system are completely ignored and the system is treated
as a black box. Attention is focused on the dialogue between the target system and the users, devices,
and other systems with which it interacts. The design problem statement produced during the
problem analysis step should identify the people, other systems, and devices which are collectively
called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which the system
must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages,
and outgoing events or outputs.
Architectural Design
Architectural design is the specification of the major components of a system, their responsibilities,
properties, interfaces, and the relationships and interactions between them. In architectural design,
the overall structure of the system is chosen, but the internal details of major components are
ignored. Issues in architectural design include:
1. Gross decomposition of the systems into major components.
2. Allocation of functional responsibilities to components.
3. Component Interfaces.
4. Component scaling and performance properties, resource consumption properties,
reliability properties, and so forth.
5. Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the
internals of the major components is ignored until the last phase of the design.
Detailed Design
Detailed design is the specification of the internal elements of all major system components, their
properties, relationships, processing, and often their algorithms and the data structures. The detailed
design may include:
1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of program
elements.
7. Algorithms and data structures.
Cohesion
Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-
defined purpose. High cohesion means that elements are closely related and focused on a single purpose,
while low cohesion means that elements are loosely related and serve multiple purposes.
Both coupling and cohesion are important factors in determining the maintainability, scalability, and
reliability of a software system. High coupling and low cohesion can make a system difficult to change and
test, while low coupling and high cohesion make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the customer
what the system will do. Second is Technical Design which allows the system builders to understand the
actual hardware and software needed to solve a customer’s problem.
Conceptual design of the system:
• Written in simple language i.e. customer understandable language.
• Detailed explanation about system characteristics.
• Describes the functionality of the system.
• It is independent of implementation.
• Linked with requirement document.
Technical Design of the System:
• Hardware component and design.
• Functionality and hierarchy of software components.
• Software architecture
• Network architecture
• Data structure and flow of data.
• I/O component of the system.
• Shows interface.
Modularization is the process of dividing a software system into multiple independent modules where each
module works independently. There are many advantages of Modularization in software engineering. Some
of these are given below:
• Easy to understand the system.
• System maintenance is easy.
• A module can be reused as many times as required; there is no need to write it again and again.
Coupling
Coupling is the measure of the degree of interdependence between the modules. A good software will have
low coupling.
Program length (N): This is the total number of operator and operand occurrences in the program.
Vocabulary size (n): This is the total number of distinct operators and operands in the program.
Program volume (V): This is the product of program length (N) and the logarithm of vocabulary size (n),
i.e., V = N*log2(n)
Program level (L): This is the reciprocal of program difficulty,
i.e., L = 1/D
The value of L ranges between 0 and 1, with higher values indicating simpler, higher-level programs.
Program difficulty (D): This is half the number of distinct operators multiplied by the average number
of times each operand is used,
i.e., D = (n1/2) * (N2/n2)
where n1 is the number of distinct operators, n2 is the number of distinct operands, and N2 is the total
number of operand occurrences.
Program effort (E): This is the product of program volume (V) and program difficulty
(D), i.e., E = V*D
Time to implement (T): This is the estimated time required to implement the program, based on the
program effort (E) and a constant value that depends on the programming language and development
environment.
Halstead’s software metrics can be used to estimate the size, complexity, and effort required to develop and
maintain a software program. However, they have some limitations, such as the assumption that all operators
and operands are equally important, and the assumption that the same set of metrics can be used for different
programming languages and development environments.
Overall, Halstead’s software metrics can be a useful tool for software developers and project managers to
estimate the effort required to develop and maintain software programs.
Token Count
n1 = Number of distinct operators.
n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.
Halstead Metrics
Halstead metrics are:
Halstead Program Length:
The total number of operator occurrences and the total number of operand occurrences.
N = N1 + N2
And estimated program length is,
N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program length:
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2(n2) + n2 * log2(n1)
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2(n)) / 2
Halstead Vocabulary:
The total number of unique operators and unique operands.
n = n1 + n2
Program Volume: Proportional to program size, represents the size, in bits, of space necessary for storing
the program. This parameter is dependent on specific algorithm implementation. The properties V, N, and
the number of lines in the code are shown to be linearly connected and equally valid for measuring relative
program size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size, “bits”; it is the actual size of a program if
a uniform binary encoding for the vocabulary is used. The estimated number of delivered errors is
errors = Volume / 3000
Potential Minimum Volume: The potential minimum volume V* is defined as the volume of the most
succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
Program Level: To rank the programming languages, the level of abstraction provided by the programming
language, Program Level (L) is considered. The higher the level of a language, the less effort it takes to
develop a program using that language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program written at the highest
possible level (i.e., with minimum size).
And estimated program level is
L^ = (2 * n2) / (n1 * N2)
Program Difficulty: This parameter shows how difficult the program is to handle.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the program level decreases and the
difficulty increases. Thus, programming practices such as redundant usage of operands, or the failure to
use higher-level control constructs will tend to increase the volume as well as the difficulty.
Programming Effort: Measures the amount of mental activity needed to translate the existing algorithm
into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume
Language Level:
Shows the level of the programming language used to implement the algorithm. The same algorithm
demands additional effort if it is written in a low-level programming language. For example, it is easier
to program in Pascal than in Assembler.
lambda = L * V* = L^2 * V = V / D^2
Intelligence Content: Determines the amount of intelligence presented (stated) in the program. This
parameter provides a measurement of program complexity independently of the programming language in
which it was implemented.
I = V / D
Programming Time: Shows time (in minutes) needed to translate the existing algorithm into
implementation in the specified program language.
T = E / (f * S)
The concept of the processing rate of the human brain, developed by psychologist John Stroud, is also
used. Stroud defined a moment as the time required by the human brain to carry out the most elementary
decision. The Stroud number S is therefore the number of Stroud moments per second, with 5 <= S <= 20.
The value of S has been empirically developed from psychological reasoning, and its recommended value
for programming applications, used by Halstead, is 18.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
Counting Rules for C Language
1. Comments are not considered.
2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Function calls are considered operators.
7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( ) {…}, all control statements
e.g., if ( ) {…}, if ( ) {…} else {…}, etc. are considered as operators.
8. In control construct switch ( ) {case:…}, switch as well as all the case statements are considered as
operators.
9. The reserved words like return, default, continue, break, sizeof, etc., are considered operators.
10. All the brackets, commas, and terminators are considered operators.
11. GOTO is counted as an operator and the label is counted as an operand.
12. The unary and binary occurrences of “+” and “-” are dealt with separately. Similarly “*”
(multiplication operator) is dealt with separately.
13. In the array variables such as “array-name [index]” “array-name” and “index” are considered as
operands and [ ] is considered as operator.
14. In structure variables such as “struct-name.member-name” or “struct-name -> member-name”,
struct-name and member-name are taken as operands, and ‘.’, ‘->’ are taken as operators. The same
names of member elements in different structure variables are counted as unique operands.
15. All the hash directives are ignored.
Example – List out the operators and operands and also calculate the values of the software science
measures for the following function:
int sort (int x[], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}
Explanation:
Operators     Occurrences     Operands     Occurrences
int                4          sort              1
()                 5          x                 7
,                  4          n                 3
[]                 7          i                 8
if                 2          j                 7
<                  2          save              3
;                 11          im1               3
for                2          2                 2
=                  6          1                 3
-                  1          0                 1
<=                 2          –                 –
++                 2          –                 –
return             2          –                 –
{}                 3          –                 –
n1 = 14       N1 = 53         n2 = 10      N2 = 38
Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x: the array holding the integers to be sorted; it is used both as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds
Advantages of Halstead Metrics
• It is simple to calculate.
• It measures the overall quality of the programs.
• It predicts the rate of error.
• It predicts maintenance effort.
• It does not require a full analysis of the programming structure.
• It is useful in scheduling and reporting projects.
• It can be used for any programming language.
• Easy to use: The metrics are simple and easy to understand and can be calculated quickly using
automated tools.
• Quantitative measure: The metrics provide a quantitative measure of the complexity and effort
required to develop and maintain a software program, which can be useful for project planning and
estimation.
• Language independent: The metrics can be used for different programming languages and
development environments.
• Standardization: The metrics provide a standardized way to compare and evaluate different software
programs.
Functional Point Analysis gives a dimensionless number, expressed in function points, that has been found
to be an effective relative measure of the functional value delivered to the customer.
Objectives of Functional Point Analysis
1. Encourage Approximation: FPA helps in the estimation of the work, time, and materials needed to
develop a software project. Organizations can plan and manage projects more accurately when a
common measure of functionality is available.
2. To assist with project management: Project managers can monitor and manage software
development projects with the help of FPA. Managers can evaluate productivity, monitor progress, and
make well-informed decisions about resource allocation and project timeframes by measuring the
software’s functional points.
3. Comparative analysis: By enabling benchmarking, it gives businesses the ability to assess how their
software projects measure up to industry standards or best practices in terms of size and complexity.
This can be useful for determining where improvements might be made and for evaluating how well
development procedures are working.
4. Improve Your Cost-Benefit Analysis: It offers a foundation for assessing the value provided by the
program concerning its size and complexity, which helps with cost-benefit analysis. Making educated
judgements about project investments and resource allocations can benefit from having access to this
information.
5. Comply with Business Objectives: It assists in coordinating software development activities with an
organization’s business objectives. It guarantees that software development efforts are directed toward
providing value to end users by concentrating on user-oriented functionality.
Types of Functional Point Analysis
There are two types of Functional Point Analysis:
1. Transactional Functional Type
1. External Input (EI): EI processes data or control information that comes from outside the
application’s boundary. The EI is an elementary process.
2. External Output (EO): EO is an elementary process that generates data or control information sent
outside the application’s boundary.
3. External Inquiries (EQ): EQ is an elementary process made up of an input-output combination that
results in data retrieval.
2. Data Functional Type
1. Internal Logical File (ILF): A user-identifiable group of logically related data or control
information maintained within the boundary of the application.
2. External Interface File (EIF): A user-identifiable group of logically related data that is referenced
by the application but maintained within the boundary of another application.
Functional Point Analysis
Functional complexities help us find the corresponding weights, which in turn give the Unadjusted
Function Point (UFP) of the subsystem. Consider the complexity as average for all cases. The computation
of FP is shown below.
Measurement Parameter                  Count   Simple Weight   Average Weight   Count * Average Weight
Number of external inputs (EI)           32         3                4              32*4 = 128
Number of external outputs (EO)          60         4                5              60*5 = 300
Number of external inquiries (EQ)        24         3                4              24*4 = 96
Number of internal files (ILF)            8         7               10              8*10 = 80
Number of external interfaces (EIF)       2         5                7              2*7 = 14
Count total                                                                         618
From the above tables, Functional Point is calculated with the following formula
FP = Count-Total * [0.65 + 0.01 * Σ(fi)]
= Count-Total * CAF
Here, Count-Total is taken from the table above.
CAF = [0.65 + 0.01 * Σ(fi)]
1. Σ(fi) = the sum of the ratings of all 14 general system questions; it determines the complexity
adjustment factor, CAF.
2. CAF varies from 0.65 to 1.35 and Σ(fi) ranges from 0 to 70.
3. When Σ(fi) = 0, CAF = 0.65, and when Σ(fi) = 70, CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35.
Cyclomatic Complexity
The cyclomatic complexity of a code section is the quantitative measure of the number of linearly independent
paths in it. It is a software metric used to indicate the complexity of a program. It is computed using the Control
Flow Graph of the program: the nodes in the graph represent the smallest groups of commands in the program,
and a directed edge connects two nodes if the second command can immediately follow the first.
For example, if the source code contains no control flow statement then its cyclomatic complexity will be 1,
and the source code contains a single path in it. Similarly, if the source code contains one if condition then
cyclomatic complexity will be 2 because there will be two paths one for true and the other for false.
Mathematically, for a structured program, the control flow graph is a directed graph in which an edge joins
two basic blocks of the program if control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,
M = E – N + 2P where E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case when the exit point is directly connected back to the entry point, the graph is strongly connected,
and the cyclomatic complexity is defined as
M = E – N + P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case of a single method, P is equal to 1. So, for a single subroutine, the formula can be defined as
M = E – N + 2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and test cases design are:
• Construction of the graph with nodes and edges from the code.
• Identification of independent paths.
• Cyclomatic Complexity Calculation
• Design of Test Cases
Consider a section of code such as:
A = 10
IF B > C THEN
A=B
ELSE
A=C
ENDIF
Print A
Print B
Print C
Control flow graph of the above code:
The cyclomatic complexity for the above code is calculated from its control flow graph. The graph has seven
nodes and seven edges, hence the cyclomatic complexity is 7 – 7 + 2 = 2.
Use of Cyclomatic Complexity
• Determining the independent path executions has proven to be very helpful for developers and
testers.
• It can make sure that every path has been tested at least once.
• It thus helps to focus more on the uncovered paths.
• Code coverage can be improved.
• Risks associated with the program can be evaluated.
• Using these metrics early in the program helps in reducing the risks.
Advantages of Cyclomatic Complexity
• It can be used as a quality metric, given the relative complexity of various designs.
• It can be computed faster than Halstead’s metrics.
• It is used to measure the minimum effort and best areas of concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
Disadvantages of Cyclomatic Complexity
• It measures only the program’s control complexity, not its data complexity.
• It treats nested and non-nested conditional structures the same, even though nested structures are
harder to understand.
• In the case of simple comparisons and decision structures, it may give a misleading figure.