BLOCK 3
Functionality and
Design Issues
UNIT 1 INTRODUCTION TO LAYER
FUNCTIONALITY AND DESIGN
ISSUES
Structure
1.0 Introduction
1.1 Objectives
1.2 Connection Oriented vs. Connection-less Services
1.2.1 Connection-oriented Services
1.2.2 Connection-less Services
1.3 Implementation of the Network Layer Services
1.3.1 Packet Switching
1.3.2 Implementation of Connection-oriented Services
1.3.3 Implementation of Connection-less Services
1.4 Comparison between Virtual Circuit and Datagram Subnet
1.5 Addressing
1.5.1 Hierarchical Versus Flat Address
1.5.2 Static vs. Dynamic Address
1.5.3 IP Address
1.6 Concept of Congestion
1.7 Routing Concept
1.7.1 Main Issues in Routing
1.7.2 Classification of Routing Algorithm
1.8 Summary
1.9 Solutions/Answers
1.10 Further Readings
1.0 INTRODUCTION
In the previous blocks of this course, we have learned the basic functions of physical
layer and data link layer in networking. Now, in this chapter, we will go through the
functions of the network layer.
The network layer is at level three in the OSI model. It responds to service requests from
the transport layer and issues service requests to the data link layer. It is responsible
for end-to-end (source to destination) packet delivery, whereas the data link layer is
responsible for node-to-node (hop-to-hop) packet delivery. Three important functions
of the network layer are:
• Path Determination: It determines the route taken by the packets from the
source to the destination.
• Forwarding: It forwards packets from the router’s input to the appropriate
router output.
• Call Setup: Some network architectures require router call setup along the path
before the data flows.
To perform these functions, the network layer must be aware of the topology of the
communication subnet (i.e., the set of routers and communication lines).
For end-to-end delivery, the network provides two types of services, i.e., connection-oriented
service and connection-less service, to the transport layer. The network
layer services were designed with the following goals [Ref. 1]:
• Transport layer should not be aware of the topology of the network (subnet).
• Services should be independent of the router technology.
Network Layer
In this unit, we will first go through the basic concepts of these services and will then
differentiate between these two. Then, we will introduce some other concepts like
routing and congestion.
1.1 OBJECTIVES

1.2 CONNECTION ORIENTED vs. CONNECTION-LESS SERVICES

1.2.1 Connection-oriented Services
A connection-oriented service has the following characteristics:
i) The network guarantees that all packets will be delivered in order, without loss
or duplication of data.
ii) Only a single path is established for the call, and all the data follows that path.
iii) The network guarantees a minimal amount of bandwidth, and this bandwidth is
reserved for the duration of the call.
iv) If the network is over-utilised, future call requests are refused.
Connection-oriented transmission has three stages: connection establishment, data
transfer, and connection release.
The Internet Protocol (IP) and the User Datagram Protocol (UDP) are connectionless
protocols, whereas TCP (the transport protocol most commonly used over IP) is
connection-oriented.
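The three stages can be seen directly in a TCP socket exchange. The following is a minimal sketch on the loopback interface; the echo behaviour and the use of an OS-assigned port are illustrative choices, not part of the text above:

```python
import socket
import threading

# A toy echo server and client, illustrating the three stages of
# connection-oriented (TCP) transmission:
# 1) connection establishment, 2) data transfer, 3) connection release.

def server(listener):
    conn, _ = listener.accept()   # stage 1: accept the incoming connection
    data = conn.recv(1024)        # stage 2: receive data ...
    conn.sendall(data.upper())    # ... and echo it back in upper case
    conn.close()                  # stage 3: release the connection

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())  # stage 1: establish the connection
client.sendall(b"hello")                # stage 2: transfer data
reply = client.recv(1024)
client.close()                          # stage 3: release the connection
listener.close()
print(reply)  # b'HELLO'
```

UDP, by contrast, would skip stages 1 and 3 entirely: each datagram is sent with no prior setup.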
1.3 IMPLEMENTATION OF THE NETWORK LAYER SERVICES

In this section, we will examine how the network layer services are implemented.
Two different schemes are used, depending on the type of service being offered.
These two schemes are known as the virtual circuit subnet (VC subnet) for
connection-oriented service and the datagram subnet for connection-less service. A
VC subnet may be compared to the physical circuit required in a telephone setup. In a
connection-oriented service, a route from the source to the destination must be
established in advance. In a datagram subnet, no advance setup is needed; packets
are routed independently. But, before we take up the implementation issues, let us
revisit the packet switching concepts once again. The services are implemented
through a packet-switched network.

1.3.1 Packet Switching
Figure 1: A packet-switched network (source and destination machines connected through routers R1 to R7)
This subnet works in the following manner. Whenever a user wants to send a packet
to another user, s/he transmits the packet to the nearest router, either on its own LAN
or over a point-to-point link to the carrier. The packet is stored for verification and
then transmitted further to the next router along the way until it reaches the final
destination machine. This mechanism is called packet switching.
But, why packet switching? Why not circuit switching? Now, let us discuss these
issues.
Circuit switching was not designed for packet transmission. It was primarily designed
for voice communication. It creates temporary (dialled) or permanent (leased)
dedicated links that are well suited to that type of communication [Ref. 2]. Circuit
switching is less suitable for data transmission for the following reasons:
(i) Bursty Traffic: Data transmission tends to be bursty, which means that packets
arrive in spurts with gaps in between. In such cases, transmission lines will be
mostly idle, leading to wastage of resources if we use circuit switching for data
transmission.
(ii) Single Data Rate: In the circuit switching mechanism there is a single data rate
for the two end devices, which limits the flexibility and usefulness of a
circuit-switched connection for the interconnection of a variety of digital devices.
1.3.2 Implementation of Connection-oriented Services
To implement connection-oriented services, we need to form a virtual-circuit
subnet. The idea behind the creation of a VC is to avoid having to choose a new route
for every packet sent.
In virtual circuits:
• First, a connection needs to be established.
• After establishing a connection, a route from the source machine to the
destination machine is chosen as part of the connection setup and stored in
tables inside the routers. This route is used for all traffic flowing over the
connection.
• After transmitting all the data packets, the connection is released. When the
connection is released, the virtual circuit is also terminated.
In a connection-oriented service, each packet carries an identifier that identifies the
virtual circuit it belongs to.
Now, let us take an example. Consider the situation of the subnet in Figure 2. In this
figure, H1, H2 and H3 represent host machines and R1, R2, R3, R4, R5 and R6
represent routers. Processes are running on the different hosts.
Here, host H1 has established connection 1 with host H2. It is remembered as the first
entry in each of the routing tables, as shown in Table 1. The first line of R1’s table
says that, if a packet bearing connection identifier 1 comes in from H1, it is to be sent
to router R3 and given connection identifier 1. Similarly, the first entry at R3 routes
the packet to R5, also with connection identifier 1.
Now, let us consider a situation in which H3 also wants to establish a connection
with H2. It chooses connection identifier 1 (because it is initiating the connection and
this is its only connection) and asks the subnet to set up the virtual circuit. This
leads to the second row in the table. Note that we have a conflict here because,
although R1 can easily distinguish connection 1 packets from H1 and connection 1
packets from H3, R3 cannot do this. For this reason, R1 assigns a different
connection identifier to the outgoing traffic for the second connection (No. 2). In
order to avoid conflicts of this nature, routers must have the ability
to replace connection identifiers in outgoing packets. In some contexts, this is called
label switching.
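The relabelling described above can be sketched with per-router tables that map an (incoming line, incoming identifier) pair to an (outgoing line, outgoing identifier) pair. The table contents below follow the example of Figure 2 and Table 1; the function name is our own:

```python
# Each router's table: (incoming line, incoming connection id)
#   -> (outgoing line, outgoing connection id).
tables = {
    "R1": {("H1", 1): ("R3", 1), ("H3", 1): ("R3", 2)},  # R1 relabels H3's id
    "R3": {("R1", 1): ("R5", 1), ("R1", 2): ("R5", 2)},
    "R5": {("R3", 1): ("R6", 1), ("R3", 2): ("R6", 2)},
}

def forward(router, in_line, in_id):
    """Look up where a packet goes and which identifier it carries next."""
    return tables[router][(in_line, in_id)]

# Two different hosts both chose connection id 1, yet R3 never sees a
# clash, because R1 rewrites H3's traffic to id 2 on the way out.
print(forward("R1", "H1", 1))  # ('R3', 1)
print(forward("R1", "H3", 1))  # ('R3', 2)
print(forward("R3", "R1", 2))  # ('R5', 2)
```

Note that the lookup key includes the incoming line: the same identifier may legitimately appear on different input lines of one router.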
Figure 2: A virtual-circuit subnet (host machines H1, H2 and H3 and routers R1 to R6)
Table 1: Routing tables at R3 and R5

R3’s Table
in          out
R1   1      R5   1
R1   2      R5   2

R5’s Table
in          out
R3   1      R6   1
R3   2      R6   2
1.3.3 Implementation of Connection-less Services
In this section, we shall discuss how connection-less services are implemented in
real networks. To implement connection-less services, we need a datagram subnet.
In these services, packets are injected into the subnet individually, and each packet
is routed independently of the others. Therefore, in connection-less services, no
advance setup is needed. In this context, the packets are frequently called datagrams
and the subnet is called a datagram subnet.
Now, let us take an example to learn how a datagram subnet works. Consider the
situation of Figure 3. In this figure, H1 and H2 represent host machines and R1, R2,
R3, R4, R5 and R6 represent routers. Suppose that a process running at host H1
has a long message to be transmitted to a process running at host H2. To do so, it
hands the message to the transport layer with appropriate instructions to deliver it
to the process running at H2. Where is the transport layer process running, can you
figure out? Well, it may also be running on H1, but within the operating system. The
transport layer process adds a transport header to the front of the message and
transfers the message (also called a TPDU) to the network layer. The network layer,
too, might be running as another procedure within the operating system.
Let us assume that the message is five times longer than the maximum packet size.
The network layer therefore has to break it into five packets, 1, 2, 3, 4 and 5, and send
each of them in turn to router R1 (because H1 is linked to R1) using some point-to-point
protocol. After this, the carrier (operated by the ISP) takes over. Every router has
an internal table telling it where to send packets for each possible destination. Each
table entry is a pair consisting of a destination and the outgoing line to use for that
destination. Only directly-connected lines can be used. For example, in Figure 3, R1
has only two outgoing lines, to R2 and R3, so every incoming packet must be sent to
one of these routers.
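A datagram router's table lookup can be sketched as a simple dictionary keyed by destination. The table contents below are illustrative assumptions, chosen to be consistent with the route R1-R3-R5-R6 described in the text:

```python
# Per-router forwarding tables for the datagram subnet: each entry maps a
# destination to the outgoing line (next router, or the host itself).
tables = {
    "R1": {"H2": "R3"},
    "R3": {"H2": "R5"},
    "R5": {"H2": "R6"},
    "R6": {"H2": "H2"},   # H2 is directly attached to R6
}

def route(packet_dest, start):
    """Follow the table lookups hop by hop and return the path taken."""
    hops = [start]
    node = start
    while node != packet_dest:
        node = tables[node][packet_dest]  # independent lookup at each router
        hops.append(node)
    return hops

print(route("H2", "R1"))  # ['R1', 'R3', 'R5', 'R6', 'H2']
```

Since each packet is looked up independently, a change in any router's table mid-stream would send later packets of the same message along a different path.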
As the packets arrive at R1 from H1, packets 1, 2, and 3 are stored briefly (to verify
their checksums). Then, each packet is forwarded to R3 according to R1’s table
(table not shown here). Packet 1 is then forwarded to R5 and from R5 to R6. When
it gets to R6, it is encapsulated in a data link layer frame and sent to H2. Packets 2
and 3 follow the same route.
The algorithm that manages the tables and makes the routing decisions is known as
the routing algorithm. In the next unit, we shall study routing algorithms. Students are
requested to refer to [Ref. 1] for further study on the implementation of connection-oriented
and connection-less services. You should focus on the construction of routing tables.
Figure 3: A datagram subnet (host machines H1 and H2 and routers R1 to R6)
1.4 COMPARISON BETWEEN VIRTUAL CIRCUIT AND DATAGRAM SUBNET

Both virtual circuits and datagrams have their pros and cons. We shall compare them
on the basis of the following parameters:
• Packet header overhead
Virtual circuits allow packets to contain circuit numbers instead of full destination
addresses. A full destination address in every packet may represent a significant
amount of overhead, and hence waste bandwidth.
• Setup and lookup time
Using virtual circuits requires a setup phase, which takes time and consumes memory
resources. However, figuring out what to do with a data packet in a virtual-circuit
subnet is easy: the router simply uses the circuit number to index into a table to find
out where the packet goes. In a datagram subnet, a more complicated lookup
procedure is required to locate the entry for the destination.
• Table space
A datagram subnet needs to have an entry for every possible destination, whereas a
virtual-circuit subnet just needs an entry for each virtual circuit.
• Quality of service
Virtual circuits have some advantages in guaranteeing quality of service and avoiding
congestion within the subnet because resources (e.g., buffers, bandwidth, and CPU
cycles) can be reserved in advance, when the connection is established. Once the
packets start arriving, the necessary bandwidth and router capacity will be there. With
a datagram subnet, congestion avoidance is more difficult.
• Vulnerability
Virtual circuits also have a vulnerability problem. If a router crashes and loses its
memory, even if it comes back a second later, all the virtual circuits passing through
it will have to be aborted. In contrast, if a datagram router goes down, only those
users whose packets were queued in the router at the time will suffer, and maybe not
even all of those, depending upon whether the packets have already been acknowledged.
The loss of a communication line is fatal to the virtual circuits using it, but can easily
be compensated for if datagrams are used.
• Traffic balance
Datagrams also allow the routers to balance the traffic throughout the subnet, since
routes can be changed partway through a long sequence of packet transmissions.
A brief comparison between a virtual circuit subnet and a datagram subnet is given
in Table 2. Students should refer to Reference 1 for further discussion.
Table 2: Comparison between Virtual Circuit and Datagram Subnets (Source: Ref. [1])
1.5 ADDRESSING
End systems generally have only one physical network connection and thus, have
only one data-link address. Routers and other internetworking devices typically have
multiple physical network connections and therefore, have multiple data-link
addresses.
Media Access Control (MAC) addresses are used to identify network entities in
LANs that implement the IEEE MAC addresses of the data link layer. These
addresses are 48 bits in length and are expressed as 12 hexadecimal digits.
MAC addresses are unique for each LAN interface. These addresses consist of a
subset of data link layer addresses. Figure 4 illustrates the relationship between MAC
addresses, data-link addresses, and the IEEE sub-layers of the data link layer.
Network addresses are sometimes called virtual or logical addresses. These addresses
are used to identify an entity at the network layer of the OSI model. Network
addresses are usually hierarchical addresses.
1.5.1 Hierarchical Versus Flat Address
A flat address space is organised into a single group; your enrolment number is an
example. Hierarchical addressing offers certain advantages over flat-addressing
schemes. In hierarchical addressing, sorting and recalling addresses is simplified
using comparison operations. For example, “India” in a street address eliminates any
other country as a possible location. Figure 5 illustrates the difference between
hierarchical and flat address spaces.
1.5.2 Static vs. Dynamic Address
In networking, the address of a device can be assigned in either of these two ways:
(i) Static addresses: Static addresses are assigned by an administrator and remain
fixed until they are changed manually.
(ii) Dynamic addresses: Dynamic addresses are obtained by devices when they are
attached to a network, by means of some protocol-specific process. A device
using a dynamic address often has a different address each time it connects to
the network.
1.5.3 IP Address
An IP address is a unique address, i.e., no two machines on the Internet can have the
same IP address. It encodes its network number and host number. Every host and
router in an internetwork has an IP address.
The format of an IP address is a 32-bit numeric address written as four numbers
separated by periods. Each number can be zero to 255. For example, 1.160.10.240
could be an IP address. These numbers define the following fields:
(i) Class type: Indicates the class to which the IP address belongs.
(ii) Network number: Identifies the network to which the host is attached.
(iii) Host number: Identifies the host within that network.
Figure 6: IP address
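A small sketch of parsing a dotted-decimal address into its 32-bit value and determining its classful type. The class boundaries follow the standard classful scheme (class A first octet 0-127, B 128-191, C 192-223, D 224-239, E 240-255); the helper names are our own:

```python
def ip_to_int(addr):
    """Pack a dotted-decimal IPv4 address into its 32-bit integer form."""
    parts = [int(p) for p in addr.split(".")]
    assert len(parts) == 4 and all(0 <= p <= 255 for p in parts)
    value = 0
    for p in parts:
        value = (value << 8) | p  # shift in one octet at a time
    return value

def address_class(addr):
    """Classful type of an IPv4 address, decided by the first octet."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"
    elif first < 192:
        return "B"
    elif first < 224:
        return "C"
    elif first < 240:
        return "D"
    else:
        return "E"

print(address_class("1.160.10.240"))  # A
```

In a class A address, the remaining 24 bits after the network number identify the host; in classes B and C, 16 and 8 bits respectively.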
1.6 CONCEPT OF CONGESTION

In the network layer, when the number of packets sent to the network is greater than
the number of packets the network can handle (the capacity of the network), a
problem occurs that is known as congestion. This is just like congestion on a road due
to heavy traffic. In networking, congestion occurs on shared networks when multiple
users contend for access to the same resources (bandwidth, buffers, and queues).
When the number of packets sent into the network is within limits, almost all
packets are delivered; however, when the traffic load increases beyond the network
capacity, the system starts discarding packets.
Figure 7: Congestion
Because routers receive packets faster than they can forward them, one of these two
things may happen in case of congestion:
• The subnet may prevent additional packets from entering the congested region
until those already present can be processed, or
• The congested routers can discard queued packets to make room for those that
are arriving currently.
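The second behaviour, discarding packets when the buffers are full, can be sketched as a toy drop-tail queue. The `Router` class and the buffer size are hypothetical, used only to show the mechanism:

```python
from collections import deque

class Router:
    """A router with a finite buffer that drops arrivals when full."""
    def __init__(self, buffer_size):
        self.queue = deque()
        self.buffer_size = buffer_size
        self.dropped = 0

    def receive(self, packet):
        if len(self.queue) >= self.buffer_size:
            self.dropped += 1          # congestion: discard the new arrival
        else:
            self.queue.append(packet)

    def forward_one(self):
        # Forward the oldest queued packet, if any.
        return self.queue.popleft() if self.queue else None

r = Router(buffer_size=3)
for p in range(10):   # a burst of 10 packets with no forwarding in between
    r.receive(p)
print(len(r.queue), r.dropped)  # 3 7
```

Real routers use more sophisticated policies (e.g., random early discard), but the principle is the same: once arrivals outpace forwarding, something must be dropped.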
Congestion Control
Congestion control refers to the network mechanisms and techniques used to control
congestion and keep the load below the network’s capacity.
Caching, i.e., storing content closer to users, can be a very effective congestion
control scheme. In this manner, the majority of the traffic can be obtained locally
rather than being fetched from distant servers along routed paths that may experience
congestion.
Some basic techniques to manage congestion are based on some form of resource
allocation. Such techniques are difficult to implement, but can eliminate network
congestion by blocking traffic that is in excess of the network capacity.
1.7 ROUTING CONCEPT

Suppose you need to go from one location (A) to another location (B) in your city,
and more than one route is available for going from A to B. In this case, you first
decide the best route. This decision may be based on a number of factors such as
distance (route with minimum distance), time (route with minimum traffic jam),
cost, etc. After deciding on the best route, you start moving along it. The same
principle is applied while transferring data packets in a packet-switched network,
and this is known as routing. Thus, routing is the act of moving data packets in a
packet-switched network, from a source to a destination. Along the way, several
intermediate nodes are typically encountered.
Routing occurs at Layer 3 (the network layer) of the OSI reference model. It involves
two basic activities: determining optimal routing paths, and forwarding packets along
those paths.
1.7.1 Main Issues in Routing
There are two main performance measures that are substantially affected by the
routing algorithm: throughput (quantity of service) and latency (average packet
delay, when quality of service is required). The parameter throughput refers to the
number of packets delivered in the subnet. Routing interacts with flow control in
determining these performance measures by means of a feedback mechanism shown
in Figure 8. When the traffic load offered by the external sources to the subnet is
within the limits of the carrying capacity of the subnet, it will be fully accepted into
the network, that is,

throughput = offered load
Figure 8: Interaction of routing and flow control
But, when the offered load exceeds this limit, part of it will be rejected by the flow
control algorithm, and

throughput = offered load − rejected load

The traffic accepted into the network will experience an average delay per packet that
will depend on the routes chosen by the routing algorithm.
However, throughput will also be greatly affected (if only indirectly) by the routing
algorithm, because typical flow control schemes operate on the basis of striking a
balance between throughput and delay. Therefore, as the routing algorithm becomes
more successful in keeping delay low, the flow control algorithm allows more traffic
into the network. While the precise balance between delay and throughput will be
determined by flow control, the effect of good routing under high offered load
conditions is to realise a more favourable delay-throughput curve along which flow
control operates, as shown in Figure 9.
Figure 9: Delay-throughput curves for poor and good routing (Source: Ref. [2])
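The shape of the curves in Figure 9 can be illustrated numerically. Purely as an assumption for illustration, model each path as a single queue with average delay T = 1/(C − load), and model good routing as a higher effective capacity C; both capacity values below are hypothetical:

```python
def avg_delay(load, capacity):
    """Average delay of a single-server queue, T = 1 / (C - load).

    This M/M/1-style formula is an illustrative assumption, not a result
    taken from the text; it only serves to show how delay explodes as the
    load approaches capacity.
    """
    assert load < capacity, "load must stay below capacity"
    return 1.0 / (capacity - load)

good, poor = 10.0, 7.0   # hypothetical effective capacities of good/poor routing
for load in (2.0, 5.0, 6.5):
    print(load, avg_delay(load, good), avg_delay(load, poor))
```

At light load the two routings differ little, but near the poor routing's capacity the delay gap widens dramatically, which is exactly the divergence of the two curves in Figure 9.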
Let us take an example to understand the intricacy. In the network of Figure 10, all
links have a capacity of 10 units. There is a single destination (R6) and two origins
(R1 and R2). The offered load from each of R1 and R2 to R6 is 5 units.
Here, the offered load is light and can easily be accommodated with a short delay by
routing along the leftmost and rightmost paths, R1-R3-R6 and R2-R5-R6,
respectively. If, instead, the routes R1-R4-R6 and R2-R4-R6 are used, the flow on
link (R4, R6) equals its capacity, resulting in very large delays.
Figure 10: Example of a sub-network
Observe Figure 10 once again. All links have a capacity of 10 units. If all traffic is
routed through the middle link (R4, R6), congestion occurs. If, instead, the paths
(R1-R3-R6) and (R2-R5-R6) are used, the average delay is shorter.
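The arithmetic of this example can be checked directly:

```python
# Every link has a capacity of 10 units; each origin offers 5 units
# towards the single destination R6 (values taken from the example above).
capacity = 10
offered = {"R1": 5, "R2": 5}

# Routing choice 1: both flows share the middle link (R4, R6).
middle_link_load = offered["R1"] + offered["R2"]
print(middle_link_load)               # 10: the link is fully loaded

# Routing choice 2: disjoint paths R1-R3-R6 and R2-R5-R6.
per_link_load = max(offered.values())
print(per_link_load / capacity)       # 0.5: each link only half utilised
```

With the shared middle link the utilisation reaches 100%, which is precisely the regime where queueing delay grows without bound; the disjoint paths keep every link at 50%.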
In conclusion, the effect of good routing is to increase throughput for the same value
of average delay per packet under high offered load conditions, and to decrease
average delay per packet under low and moderate offered load conditions. It is
evident that a good routing algorithm should keep the average delay as low as
possible for any given level of offered load, although this is easier said than done
analytically. Students are requested to refer to [Ref. 2] for further discussion.
1.8 SUMMARY
In this unit, we looked at the two types of end-to-end delivery services in computer
networks, i.e., connection-oriented service and connection-less service. Connection-oriented
service is a reliable network service, while connection-less service is an
unreliable network service. Then we studied the concept of addressing. A network
address identifies a device separately or as a member of a group. Internetwork
addresses can be categorised into three types, i.e., data link layer addresses, Media
Access Control (MAC) addresses and network layer addresses. After this, we studied a
problem that occurs at the network layer level, i.e., congestion. It is a problem that
occurs due to overload on the network. Then, we discussed routing. It is the act of
moving data packets in a packet-switched network, from a source to a destination. We
also examined the relationship between routing and flow control through an example
and diagrams.
1.9 SOLUTIONS/ANSWERS
Check Your Progress 1
1) a
2) b
3) Connection-oriented service is sometimes called a “reliable” network
service because:
• It guarantees that data will arrive in the proper sequence.
• A single connection for the entire message facilitates the acknowledgement
process and the retransmission of damaged and lost frames.
Check Your Progress 3
1) In the network layer, when the number of packets sent to the network is greater
than the number of packets the network can handle (the capacity of the network),
a problem occurs that is known as congestion.
4) In adaptive routing, routing decisions are taken for each packet separately, i.e.,
for packets belonging to the same destination, the router may select a new
route for each packet.
In non-adaptive routing, routing decisions are not taken again and again, i.e.,
once the router decides on a route to a destination, it sends all packets for that
destination along the same route.
UNIT 2 ROUTING ALGORITHMS
Structure
2.0 Introduction
2.1 Objectives
2.2 Flooding
2.3 Shortest Path Routing Algorithm
2.4 Distance Vector Routing
2.4.1 Comparison
2.4.2 The Count-to-Infinity Problem
2.5 Link State Routing
2.6 Hierarchical Routing
2.7 Broadcast Routing
2.8 Multicast Routing
2.9 Summary
2.10 Solutions/Answers
2.11 Further Readings
2.0 INTRODUCTION
As you have studied earlier, the main function of the network layer is to find the best
route from a source to a destination. In routing, the route with the minimum cost is
considered to be the best route. For this purpose, a router plays an important role. On
the basis of cost of a each link, a router tries to find the optimal route with the help of
a good routing algorithm. There are a large number of routing algorithms. These
algorithms are a part of the network layer and are responsible for deciding on which
output line an incoming packet should be transmitted. Some of these routing
algorithms are discussed in this unit.
2.1 OBJECTIVES
2.2 FLOODING
Consider an example of any network topology (VC subnet) in which there are some
link failures, or in which a few routers are not operational. These failures cause
changes in the network topology, which have to be communicated to all the nodes in
the network. This is called broadcasting. There can be many such examples of
broadcasting, in which a message has to be passed on to the entire network.
In flooding, a node sends each packet it originates or receives to all of its neighbours,
so that the packet eventually reaches all nodes in the network. For example, in
Figure 1(a), R1 will send its packets to
R2 and R3. R2 will send the packet to R5 and R4. Two additional rules are also
applied in order to limit the number of packets to be transmitted. First, a node will not
relay the packet back to the node from which the packet was obtained. For example,
R2 will not send the packet back to R1 if, it has received it from R1. Second, a node
will transmit the packet to its neighbours at most once; this can be ensured by
including on the packet the ID number of the origin node and a sequence number,
which is incremented with each new packet issued by the origin node. By storing the
highest sequence number received for each node, and by not relaying packets with
sequence numbers that are less than or equal to the one stored, a node can avoid
transmitting the same packet more than once, on each of its incident links. On
observing these rules, you will notice that, links need not preserve the order of packet
transmissions; the sequence numbers can be used to recognise the correct order. The
following figure gives an example of flooding and illustrates how the total number of
packet transmissions per packet broadcast lies between L and 2L, where L is the
number of bi-directional links of the network. Figure 1 shows packet broadcasting
from router R1 to all other nodes by using flooding [as in Figure 1(a)] or a spanning
tree [as in Figure 1(b)]. Arrows indicate packet transmissions at the time shown. Each
packet transmission time is assumed to be one unit. Flooding requires at least as many
packet transmissions as the spanning tree method, and usually many more. In this
example, the time required for the broadcast packet to reach all nodes is the same for
the two methods. In general, however, depending on the choice of the spanning tree,
the time required for flooding may be less than for the spanning tree method. The
spanning tree is used to avoid the looping of packets in the subnet.
Figure 1: Broadcasting from router R1 by (a) flooding and (b) a spanning tree
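The two suppression rules above can be sketched as follows. The topology is hypothetical, with L = 5 bi-directional links, so the transmission count should fall between L and 2L:

```python
from collections import defaultdict

# Hypothetical topology: node -> list of neighbours (5 bi-directional links).
links = {
    "R1": ["R2", "R3"], "R2": ["R1", "R4", "R5"],
    "R3": ["R1", "R4"], "R4": ["R2", "R3"], "R5": ["R2"],
}

# Per (node, origin): highest sequence number seen so far.
highest_seen = defaultdict(lambda: -1)
transmissions = 0

def flood(node, came_from, origin, seq):
    global transmissions
    # Rule 2: drop packets whose sequence number was already seen here.
    if seq <= highest_seen[(node, origin)]:
        return
    highest_seen[(node, origin)] = seq
    for neigh in links[node]:
        if neigh == came_from:   # Rule 1: don't echo back to the sender
            continue
        transmissions += 1
        flood(neigh, node, origin, seq)

flood("R1", None, origin="R1", seq=0)
print(transmissions)  # 6 transmissions: between L = 5 and 2L = 10
```

Without the sequence-number rule, packets would circulate around the R2-R4-R3-R1 cycle forever; with it, every link carries the packet at most once in each direction.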
(i) Dijkstra’s algorithm divides the nodes into two sets, i.e., tentative and
permanent. T F
2) What is a spanning tree?
…………………………………………………………………………………...
…………………………………………………………………………………..
……………………………………………………………………………….….
2.3 SHORTEST PATH ROUTING ALGORITHM

Shortest path routing is a simple and easy-to-understand technique. The
basic idea of this technique is to build a graph of the subnet, with each node of the
graph representing a router and each arc of the graph representing a communication
line, i.e., a link. To find a route between a given pair of routers, the algorithm simply
finds the shortest path between them on the graph. The length of a path can be
measured in a number of ways, e.g., on the basis of the number of hops, or on the
basis of geographic distance.
There are a number of algorithms for computing the shortest path between two nodes
of a graph. One of the most widely used is Dijkstra’s algorithm, which is explained
below.
In this algorithm, each node has a label that represents its distance from the source
node along the best known path. On the basis of these labels, the algorithm divides
the nodes into two sets, i.e., tentative and permanent. Since no paths are known in the
beginning, all labels are initially tentative. The algorithm works in the following manner:
1) As shown in the Figure below, the source node (A) has been chosen as T-node,
and so its label is permanent.
2) In this step, as you see B, C are the tentative nodes directly linked to
T-node (A). Among these nodes, since B has less weight, it has been chosen as
T-node and its label has changed to permanent.
25
Network Layer
3) In this step, as you see D, E are the tentative nodes directly linked to T-node(B).
Among these nodes, since D has less weight, it has been chosen as T-node and
its label has changed to permanent.
4) In this step, as you see C, E are the tentative nodes directly linked to T-node(D).
Among these nodes, since E has less weight, it has been chosen as T-node and
its label has changed to permanent.
5) E is the destination node. Now, since the destination node (E) has been reached,
we stop here, and the shortest path is A-B-D-E.
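The five steps above can be sketched in a few lines using a priority queue. The graph below is hypothetical: the original figure is not reproduced here, so the weights are assumptions chosen only so that the shortest path from A to E comes out as A-B-D-E:

```python
import heapq

# Hypothetical link weights (the figure of the worked example is not
# reproduced here); chosen so the shortest A-to-E path is A-B-D-E.
graph = {
    "A": {"B": 2, "C": 4},
    "B": {"A": 2, "C": 3, "D": 3, "E": 7},
    "C": {"A": 4, "B": 3, "E": 6},
    "D": {"B": 3, "E": 2},
    "E": {"B": 7, "C": 6, "D": 2},
}

def dijkstra(graph, source, dest):
    # dist holds the best known (tentative) label for each node; a label
    # becomes permanent when its node is popped from the priority queue.
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    permanent = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)            # this label is now permanent
        if node == dest:
            break
        for neigh, weight in graph[node].items():
            nd = d + weight
            if neigh not in permanent and nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd       # improve the tentative label
                prev[neigh] = node
                heapq.heappush(heap, (nd, neigh))
    # Reconstruct the path by walking predecessors back from dest.
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[dest], path[::-1]

print(dijkstra(graph, "A", "E"))  # (7, ['A', 'B', 'D', 'E'])
```

The `permanent` set plays exactly the role of the permanent labels in the steps above: once a node is popped, no shorter path to it can exist, because all link weights are non-negative.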
2.4 DISTANCE VECTOR ROUTING

Nowadays, computer networks generally use dynamic routing algorithms rather than
the static ones described above because static algorithms do not take the current
network load into account. Distance vector routing and link state routing are the two
main dynamic algorithms. In this section, we will go through the distance vector
routing algorithm. It is also known as the Bellman-Ford routing algorithm.
Bellman-Ford Algorithm
The Bellman-Ford algorithm can be stated as follows: find the shortest paths from a
given source node subject to the constraint that the paths contain at most one link;
then find the shortest paths subject to the constraint that the paths contain at most two
links, and so on. The algorithm proceeds in stages. A description of the algorithm is
given below.
s = source node
w(i, j) = link cost from node i to node j; w(i, j) = ∞ if the two nodes are not directly
connected; w(i, j) ≥ 0 if the two nodes are directly connected.
Lh(n) = cost of the least-cost path from node s to node n under the constraint of no
more than h links
1. [Initialisation]
L0(n) = ∞, for all n ≠ s
Lh(s) = 0, for all h
2. [Update] For each successive h ≥ 0, and for each n ≠ s, compute

Lh+1(n) = min over all j of [Lh(j) + w(j, n)]
Connect n with the predecessor node j that achieves the minimum, and eliminate any
connection of n with a different predecessor node formed during an earlier iteration.
The path from s to n terminates with the link from j to n.
For the iteration of step 2 with h = K, and for each destination node n, the algorithm
compares potential paths from s to n of length K + 1 with the path that existed at the
end of the previous iteration. If the previous path has less cost, it is retained.
Otherwise, a new path of length K + 1 is defined from s to n; this path consists of a
path of length K from s to some node j, plus a direct hop from node j to node n. In
this case, the path from s to j that is used is the K-hop path for j defined in the
previous iteration.
Table 1 shows the result of applying this algorithm to the example network, using
s = 1. At each step, the least-cost paths with a maximum number of links equal
to h are found. After the final iteration, the least-cost path to each node, and the cost
of that path, have been developed. The same procedure can be used with node 2 as the
source node, and so on. Students should apply Dijkstra’s algorithm to this subnet and
observe that the result is eventually the same.
Figure: Example network with link costs between routers R1 to R6
Table 1: Results of the Bellman-Ford algorithm with s = 1

h   Lh(2)  Path   Lh(3)  Path      Lh(4)  Path   Lh(5)  Path    Lh(6)  Path
0   ∞      —      ∞      —         ∞      —      ∞      —       ∞      —
1   2      1-2    5      1-3       1      1-4    ∞      —       ∞      —
2   2      1-2    4      1-4-3     1      1-4    2      1-4-5   10     1-3-6
3   2      1-2    3      1-4-5-3   1      1-4    2      1-4-5   4      1-4-5-6
4   2      1-2    3      1-4-5-3   1      1-4    2      1-4-5   4      1-4-5-6
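A minimal sketch of the staged computation that produces Table 1. The link costs in `edges` are assumptions reconstructed from the paths and costs shown in the table, since the original figure is not reproduced here:

```python
import math

# Assumed undirected link costs w(i, j), reconstructed from Table 1.
edges = {
    (1, 2): 2, (1, 3): 5, (1, 4): 1, (2, 3): 3, (2, 4): 2,
    (3, 4): 3, (3, 5): 1, (3, 6): 5, (4, 5): 1, (5, 6): 2,
}

def w(i, j):
    """Link cost; infinity if the two nodes are not directly connected."""
    return edges.get((i, j), edges.get((j, i), math.inf))

nodes = list(range(1, 7))
s = 1
L = {n: (0 if n == s else math.inf) for n in nodes}   # L0(n)

for h in range(len(nodes) - 1):   # at most N - 1 stages are needed
    # Lh+1(n) = min over j of [Lh(j) + w(j, n)], keeping the old label
    # when no cheaper (h+1)-link path exists.
    L = {n: 0 if n == s else min(L[n], min(L[j] + w(j, n) for j in nodes))
         for n in nodes}

print({n: L[n] for n in nodes if n != s})  # {2: 2, 3: 3, 4: 1, 5: 2, 6: 4}
```

The final labels match the last row of Table 1: for example, L(6) = 4 along 1-4-5-6. Note that each stage uses only the previous stage's labels of the neighbours, which is what makes the distributed, per-router version of this algorithm possible.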
2.4.1 Comparison
Now, let us compare the two algorithms in terms of the information required by
each node to find the optimal path. In step 2 of the Bellman-Ford algorithm, the
calculation for node n involves knowledge of the link cost to all neighbouring nodes
of node n [i.e., w(j, n)], plus the total path cost to each of those neighbouring
nodes from the particular source node s [i.e., Lh(j)]. Each node can maintain a set of
costs and associated paths for every other node in the network, and exchange this
information with its direct neighbours from time to time. Each node can, therefore, use
the expression in step 2 of the Bellman-Ford algorithm, based only on information
from its neighbours and knowledge of its link costs, to update its costs and paths. On
the other hand, consider Dijkstra’s algorithm. Step 3, it appears, requires that each
node have complete topological information about the network. That is, each node
must know the link costs of all links in the network. Thus, for this algorithm,
information must be exchanged with all other nodes.
In general, an evaluation of the relative merits of the two algorithms should consider
the processing time of the algorithms and the amount of information that must be
collected from other nodes in the network or internet. The evaluation will depend on
the implementation approach and the specific implementation.
A final point: both algorithms are known to converge under static conditions of
topology and link costs, and they converge to the same solution. If the link costs
change over time, the algorithms will attempt to catch up with these changes. However,
if the link cost depends on traffic, which in turn depends on the routes chosen, then a
feedback condition exists that could result in instabilities.
2.4.2 The Count-to-Infinity Problem
One of the serious drawbacks of the Bellman-Ford algorithm is that it responds
quickly to a path with a shorter delay but responds slowly to a path with a longer
delay. This is known as the count-to-infinity problem. Consider a router whose
current best route to destination X has a large delay. If, on the next exchange,
neighbour A suddenly reports a short delay to X, the router just switches over to using
the line to A to send traffic to X. In one vector exchange, the good news is processed.
To see how fast good news propagates, consider the five-node (linear) subnet of
Figure 4, where the delay metric is the number of hops. In Figure 4(a), there are five
routers Ra, Rb, Rc, Rd and Re linked to each other linearly. Suppose router Ra is
down initially and all the other routers know this. In other words, they have all
recorded the delay to Ra as infinity.
Figure 4: The count-to-infinity problem in a linear subnet of five routers

(a) Linear subnet: Ra - Rb - Rc - Rd - Re. Distances to Ra recorded by Rb, Rc, Rd and Re after Ra comes up:

Rb   Rc   Rd   Re
∞    ∞    ∞    ∞     Initial distance
1    ∞    ∞    ∞     After 1 exchange of messages
1    2    ∞    ∞     After 2 exchanges of messages
1    2    3    ∞     After 3 exchanges of messages
1    2    3    4     After 4 exchanges of messages

(b) The same subnet after Ra goes down:

Rb   Rc   Rd   Re
1    2    3    4     Initial distance
3    2    3    4     After 1 exchange of messages
3    4    3    4     After 2 exchanges of messages
5    4    5    4     After 3 exchanges of messages
5    6    5    6     After 4 exchanges of messages
7    6    7    6     After 5 exchanges of messages
7    8    7    8     After 6 exchanges of messages
…    …    …    …     (counting to infinity)
We will describe this problem in the following stages: (i) when router Ra comes up, and
(ii) when router Ra goes down. Let us take the first stage. When Ra comes up, the other
routers in the subnet learn about it via the information (vector) exchanges. At the time
of the first exchange, Rb learns that its left neighbour has zero delay to Ra. Rb now
makes an entry in its routing table that Ra is just one hop away to the left. All the
other routers still think that Ra is down. At this point, the routing table entries for Ra
are as shown in the second row of Figure 4(a). On the next exchange, Rc learns that
Rb has a path of length 1 to Ra, so it updates its routing table to indicate a path of
length 2, but Rd and Re do not hear the good news until later. Clearly, the good news
is spreading at the rate of one hop per exchange. In a subnet whose longest path is of
length N hops, within N exchanges everyone will know about the newly-revived lines
and routers.
Now, let us consider the second stage, Figure 4(b), in which all the lines and routers
are initially up. Routers Rb, Rc, Rd and Re are at distances of 1, 2, 3 and 4 from Ra.
Suddenly, Ra goes down, or alternatively, the line between Ra and Rb is cut, which is
effectively the same thing from Rb's point of view.
At the first packet exchange, Rb does not hear anything from Ra. Fortunately, Rc says:
do not worry; I have a path to Ra of length 2. Little does Rb know that Rc's path runs
through Rb itself. For all Rb knows, Rc might have ten lines, each with a separate path
to Ra of length 2. As a result, Rb thinks it can reach Ra via Rc, with a path length of 3.
Rd and Re do not update their entries on the first exchange.
On the second exchange, Rc notices that each of its neighbours is claiming a path to
Ra of length 3. It picks one of them at random and makes its new distance to Ra 4, as
shown in the third row of Figure 4(b). Subsequent exchanges produce the history
shown in the rest of Figure 4(b).
From Figure 4, it should be clear why bad news travels slowly: no router ever has a
value more than one higher than the minimum of all its neighbours. Gradually, all
routers work their way up to infinity, but the number of exchanges required depends
on the numerical value used for infinity. For this reason, it is wise to set infinity to the
longest path plus 1. If the metric is delay, there is no well-defined upper bound, so a
high value is needed to prevent a path with a long delay from being treated as down.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. The
core of the problem is that when X tells Y that it has a path somewhere, Y has no way
of knowing whether it itself is on the path.
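The bad-news scenario can be simulated in a few lines of Python. This is an illustrative sketch, not part of the original text; the choice of 16 as "infinity" is an assumption in the spirit of "longest path plus 1".

```python
INF = 16  # assumed "infinity" value (longest path plus a margin)

# Distances to Ra held by Rb, Rc, Rd, Re just before Ra goes down
# (the first row of Figure 4(b)).
dist = [1, 2, 3, 4]

def exchange(dist):
    """One synchronous vector exchange: each router re-computes its
    distance to Ra as 1 + the minimum distance advertised by its live
    neighbours. Rb hears nothing from the dead Ra."""
    last = len(dist) - 1
    new = []
    for i in range(len(dist)):
        if i == 0:
            neigh = [dist[1]]                # Rb's only live neighbour is Rc
        elif i == last:
            neigh = [dist[last - 1]]
        else:
            neigh = [dist[i - 1], dist[i + 1]]
        new.append(min(min(neigh) + 1, INF))
    return new

for step in range(6):
    dist = exchange(dist)
    print("after exchange", step + 1, dist)  # reproduces the rows of Figure 4(b)
```

Running it shows the routers drifting upward two hops at a time, exactly as in the figure; with a larger value of `INF`, the climb simply takes proportionally more exchanges.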
As explained above, the distance vector routing algorithm suffers from a number of
problems, such as the count-to-infinity problem. For these reasons, it was replaced by
a new algorithm, known as link state routing.
Link state routing protocols are like a road map. A link state router cannot be fooled
as easily into making bad routing decisions, because it has a complete picture of the
network. The reason is that, unlike the approximation approach of distance vector
routing, link state routers have first-hand information from all their peer routers. Each
router originates information about itself, its directly connected links, and the state of
those links. This information is passed around from router to router, each router
making a copy of it, but never changing it. Link state routing involves each router
building up the complete topology of the entire network (or at least of the partition on
which the router is situated); thus, each router contains the same information. With
this method, routers only send information to all the other routers when there is a
change in the topology of the network. The ultimate objective is that every router
should have identical information about the network, and each router should be able
to calculate its own best paths independently.
In contrast to the distance vector routing protocol, which works by sharing its
knowledge of the entire network with its neighbours, link state routing works by
having each router inform every other router in the network about its nearest
neighbours. The entire routing table is not distributed to every router; only the part of
the table describing its directly connected neighbours is.
Link-state is also known as shortest path first.
Link State Packet
When a router floods the network with information about its neighbourhood, it is said
to be advertising. The basis of this advertising is a short packet called a link state
packet (LSP). An LSP usually contains four fields: the ID of the advertiser, the ID of
the destination network, the cost, and the ID of the neighbour router. The structure of
an LSP is shown in Table 2.
Table 2: Link state packet (LSP)
1) Neighbour discovery
The router has to discover its neighbours and learn their network addresses. When a
router is booted, its first task is to learn who its neighbours are.
The router does this by sending a special HELLO packet on each point-to-point line.
The router on the other end is expected to send a reply disclosing its identity. These
names must be globally unique. If two or more routers are connected by a LAN, the
situation becomes slightly more complicated. One way of modelling the LAN is to
consider it as a node itself. Please see reference [1] for further explanation through a
diagram.
2) Measure delay
Another job that a router needs to perform is to measure the delay or cost to each of its
neighbours. The most common way to determine this delay is to send over the line a
special ECHO packet that the other side is required to send back immediately. By
measuring the round-trip time and dividing it by two, the sending router can get a
reasonable estimate of the delay. For even better results, the test can be conducted
several times and the average used.
This method implicitly assumes that delays are symmetric, which may not always be
the case.
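As a sketch, the round-trip measurement just described might look like this in Python. The `send_echo` callback is a hypothetical stand-in for transmitting an ECHO packet and blocking until the reply arrives; it is not part of the original text.

```python
import time

def estimate_delay(send_echo, samples=5):
    """Estimate the one-way delay to a neighbour by timing ECHO round
    trips, halving them, and averaging over several probes. `send_echo`
    is a hypothetical blocking call that returns once the echo reply
    has come back."""
    total = 0.0
    for _ in range(samples):
        t0 = time.monotonic()
        send_echo()
        total += (time.monotonic() - t0) / 2   # round-trip time / 2
    return total / samples

# Simulated neighbour whose "reply" takes about 20 ms to arrive:
print(estimate_delay(lambda: time.sleep(0.02), samples=3))
```

Note that halving the round trip builds in the symmetry assumption mentioned above; if the two directions have very different delays, the estimate will be off.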
3) Building link state packets
After collecting the information needed for the exchange, the next step for each router
is to build a link state packet containing all the data. This packet starts with the
identity of the sender, followed by a sequence number and age, and a list of
neighbours. For each neighbour, the delay to that neighbour is given.
As an example, let us consider the subnet given in Figure 5, with delays shown as
labels on the lines. For this network, the corresponding link state packets for all six
routers are shown in Table 3.
Figure 5: A subnet of six routers, A to F, with delays shown as labels on the lines
Table 3: The link state packets (LSPs) for the subnet in Figure 5

A       B       C       D       E       F
Seq.    Seq.    Seq.    Seq.    Seq.    Seq.
Age     Age     Age     Age     Age     Age
B 4     C 2     B 2     A 7     D 1     A 3
D 7     A 4     F 6     E 1     C 4
F 3     F 4     D 6
Building the link state packets is easy. The hard part is determining when to build
them. One possibility is to build them periodically, that is, at regular intervals.
Another possibility is to build them when some significant event occurs, such as a line
or neighbour going down, coming back up again, or changing its properties
appreciably.
4) Distributing the link state packets
Let us describe the basic algorithm for distributing the link state packets. The
fundamental concept here is to use flooding to distribute the packets. But, to keep the
number of packets flowing in the subnet under control, each packet contains a
sequence number that is incremented for each new packet sent. When a new link state
packet arrives, it is checked against the list of packets already seen by the router. If
the packet is old, it is discarded; otherwise, it is forwarded on all lines except the one
it arrived on. A router discards an obsolete packet (i.e., one with a lower sequence
number) if it has already seen a packet with a higher sequence number from the same
router.
The age of a packet is used to prevent corruption of the sequence number from
causing valid data to be ignored. The age field is decremented once per second by the
routers which forward the packet. When it hits zero, the packet is discarded.
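The sequence-number bookkeeping described above can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, not from the text.

```python
class LSPDatabase:
    """Bookkeeping sketch for controlled flooding: remember the highest
    sequence number seen from each originating router and flood only
    LSPs that are genuinely new."""
    def __init__(self):
        self.highest = {}   # originating router -> highest sequence seen

    def accept(self, origin, seq):
        """Return True if the LSP is new and should be forwarded on all
        lines except the one it arrived on; False if it is an old or
        duplicate packet to be discarded."""
        if seq > self.highest.get(origin, -1):
            self.highest[origin] = seq
            return True
        return False

db = LSPDatabase()
print(db.accept("A", 1))   # True: new LSP, flood it
print(db.accept("A", 1))   # False: duplicate, discard
print(db.accept("A", 0))   # False: obsolete (lower sequence number)
```

A real implementation would also decrement the age field and purge entries whose age reaches zero, as described above.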
How often should data be exchanged?
5) Compute shortest path tree
After accumulating all link state packets, a router can construct the entire subnet graph
because every link is represented. In fact, every link is represented twice, once for
each direction. The two values can be averaged or used separately.
Now, an algorithm like Dijkstra’s algorithm can be run locally to construct the
shortest path to all possible destinations. The results of this algorithm can be installed
in the routing tables, and normal operation resumed.
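As a sketch, this local computation could use Dijkstra's algorithm as follows. The code is illustrative Python, not from the text; the link costs are those of the Table 1 subnet (assumed symmetric), as a router might rebuild them from the flooded link state packets.

```python
import heapq

# Link costs of the Table 1 subnet, assumed symmetric, as rebuilt
# from the flooded link state packets (an assumed example topology).
links = {(1, 2): 2, (1, 3): 5, (1, 4): 1, (2, 3): 3, (2, 4): 2,
         (3, 4): 3, (3, 5): 1, (3, 6): 5, (4, 5): 1, (5, 6): 2}
graph = {n: [] for n in range(1, 7)}
for (a, b), c in links.items():
    graph[a].append((b, c))
    graph[b].append((a, c))

def dijkstra(graph, source):
    """Least-cost distance from source to every node, using a heap
    as the priority queue."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip
        for v, c in graph[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

print(dijkstra(graph, 1))   # same least costs as the Bellman-Ford result in Table 1
```

That both algorithms return the same costs on this graph is exactly the observation the unit asks students to make.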
• In the link state protocol, the memory required to store the data is proportional to
k * n, for n routers each with k neighbours, and the time required to compute can
also be large.
• Bad data (e.g., data from routers in error) will corrupt the computation.
As you see, in both link state and distance vector algorithms, every router has to save
some information about other routers. When the network size grows, the number of
routers in the network increases. Consequently, the size of the routing tables increases
as well, and routers cannot handle network traffic as efficiently. We use hierarchical
routing to overcome this problem. Let us examine this subject with an example:
We use distance vector algorithms to find the best routes between nodes. In the
situation depicted below in Figure 6, every node of the network has to save a routing
table with 17 records.
Table 4: A's Routing Table
in each packet, it includes only those destinations that are to use the line. Therefore,
the destination set is partitioned among the output lines. After a sufficient number of
hops, each packet will carry only one destination and can be treated as a normal
packet.
The advantage of this method is that it makes excellent use of bandwidth, generating
only the minimum number of packets required to do the job.
In this method, each router must have knowledge of some spanning tree. Sometimes
this information is available (e.g., with link state routing) but sometimes it is not (e.g.,
with distance vector routing); this is the major disadvantage of the method.
In this method (reverse path forwarding), when a broadcast packet arrives at a router,
the router checks whether the packet arrived on the line that is normally used for
sending packets to the source of the broadcast.
If the packet arrived on the line that is normally used for sending packets to the source
of the broadcast, then the router forwards copies of it onto all lines except the one it
arrived on. Else (i.e., the packet arrived on a line other than the preferred one for
reaching the source), the packet is discarded as a likely duplicate.
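The reverse-path check just described can be sketched as follows. The code is illustrative Python; the `Router` model and the `preferred` map are hypothetical structures, not from the text.

```python
class Router:
    """Minimal router model for a reverse-path-forwarding sketch.
    `preferred` maps a broadcast source to the line this router
    normally uses when sending packets *to* that source."""
    def __init__(self, lines, preferred):
        self.lines = lines
        self.preferred = preferred
        self.sent = []   # (line, source) pairs that were forwarded

    def receive_broadcast(self, source, arrival_line):
        if arrival_line == self.preferred[source]:
            # Probably the first copy: forward on every other line.
            for line in self.lines:
                if line != arrival_line:
                    self.sent.append((line, source))
        # else: likely a duplicate, so the packet is discarded

r = Router(lines=["L1", "L2", "L3"], preferred={"S": "L1"})
r.receive_broadcast("S", "L1")   # arrived on the preferred line: flooded
r.receive_broadcast("S", "L2")   # wrong line: discarded
print(r.sent)                    # [('L2', 'S'), ('L3', 'S')]
```

The appeal of this check is that it needs only the ordinary unicast routing table (to know the preferred line per source), not an explicit spanning tree.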
In many cases, you need to send the same data to multiple clients at the same time. In
this case, if we use unicasting, then the server must connect to each of its clients again
and again, each time sending an identical data stream to each client. This is a waste
of both server and network capacity. If we use broadcasting in this case, it would be
inefficient, because sometimes receivers are not interested in the message but receive
it nonetheless, or sometimes they are interested but are not supposed to see the
message.
In such cases, i.e., for sending a message to a group of users (clients), we use another
technique known as multicasting. The routing algorithm used for multicasting is
called multicast routing.
Routers mainly want to know which of their hosts belong to which group. For this,
either the hosts must inform their router about changes in group membership, or
routers must query their hosts periodically. On receiving this information, routers tell
their neighbours, so the information is propagated through the subnet.
Now, we will learn the working of multicasting through an example. In our example
(as shown in Figure 8), we have taken a network containing two groups, i.e., groups 1
and 2. Some routers are attached to hosts that belong to only one of these groups, and
some routers are attached to hosts that belong to both.
Figure 8: A network containing two groups; each router is labelled with the group(s) of the hosts attached to it (1, 2, or 1,2)
To do multicast routing, first, each router computes a spanning tree covering all other
routers. For example, Figure 9 shows spanning tree for the leftmost router.
Figure 9: A spanning tree for the leftmost router
Now, when a process sends a multicast packet to a group, the first router examines its
spanning tree and prunes it. Pruning is the task of removing all lines that do not lead
to hosts that are members of the group. For example, Figure 10 shows the pruned
spanning tree for group 1 and Figure 11 shows the pruned spanning tree for group 2.
There are a number of ways of pruning the spanning tree. One of the simplest can be
used if link state routing is used and each router is aware of the complete topology,
including the hosts that belong to each group. Then, the spanning tree can be pruned,
starting at the end of each path and working toward the root, removing all routers that
do not belong to the group under consideration. With distance vector routing, a
different pruning strategy can be followed. The basic algorithm is reverse path
forwarding. However, whenever a router with no hosts interested in a particular
group, and no connections to other routers, receives a multicast message for that group,
it responds with a PRUNE message, telling the sender not to send it any more
multicasts for that group. When a router with no group members among its own hosts
has received such messages on all of its lines, it, too, can respond with a PRUNE
message. In this way, the subnet is recursively pruned.
Figures 10 and 11: The pruned spanning trees for group 1 and group 2, respectively
After pruning, multicast packets are forwarded only along the appropriate spanning
tree. This algorithm requires a separate pruned spanning tree to be stored for each
member of each group; therefore, it is not well suited to large networks.
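The leaf-to-root pruning described for the link state case can be sketched as follows. This is illustrative Python; representing the tree as a child-to-parent map is an assumption made for the example.

```python
def prune(tree, members):
    """Prune a spanning tree, given as a child -> parent map with the
    root mapped to None, by repeatedly removing leaf routers that are
    not attached to group members (working from the leaves toward the
    root, as described in the text). Returns the surviving routers."""
    keep = set(tree)
    changed = True
    while changed:
        changed = False
        for node in list(keep):
            has_kept_child = any(tree.get(c) == node for c in keep)
            if not has_kept_child and node not in members and tree[node] is not None:
                keep.discard(node)   # a leaf with no group members below it
                changed = True
    return keep

# Hypothetical tree: R is the root; A and B hang off R; C hangs off A.
tree = {"R": None, "A": "R", "B": "R", "C": "A"}
print(prune(tree, members={"C"}))   # keeps R, A and C; drops the B branch
```

In the distance vector variant, the same effect is achieved incrementally by PRUNE messages flowing back toward the sender rather than by a local computation.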
2) Answer the following questions in brief.
2.9 SUMMARY
In this unit, we studied different routing algorithms. First, we looked at finding the
route between a given pair of routers; such an algorithm finds the shortest path
between them on a graph. A number of algorithms for computing the shortest path
between two nodes of a graph are known. Here, we studied Dijkstra's algorithm.
Next, we studied flooding. In flooding, every incoming packet is sent out on every
outgoing line except the line on which it arrived. This algorithm is very simple to
implement, but it generates many redundant packets. It discovers all routes, including
the optimal one; therefore, it is robust and gives high performance.
Next, we studied the Bellman-Ford routing algorithm. In this algorithm, each router
maintains a routing table with an entry for every other router in the subnet. These
tables are updated by exchanging information with the neighbours.
Next, we studied the link state routing algorithm. In this algorithm, each router
originates information about itself, its directly connected links, and the state of those
links. This information is passed around from router to router, each router making a
copy of it, but never changing it. The ultimate objective is that every router has
identical information about the network, and each router independently calculates its
own best paths.
2.10 SOLUTIONS/ANSWERS
Check Your Progress 1
1) (i) True
(ii) True
(iii) False
1) (i) False
(ii) True
2) (a) Problems with the distance vector routing algorithm are: it uses only
approximations obtained from its neighbours; neighbours slowly increase their
path length to a dead node; and the condition of being dead (infinite distance)
is reached by counting to infinity, one step at a time.
(b) In link state routing, a router floods the network with information about
its neighbours, using a short packet known as a link state packet
(LSP). An LSP usually contains four fields: the ID of the advertiser, the ID
of the destination network, the cost, and the ID of the neighbour router.
1) (a) True
(b) False
(c) False
(b) In multicasting, pruning is the task of removing, from the spanning tree of a
router, all lines that do not lead to hosts that are members of a particular
group.
2.11 FURTHER READINGS
UNIT 3 CONGESTION CONTROL IN
PUBLIC SWITCHED NETWORK
Structure Page Nos.
3.0 Introduction 42
3.1 Objectives 43
3.2 Reasons for Congestion in the Network 43
3.3 Congestion Control vs. Flow Control 43
3.4 Congestion Prevention Mechanism 44
3.5 General Principles of Congestion Control 45
3.6 Open Loop Control 46
3.6.1 Admission Control
3.6.2 Traffic Policing and its Implementation
3.6.3 Traffic Shaping and its Implementation
3.6.4 Difference between Leaky Bucket Traffic Shaper and
Token Bucket Traffic Shaper
3.7 Congestion Control in Packet-switched Networks 49
3.8 Summary 50
3.9 Solutions/Answers 50
3.10 Further Readings 51
3.0 INTRODUCTION
Congestion occurs when the number of packets being transmitted through the public
switched networks approaches the packet handling capacity of the network. When the
number of packets dumped into the subnet by the hosts, is within its carrying capacity,
all packets are delivered (except for a few that are afflicted with transmission errors).
The networks get overloaded and start dropping the packet, which leads to congestion
in the network. Due to the dropping of packets, the destination machine may demand
the sources to retransmit the packet, which will result in more packets in the network
and this leads to further congestion. The net result is that the throughput of the
network will be very low as, illustrated in the Figure 1.
Figure 1: Packets delivered (throughput) versus packets sent; throughput rises while there is no congestion, then falls once congestion sets in and packets are dropped
3.1 OBJECTIVES
The reason for comparing congestion control and flow control is that some congestion
control algorithms operate by sending messages back to the senders, asking them to
slow down when the network is congested. If the receiver is overloaded, the host will
get a similar message to slow down.
The purpose of this section is to examine various mechanisms used at the different
layers to achieve different goals; from the congestion control point of view, however,
these mechanisms are not very effective. We will start with the data link layer. Go
Back N and Selective Repeat are flow control mechanisms available at the data link
layer. If there is an error in a packet, Go Back N retransmits that packet and all packets
sent after it. For example, if there is an error in the 2nd packet and packets 3, 4 and 5
have already been sent, Go Back N retransmits packets 2, 3, 4 and 5, which creates an
extra load on the network, thereby leading to congestion. Selective Repeat retransmits
only the packet in error. With respect to congestion control, Selective Repeat is clearly
better than Go Back N.
At the network layer, a good routing algorithm can help avoid congestion by
distributing the traffic over all the lines, whereas a bad one can send too much traffic
over already congested lines. Finally, packet lifetime management deals with how long
a packet may live before being discarded. If the lifetime is too long, lost packets may
reside in the network for a long time; but if it is too short, packets may sometimes time
out before reaching their destination, thus inducing retransmissions. Therefore, we
require a good routing algorithm and an optimal packet lifetime.
The transport layer shares common issues with the data link layer with respect to
congestion control (flow control, acknowledgement mechanisms), but at the transport
layer the extra problem is determining the timeout interval across the network, which
is a difficult problem.
From our earlier discussions, it appears that the congestion problem is not very easy to
solve. Typically, the solution depends on the requirements of the application
(e.g., QoS, high bandwidth, fast processing). Like routing algorithms, congestion
control algorithms have been classified into two types:
• Open Loop
• Closed Loop.
Open loop solutions attempt to solve the problem with a good design that ensures
that congestion does not occur in the network. Once the network is running,
midcourse corrections are not made. Open loop algorithms work on two basic
mechanisms: (i) admission control, and (ii) resource reservation. Admission control is
a function performed by a network to accept or reject a traffic flow. The purpose of an
open loop solution is, therefore, to ensure that the traffic generated by the source will
not lower the performance of the network below the specified QoS. The network will
accept traffic as long as the QoS parameters are satisfied; otherwise, it rejects the
traffic.
In contrast, closed loop solutions are based on the concept of a feedback loop. These
algorithms are called closed loop because the state of the network has to be fed back
to the source, which regulates the traffic. Closed loop algorithms follow a dynamic
approach to the solution of congestion problems: they react while congestion is
occurring, or when it is about to happen. A variety of metrics have been proposed
to monitor the state of a subnet in order to detect congestion. These are:
• Queue length
• The number of packets retransmitted due to timeout
• The percentage of packets rejected due to shortage of the router's memory
• Average packet delay.
This mechanism monitors the system to detect when and where congestion occurs,
passes this information to places where action can be taken (usually the source), and
finally adjusts the system's operation to correct the problem.
The presence of congestion means that the offered load in the network is (temporarily)
greater than the resources (routers) can handle. Two straightforward solutions are
to increase the resources or to decrease the load. To increase the resources, the
following mechanisms may be used, as suggested in Tanenbaum [Ref 1].
i) Higher bandwidth may be achieved, for example, by increasing the transmission
power of a satellite.
However, sometimes it is not possible to increase the capacity, or it has already been
increased to the limit. If the network has reached its maximum limit, the alternative is
to reduce the load. The following mechanisms may be used: (i) denying service to
some users; (ii) degrading service to some or all users.
For subnets that use virtual circuits internally, these methods can be used at the
network layer. In the next section, we will focus on their use in the network layer. We
will also discuss the open loop control mechanism in detail. The closed loop
mechanism will be discussed in Block 4 Unit 2 as a part of TCP Protocol.
As mentioned earlier, open loop control algorithms prevent the occurrence of
congestion rather than dealing with it after it has occurred. They do not rely on
feedback information to regulate the traffic. Thus, this technique is based on the
assumption that, once packets are accepted from a particular source, the network
will not get overloaded. In this section, we will examine several such techniques and
their implementation. Learners are requested to refer to Leon Garcia's book [Ref. 2].
• The bucket has a certain depth to hold water, just as a network can accept a
certain number of packets.
• The bucket leaks at a certain rate (if there is water in the bucket), no matter at
what rate water enters the bucket. In terms of computer networks, this should be
interpreted as follows: no matter at what rate packets arrive at the input lines of
a router, the router passes them to its outgoing link at a fixed rate.
• If the bucket does not overflow when the water is poured into it, the water is
said to be conforming. In terms of the network, if the traffic is within the agreed
norms, all packets will be transferred.
• The bucket will spill over if it is full and additional water is poured into it.
Similarly, if the network gets more packets than it can handle, congestion results
and the additional packets will be lost.
If we expect the traffic flow to be very smooth, then the bucket should be shallow. If
the flow is bursty in nature, the bucket should be deeper. In summary, what we want
to observe is whether the outflow of packets corresponds to the arrival rate of packets.
The implementation of a leaky bucket is similar to that of a queue data structure:
when a packet arrives, if there is space left in the queue, it is appended to the queue;
otherwise, it is rejected.
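That queue-based implementation can be sketched as follows. This is illustrative Python; draining one packet per call to `tick()` is an assumed service rate, and sizes are counted in packets for simplicity.

```python
from collections import deque

class LeakyBucket:
    """Queue-based leaky bucket sketch: a bounded queue that is
    drained at a fixed rate (here, one packet per tick)."""
    def __init__(self, depth):
        self.queue = deque()
        self.depth = depth      # bucket depth: maximum queued packets
        self.dropped = 0

    def arrive(self, packet):
        """Append the packet if there is room; otherwise reject it."""
        if len(self.queue) < self.depth:
            self.queue.append(packet)
            return True
        self.dropped += 1       # bucket overflowed: non-conforming traffic
        return False

    def tick(self):
        """One clock tick: the bucket leaks one packet, if any."""
        return self.queue.popleft() if self.queue else None
```

With a depth of 2, for example, a burst of three arrivals sees the third packet rejected, while successive calls to `tick()` drain the survivors at the fixed rate regardless of how bursty the input was.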
Leon Garcia [Ref 2] has defined traffic shaping as the process of altering a traffic flow
into another flow. It is mainly used for smoothing the traffic. Consider an example
where a host is generating data at 30 Kbps, which it can transmit to the network in
several ways (as shown in Figure 2).
(The figure shows a smooth, constant 30 Kbps stream in part (a), and burstier patterns, e.g., bursts at 100 Kbps, in the other parts.)
Figure 2: Possible traffic patterns at the average rate of 30 Kbps
You can observe from Figure 2 that the smoothened pattern of Figure 2(a) creates less
stress on the network, but the destination machine may not want to wait a full second
for each period's data to arrive at 30 Kbps. Now, we will look at the implementation
of traffic shaping. There are two mechanisms:
Leaky Bucket Traffic Shaper: This is a very simple mechanism in which data stored
in the sender's buffer is passed on at a constant rate to smoothen the traffic, as shown
in Figure 3. The buffer is used to store bursty traffic; its size defines the maximum
burst that can be accommodated.
Figure 3: A leaky bucket traffic shaper; traffic generated by an application enters a packet buffer, and the server releases it to the communication channel as smoothened traffic (Source: Ref. [2])
This mechanism is different from the leaky bucket algorithm used in traffic policing:
the bucket in traffic policing is just a counter, whereas the bucket in a traffic shaper is
a buffer that stores the packets.
Token Bucket Traffic Shaper: The leaky bucket traffic shaper takes a very restricted
approach, since the output pattern is always constant no matter how bursty the traffic
is. Many applications produce variable-rate traffic: sometimes bursty, sometimes
normal. If such traffic is passed through a leaky bucket traffic shaper, it may suffer a
very long delay. One algorithm that deals with such situations is the token bucket
algorithm.
The following are the features of the token bucket traffic shaper:
• A token is used as a permit to send a packet. Unless there is a token, no packet
can be transmitted.
• A token bucket holds tokens, which are generated periodically at a constant
rate.
• New tokens are discarded if the token bucket is full.
• A packet can be transmitted only if there is a token in the token bucket. For
example, in Figure 4 there are two tokens in the token bucket and five packets
to be transmitted; only two packets will be transmitted and the other three will
wait for tokens.
• The permitted traffic burstiness is proportional to the size of the token bucket.
Figure 4: A token bucket traffic shaper; packets wait in a packet buffer and the server releases them to the communication channel, as smoothened traffic, only when tokens are available in the token bucket (Source: Ref. [2])
Now, let us discuss the operation of this algorithm. Assume that the token bucket is
empty and a number of packets have arrived in the buffer. Since there is no token in
the bucket, the packets have to wait until a new token is generated. Since tokens are
generated periodically, the packets will also be transmitted periodically, at the rate at
which the tokens arrive. In the next section, we will compare the leaky bucket traffic
shaper and the token bucket traffic shaper. Students are also requested to see
question 2 of Check Your Progress 2.
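The operation just described can be sketched as follows. This is illustrative Python; the class name and the tick-based timing model are assumptions made for the example.

```python
class TokenBucket:
    """Token bucket shaper sketch: tokens accumulate at a fixed rate
    up to the bucket size N, and each departing packet spends one
    token."""
    def __init__(self, rate, depth):
        self.rate = rate        # tokens generated per tick
        self.depth = depth      # bucket size N (maximum stored tokens)
        self.tokens = 0
        self.backlog = []       # packets waiting for tokens

    def arrive(self, packet):
        self.backlog.append(packet)

    def tick(self):
        """One clock tick: add new tokens (discarding any overflow),
        then release one waiting packet per available token."""
        self.tokens = min(self.tokens + self.rate, self.depth)
        sent = []
        while self.backlog and self.tokens >= 1:
            self.tokens -= 1
            sent.append(self.backlog.pop(0))
        return sent

tb = TokenBucket(rate=2, depth=2)
for p in ["p1", "p2", "p3", "p4", "p5"]:
    tb.arrive(p)
print(tb.tick())   # ['p1', 'p2']: two tokens, so only two packets go out
```

The example mirrors the two-tokens, five-packets case in the bullet list above; the remaining three packets depart on later ticks as fresh tokens arrive, and an idle period would let tokens accumulate (up to the depth N), permitting a later burst.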
Table 3: Leaky Bucket Traffic Shaper and Token Bucket Traffic Shaper
1) Send a control packet (choke packet) from a node where congestion has
occurred to some or all source nodes. This choke packet will have the effect of
stopping or slowing the rate of transmission from sources and therefore, it will
reduce total load on the network. This approach requires additional traffic on
the network during the period of congestion.
2) Make use of an end-to-end probe packet. Such a packet could be time stamped
to measure the delay between two particular endpoints. This has the
disadvantage of adding overhead to the network.
Check Your Progress 2
1) What are the different approaches to open loop control?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
2) What is the difference between leaky bucket traffic shaper and token bucket
traffic shaper?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
3.8 SUMMARY
In this unit, we examined several aspects of congestion: what congestion is and how it
occurs. We also differentiated between congestion control and flow control. Then, we
gave two broad classifications of congestion control: open loop and closed loop. At
the end, we touched upon issues related to congestion control in packet-switched
networks.
3.9 SOLUTIONS/ANSWERS
Check Your Progress 1

1) Congestion control is needed when the buffers in packet switches overflow or
congest. Congestion is end-to-end: it involves all hosts, links and routers, so
it is a global issue.
Flow control is needed when the buffers at the receiver are not depleted as
fast as the data arrives. It operates between one data sender and one receiver,
on a link-to-link or end-to-end basis, so it is a local issue.
2) The purpose of an open loop solution is to ensure that the traffic generated by the
source will not lower the performance of the network below the specified QoS.
The network will accept traffic as long as the QoS parameters are satisfied;
otherwise, it rejects the packets.
In contrast, closed loop solutions are based on the concept of a feedback loop.
These algorithms are called closed loop because the state of the network has to
be fed back to the source, which regulates the traffic. Closed loop algorithms
follow a dynamic approach to the congestion problem: they react while congestion
is occurring or when it is about to happen.
Check Your Progress 2
1) The following are the different approaches:
• Admission Control Mechanism
• Traffic Policing
• Traffic Shaping
2) (i) The token bucket algorithm is more flexible than the leaky bucket traffic
shaper algorithm, but both are used to regulate traffic.
(ii) The leaky bucket algorithm does not allow idle hosts to save up permission
to send large bursty packets later, whereas the token bucket algorithm allows
saving up to the maximum size of the bucket, N. This means that a burst of
up to N packets can be sent at once, allowing some burstiness in the output
stream and giving fast response to sudden bursts of output [Ref 1].
(iii) The token bucket algorithm throws away tokens (i.e., transmission capacity)
when the bucket fills up, but never discards packets. In contrast, the leaky
bucket algorithm discards packets when the bucket fills up.
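Point (ii) can be illustrated numerically with a simplified sketch (the function names and parameters are hypothetical): after a host has been idle for a while, a token bucket has saved up permission to send a burst, while a leaky bucket has not.

```python
# Illustrative comparison (hypothetical names): after k idle ticks, a token
# bucket of capacity N holds min(N, k * rate) tokens, so that many packets
# can leave at once; a leaky bucket always drains at its fixed rate.

def token_bucket_burst(idle_ticks, rate, capacity, queued):
    tokens = min(capacity, idle_ticks * rate)  # saved-up permission
    return min(queued, tokens)                 # packets sent in one tick

def leaky_bucket_burst(idle_ticks, rate, queued):
    return min(queued, rate)                   # idle time earns nothing

print(token_bucket_burst(idle_ticks=10, rate=1, capacity=4, queued=8))  # 4
print(leaky_bucket_burst(idle_ticks=10, rate=1, queued=8))              # 1
```

With the same ten idle ticks, the token bucket (capacity N = 4) releases a burst of four packets in one tick; the leaky bucket still releases only one.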
UNIT 4 INTERNETWORKING
Structure Page Nos.
4.0 Introduction 52
4.1 Objectives 52
4.2 Internetworking 52
4.2.1 How does a Network differ?
4.2.2 Networks Connecting Mechanisms
4.2.3 Tunneling and Encapsulation
4.3 Network Layer Protocols 55
4.3.1 IP Datagram Formats
4.3.2 Internet Control Message Protocol (ICMP)
4.3.3 OSPF: The Interior Gateway Routing Protocol
4.3.4 BGP: The Exterior Gateway Routing Protocol
4.4 Summary 68
4.5 Solutions/Answers 68
4.6 Further Readings 69
4.0 INTRODUCTION
There are many ways in which one network differs from another. Some of the
parameters in which networks differ are packet length, quality of service, error
handling mechanisms, flow control mechanisms, congestion control mechanisms,
security issues and addressing mechanisms. Therefore, problems are bound to occur
when we require interconnection between two different networks. Different
mechanisms have been proposed to solve these problems: tunneling is used when the
source and destination are on the same type of network but there is a different
network in between, and fragmentation may be used to cope with the different
maximum packet sizes of different networks. The network layer also has a large set
of protocols besides IP, such as OSPF, BGP and ICMP. In this unit, we will discuss
some of these protocols as well as some internetworking mechanisms.
4.1 OBJECTIVES
4.2 INTERNETWORKING
The Internet comprises several networks, each with different protocols. There are
many reasons for having different networks (and thus different protocols):
• Some PCs still run on Novell's NCP/IPX or Appletalk.
• Wireless networks have different protocols.
• A large number of telecommunication companies provide ATM facilities.
In this section, we will examine some issues that arise when two or more networks are
interconnected. The purpose of interconnection is to allow any node on any network
(e.g., Ethernet) to exchange data with any node on any other network (e.g., ATM).
Users should not be aware of the existence of multiple networks.
Tanenbaum [Ref.1] has defined several features (kindly refer to Table 1) that
distinguish one network from another. These differences have to be resolved while
internetworking. All these features are defined at the network layer only, although
networks differ at the other layers too: they might have different encoding techniques
at the physical layer, different frame formats at the data link layer, different QoS
at the transport layer, and so on.
Table 1: Features that distinguish networks at the network layer

Feature                            Options
Type of service                    Connection-oriented, connection-less
Protocols                          IP, IPX, SNA, ATM
Addressing scheme                  Flat vs. hierarchical
Maximum packet size                Different for each network
Flow control                       Sliding window, credit based
Congestion control mechanism       Leaky bucket, token bucket, hop-by-hop,
                                   choke packets
Accounting                         By connect time, packet by packet, byte
                                   by byte
We have already addressed the problems of connecting networks in the earlier blocks.
Let us revisit these topics. Networks can be connected by the following devices:
• Repeaters or hubs can be used to connect networks at the physical layer. The
purpose of these devices is to move bits from one network to another. They are
mainly analog devices and do not understand higher layer protocols.
• At the data link layer, bridges and switches were introduced to connect multiple
LANs. They work at the frame level rather than the bit level. They examine the
MAC address of frames and forward the frames to different LANs. They may
do a little translation from one MAC layer protocol (e.g., Token Ring, Ethernet)
to another. Routers are used at the network layer; they also perform translation
in case the networks use different network layer protocols.
Finally, transport layer and application layer gateways deal with conversion of
protocols at the transport and application layers respectively, in order to interconnect
networks.
The main difference between the two operations is that, with a switch, the entire frame
is forwarded to a different LAN on the basis of its MAC address. With a router, the
packet is extracted from the frame, encapsulated in a different kind of frame, and
forwarded to a remote router on the basis of the IP address in the packet. A switch
need not understand the network layer protocol to switch a packet, whereas a router
is required to do so. Tanenbaum [Ref.1] has described two basic mechanisms of
internetworking: concatenated virtual circuits and connectionless internetworking.
In the next sections, we will talk about them.
Concatenated Virtual Circuits
The essential feature of this mechanism is that a series of virtual circuits is set up
from the source machine on one network to the destination machine on another
network through one or more gateways. Just like a router, each gateway maintains
tables indicating the virtual circuits that are to be used for packet forwarding. This
scheme works well when all the networks have roughly the same QoS parameters;
if only some of the networks support reliable delivery, the scheme cannot provide it
end-to-end. In summary, this scheme has the same advantages and disadvantages as
virtual circuits within a single subnet.
Datagram Model
Unlike the previous model, there is no concept of virtual circuits here, and therefore
no guarantee of packet delivery. Each packet carries the full source and destination
addresses and is forwarded independently. This strategy can use multiple routes and
can therefore achieve higher bandwidth.
A major advantage of this approach is that it can be used over subnets that do not
use virtual circuits inside. For example, many LANs and some mobile networks
support this approach.
Tunneling
Tunneling is used when the source and destination networks are of the same type but
the network which lies in between is different. It uses a mechanism called
encapsulation, whereby a data transfer unit of one protocol is enclosed inside the
data unit of a different protocol. Tunneling thus allows a frame or packet of one kind
to be carried across a network that uses a different kind of frame.
Suppose two hosts located very far from each other want to communicate, and both
have access to an Internet link; that is, both of them run TCP/IP-based protocols.
The carrier (WAN) which lies between the two hosts is based on X.25, whose format
is different from TCP/IP. Therefore, the IP datagram forwarded by host 1 will be
encapsulated in an X.25 network layer packet and transported to the router of the
destination network. When it gets there, the destination router removes the IP packet
and sends it to host 2. The WAN can be considered as a big tunnel extending from one
router to another [Ref.1]: the packet from host 1 travels from one end of the X.25-
based tunnel to the other, properly encapsulated. The sending and receiving hosts are
not concerned with this process; it is done by the routers at the two ends of the tunnel.
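The encapsulation step at the two tunnel endpoints can be sketched abstractly (all names and the dictionary representation are hypothetical; a real X.25 packet has its own binary format): the entry router wraps the whole IP datagram as the payload of a carrier-network packet addressed to the exit router, which unwraps it unchanged.

```python
# Minimal sketch of tunneling (all names hypothetical): the entry router
# wraps the whole IP datagram as the payload of a carrier-network packet
# addressed to the exit router, which unwraps and forwards the original.

def encapsulate(ip_datagram: bytes, exit_router_addr: str) -> dict:
    return {"carrier_dst": exit_router_addr, "payload": ip_datagram}

def decapsulate(carrier_packet: dict) -> bytes:
    return carrier_packet["payload"]

datagram = b"\x45\x00..."                    # placeholder bytes for host 1's datagram
pkt = encapsulate(datagram, "exit-router")   # travels through the X.25 "tunnel"
assert decapsulate(pkt) == datagram          # host 2's router recovers it unchanged
```

The essential property is the final assertion: the carrier network never interprets the inner datagram, so it arrives at the far router exactly as host 1 sent it.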
4.3 NETWORK LAYER PROTOCOLS
The Internet's network layer has three major components:
(i) IP
(ii) ICMP
(iii) RIP, OSPF and BGP
[Figure: the protocols of the network layer (RIP, OSPF, BGP, ICMP and IP), positioned between the transport layer and the physical layer]
(i) IP: The first component is the IP protocol, which defines the following:
• the fields in the IP datagram;
• address formats;
• the actions taken by routers and end systems on an IP datagram, based on
the values in these fields.
(ii) ICMP: The second component of the network layer is ICMP (Internet Control
Message Protocol), used by hosts, routers and gateways to communicate
network layer information to each other. The most typical use of ICMP is for
error reporting.
(iii) RIP, OSPF and BGP: The third component comprises the routing protocols:
RIP and OSPF are used for intra-AS routing, whereas BGP is used as the
exterior gateway routing protocol.
An IP datagram consists of a header part and a data part. The header has a 20-byte
fixed part and a variable-length optional part, as shown in Figure 2(a). The header
format is shown in Figure 2(b). It is transmitted in big-endian order: from left to
right, with the high-order bit of the Version field going first. On little-endian
machines, software conversion is required both on transmission and on reception.
The key fields in the IPv4 (Internet Protocol version 4) datagram header are the
following.
[Figure 2(a): An IP datagram: a header of 20-60 bytes followed by the data]
[Figure 2(b): The IPv4 header format, 32 bits wide]
The Version field specifies which version of the IP protocol the datagram belongs to.
By including the version in each datagram, a router can determine how to interpret
the remainder of the IP datagram.
The Header length field defines the length of the header in multiples of four bytes.
Its four bits can represent a number between 0 and 15, which, when multiplied by 4,
gives a maximum header length of 60 bytes. A typical IP datagram has a 20-byte
header only, because most IP datagrams do not contain options.
The Type of service field (8 bits) defines how the datagram should be handled. It
provides bits to specify the priority, reliability, delay and level of throughput needed
to meet different requirements. For example, for digitised voice, fast delivery beats
accurate delivery; for file transfer, error-free transmission is more important than
fast transmission.
The Total length field includes everything in the datagram, both header and data.
The maximum length is 65,535 bytes. At present, this upper limit is tolerable, but
with future gigabit networks, larger datagrams may be needed.
Next comes an unused bit and then two 1-bit fields. DF stands for Don't Fragment.
It is an order to the routers not to fragment the datagram, because the destination is
incapable of putting the pieces back together again.
MF stands for More Fragments. All fragments except the last one have this bit set.
It is needed to know when all the fragments of a datagram have arrived.
The Fragment offset field (13 bits) tells where in the current datagram this fragment
belongs. All fragments except the last one in a datagram must be a multiple of
8 bytes, the elementary fragment unit. Since 13 bits are provided, there is a
maximum of 8192 fragments per datagram, giving a maximum datagram length of
65,536 bytes, one more than the Total length field allows.
The Time to live field (8 bits) is a counter used to limit packet lifetimes. It is
supposed to count time in seconds, allowing a maximum lifetime of 255 sec. It must
be decremented on each hop and is supposed to be decremented multiple times when
a packet is queued for a long time in a router; in practice, it just counts hops. When
it hits zero, the packet is dropped and a warning packet is sent back to the source
host. This feature prevents datagrams from wandering around forever, something
that might otherwise happen if the routing tables became corrupted.
The Protocol field (8 bits) is used when the datagram reaches its final destination.
When the network layer has assembled a complete datagram, it needs to know what
to do with it: the Protocol field identifies the transport protocol the network layer
needs to hand it to. TCP is one possibility, but so are UDP and some others. The
numbering of protocols is global across the entire Internet.
The Header checksum verifies the header only. Such a checksum is useful for
detecting errors generated by bad memory words inside a router. The algorithm, a
one's-complement sum of the header's 16-bit halfwords, is more robust than a normal
add. Note that the Header checksum must be recomputed at each hop because at
least one field (the Time to live field) always changes, but tricks can be used to
speed up the computation.
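The checksum described here is the standard Internet checksum (RFC 1071): the 16-bit halfwords of the header are summed using one's complement arithmetic (end-around carry) and the result is complemented. A minimal sketch, with a sample 20-byte header whose checksum field (bytes 10-11) starts out as zero:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (the standard Internet
    checksum of RFC 1071); the checksum field is zero while computing."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF

# Sample 20-byte IPv4 header with a zero checksum field (bytes 10-11):
hdr = bytes.fromhex("450000540000400040010000c0a80001c0a800c7")
c = internet_checksum(hdr)
# Plugging the checksum back in makes the whole header verify to zero:
full = hdr[:10] + c.to_bytes(2, "big") + hdr[12:]
print(internet_checksum(full))  # 0
```

The verification property shown at the end is what a router actually checks on arrival: summing the entire header, checksum field included, must yield zero.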
The Source address and Destination address fields carry the 32-bit IP addresses of
the source and the destination. One portion of an IP address indicates the network
and the other portion indicates the host (or router) on that network. IP addresses
are described in the next section.
The Options field (variable length): this field allows an IP header to be extended
with more functionality. It can carry fields that control routing, timing and security.
IP Addressing
All IP addresses are 32 bits long and are used in the Source address and Destination
address fields of IP packets.
For several decades, IP addresses were divided into the five categories given in the
Figure. The different classes are designed to cover the needs of different types of
organisations.
The three main address classes are class A, class B, and class C. By examining the
first few bits of an address, IP software can quickly determine the class, and therefore,
the structure, of an address. IP follows these rules to determine the address class:
• Class A: If the first bit of an IP address is 0, it is the address of a class A
network. The first bit of a class A address identifies the address class.
The next 7 bits identify the network, and the last 24 bits identify the host.
There are fewer than 128 class A network numbers, but each class A network
can be composed of millions of hosts.
• Class B: If the first 2 bits of the address are 1 0, it is a class B network address.
The first 2 bits identify the class; the next 14 bits identify the network, and the
last 16 bits identify the host. There are thousands of class B network numbers,
and each class B network can contain thousands of hosts.
• Class C: If the first 3 bits of the address are 1 1 0, it is a class C network
address. In a class C address, the first 3 bits are class identifiers; the next 21
bits are the network address, and the last 8 bits identify the host. There are
millions of class C network numbers, but each class C network is composed of
no more than 254 hosts.
• Class D: If the first 4 bits of the address are 1 1 1 0, it is a multicast address.
These addresses are sometimes called class D addresses, but they don't really
refer to specific networks. Multicast addresses are used to address groups of
computers at a given moment in time; they identify a group of computers that
share a common application, such as a video conference, as opposed to a group
of computers that share a common network.
• Class E: If the first four bits of the address are 1 1 1 1, it is a special reserved
address. These addresses are called class E addresses, but they don't really
refer to specific networks. No numbers are currently assigned in this range.
IP addresses are usually written as four decimal numbers separated by dots (periods).
Each of the four numbers is in the range 0-255 (the decimal values possible for a
single byte). Because the bits that identify class are contiguous with the network bits
of the address, we can lump them together and look at the address as composed of full
bytes of network address and full bytes of host address. If the value of the first byte is:
• Less than 128, the address is class A; the first byte is the network number, and
the next three bytes are the host address.
• From 128 to 191, the address is class B; the first two bytes identify the network,
and the last two bytes identify the host.
• From 192 to 223, the address is class C; the first three bytes are the network
address, and the last byte is the host number.
• From 224 to 239, the address is multicast. There is no network part. The entire
address identifies a specific multicast group.
• Greater than 239, the address is reserved.
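The first-byte rules above translate directly into code; a small sketch (the function name is illustrative):

```python
def address_class(addr: str) -> str:
    """Classify a dotted-decimal IPv4 address by its first byte,
    following the first-byte ranges of the classful scheme."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"
    if first < 192:
        return "B"
    if first < 224:
        return "C"
    if first < 240:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("10.104.0.19"))   # A
print(address_class("172.16.26.32"))  # B
print(address_class("195.4.12.0"))    # C
```

This is exactly the cheap test the classful routers performed: one comparison chain on the first byte, with no mask lookup required.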
The following table depicts each class range with other details.

Class   First byte   Network portion             Host portion
A       0 - 127      first byte (8 bits)         last three bytes (24 bits)
B       128 - 191    first two bytes (16 bits)   last two bytes (16 bits)
C       192 - 223    first three bytes (24 bits) last byte (8 bits)
D       224 - 239    multicast group address (no network part)
E       240 - 255    reserved
The IP address, which provides universal addressing across all the networks of the
Internet, is one of the great strengths of the TCP/IP protocol suite. However, the
original class structure of the IP address has weaknesses. The TCP/IP designers did
not envision the enormous scale of today’s network. When TCP/IP was being
designed, networking was limited to large organisations that could afford substantial
computer systems. The idea of a powerful UNIX system on every desktop did not
exist. At that time, a 32-bit address seemed so large that it was divided into classes to
reduce the processing load on routers, even though dividing the address into classes
sharply reduced the number of host addresses actually available for use. For example,
assigning a large network a single class B address, instead of six class C addresses,
reduced the load on the router because the router needed to keep only one route for
that entire organisation. However, an organisation that was given the class B address
probably did not have 64,000 computers, so most of the host addresses available to the
organisation were never assigned.
The class-structured address design was critically strained by the rapid growth of the
Internet. At one point it appeared that all class B addresses might be rapidly
exhausted. To prevent this, a new way of looking at IP addresses without a class
structure was developed.
The size of a network (i.e., the number of host addresses available for use on it) is a
function of the number of bits used to identify the host portion of the address. If, a
subnet mask shows that 8 bits are used for the host portion of the address block, a
maximum of 256 possible host addresses are available for that specific network.
Similarly, if a subnet mask shows that 16 bits are used for the host portion of the
address block, a maximum of 65,536 possible host addresses are available for use on
that network.
If a network administrator needs to split a single network into multiple virtual
networks, the bit-pattern in use with the subnet mask can be changed to allow as many
networks as necessary. For example, assume that we want to split the 24-bit
192.168.10.0 network (which allows for 8 bits of host addressing, or a maximum of
256 host addresses) into two smaller networks. All we have to do in this situation is
change the subnet mask of the devices on the network so that they use 25 bits for the
network instead of 24 bits, resulting in two distinct networks with 128 possible host
addresses each. In this case, the first network would have the address range
192.168.10.0 to 192.168.10.127, while the second network would have the range
192.168.10.128 to 192.168.10.255.
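Python's standard `ipaddress` module can reproduce this example: splitting the /24 with one extra mask bit yields exactly the two ranges described.

```python
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
# Borrow one bit from the host part: 25-bit masks give two subnets.
lower, upper = net.subnets(prefixlen_diff=1)
print(lower, upper)          # 192.168.10.0/25 192.168.10.128/25
print(lower[0], lower[-1])   # 192.168.10.0 192.168.10.127
print(upper[0], upper[-1])   # 192.168.10.128 192.168.10.255
```

Indexing a network gives its first and last addresses, which match the two ranges stated in the paragraph above.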
Networks can also be enlarged through the use of a technique known as
“supernetting,” which works by extending the host portion of a subnet mask to the
left, into the network portion of the address. Using this technique, a pair of networks
with 24-bit subnet masks can be turned into a single large network with a 23-bit
subnet mask. However, this works only if you have two neighbouring 24-bit network
blocks, with the lower network having an even value (when the network portion of the
address is shrunk, the trailing bit from the original network portion of the subnet mask
should fall into the host portion of the new subnet mask, so the new network mask
will consume both networks). For example, it is possible to combine the 24-bit
192.168.10.0 and 192.168.11.0 networks together since the loss of the trailing bit from
each network (00001010 vs. 00001011) produces the same 23-bit subnet mask
(0000101x), resulting in a consolidated 192.168.10.0 network. However, it is not
possible to combine the 24-bit 192.168.11.0 and 192.168.12.0 networks, since the
binary values in the seventh bit position (00001011 vs. 00001100) do not match when
the trailing bit is removed.
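The supernetting example can likewise be checked with the `ipaddress` module: the two neighbouring /24s collapse into the same /23, while 192.168.12.0/24 falls under a different one.

```python
import ipaddress

a = ipaddress.ip_network("192.168.10.0/24")
b = ipaddress.ip_network("192.168.11.0/24")
# Both /24s share the same 23-bit prefix, so they merge into one supernet:
print(a.supernet(new_prefix=23))  # 192.168.10.0/23
print(b.supernet(new_prefix=23))  # 192.168.10.0/23

c = ipaddress.ip_network("192.168.12.0/24")
# 192.168.12.0/24 belongs to a different /23, so it cannot be combined
# with 192.168.11.0/24:
print(c.supernet(new_prefix=23))  # 192.168.12.0/23
```

The matching supernets for `a` and `b`, and the differing one for `c`, mirror the trailing-bit argument in the text (00001010 vs. 00001011 vs. 00001100).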
Classless Inter-Domain Routing
In the modern networking environment defined by RFC 1519 [Classless Inter-Domain
Routing (CIDR)], the subnet mask of a network is typically annotated in written form
as a “slash prefix” that trails the network number. In the subnetting example in the
previous paragraph, the original 24-bit network would be written as 192.168.10.0/24,
while the two new networks would be written as 192.168.10.0/25 and
192.168.10.128/25. Likewise, when the 192.168.10.0/24 and 192.168.11.0/24
networks were joined together as a single supernet, the resulting network would be
written as 192.168.10.0/23. Note, that the slash prefix annotation is generally used for
human benefit; infrastructure devices still use the 32-bit binary subnet mask internally
to identify networks and their routes. All networks must reserve the host addresses
made up entirely of either ones or zeros, to be used by the networks themselves.
This is so that each subnet has a network-specific address (the all-zeroes address)
and a broadcast address (the all-ones address). For example, a /24 network allows
for 8 bits of host addresses, but only 254 of the 256 possible addresses are available
for use. Similarly, /25 networks have a maximum of 7 bits for host addresses, with
126 of the 128 possible addresses available (the all-ones and all-zeroes addresses
from each subnet must be set aside for the subnets themselves). All the systems on
the same subnet must use the same subnet mask in order to communicate with each
other directly. If they use different subnet masks, they will think they are on
different networks, and will not be able to communicate with each other without
going through a router first. Hosts on different networks can use different subnet
masks, although the routers will have to be aware of the subnet masks in use on
each of the segments.
Subnet masks are used only by systems that need to communicate with the network
directly. For example, external systems do not need to be aware of the subnet masks in
use on your internal networks, since those systems will route data to your network by
way of your parent network’s address block. As such, remote routers need to know
only the provider’s subnet mask. For example, if you have a small network that uses
a /28 prefix that is a subset of your ISP’s /20 network, remote routers need to
know only about your upstream provider’s /20 network, while your upstream provider
needs to know your subnet mask in order to get the data to your local /28 network.
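The provider/customer relationship can be sketched with the `ipaddress` module (the specific addresses below are made up for illustration; the text does not give any):

```python
import ipaddress

# Hypothetical example addresses: a customer /28 carved out of an ISP's /20.
customer = ipaddress.ip_network("198.51.100.16/28")
provider = ipaddress.ip_network("198.51.96.0/20")

print(customer.subnet_of(provider))  # True: remote routers only need the /20
print(customer.num_addresses)        # 16 (14 usable, since the all-zeroes and
                                     # all-ones addresses are reserved)
```

A remote router holding one route for `198.51.96.0/20` reaches every customer block inside it; only the provider needs the finer /28 route.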
The rapid depletion of the class B addresses showed that three primary address classes
were not enough: class A was much too large and class C was much too small. Even a
class B address was too large for many networks but was used because it was better
than the other alternatives.
The obvious solution to the class B address crisis was to force organisations to use
multiple class C addresses. There were millions of these addresses available and they
were in no immediate danger of depletion. As is often the case, the obvious solution is
not as simple as it may seem. Each class C address requires its own entry within the
routing table. Assigning thousands or millions of class C addresses would cause the
routing table to grow so rapidly that the routers would soon be overwhelmed. The
solution requires a new way of assigning addresses and a new way of looking at
addresses.
Originally network addresses were assigned in more or less sequential order as they
were requested. This worked fine when the network was small and centralised.
However, it did not take network topology into account. Thus, only random chance
would determine if the same intermediate routers would be used to reach network
195.4.12.0 and network 195.4.13.0, which makes it difficult to reduce the size of the
routing table. Addresses can only be aggregated if they are contiguous numbers and
are reachable through the same route. For example, if addresses are contiguous for one
service provider, a single route can be created for that aggregation because that
service provider will have a limited number of routes to the Internet. But if one
network address is in France and the next contiguous address is in Australia, creating
a consolidated route for these addresses will not work.
Today, large, contiguous blocks of addresses are assigned to large network service
providers in a manner that better reflects the topology of the network. The service
providers then allocate chunks of these address blocks to the organisations to which
they provide network services. This alleviates the short-term shortage of class B
addresses and, because the assignment of addressees reflects the topology of the
network, it permits route aggregation. Under this new scheme, we know that network
195.4.12.0 and network 195.4.13.0 are reachable through the same intermediate
routers. In fact, both these addresses are in the range of the addresses assigned to
Europe, 194.0.0.0 to 195.255.255.255. Assigning addresses that reflect the topology
of the network enables route aggregation, but does not implement it. As long as
network 195.4.12.0 and network 195.4.13.0 are interpreted as separate class C
addresses, they will require separate entries in the routing table. A new, flexible way
of defining addresses is therefore, needed.
Evaluating addresses according to the class rules discussed above limits the length of
network numbers to 8, 16, or 24 bits (1, 2, or 3 bytes). The IP address, however, is not
really byte-oriented. It is 32 contiguous bits. A more flexible way to interpret the
network and host portions of an address is with a bit mask. An address bit mask works
in this way: if a bit is on in the mask, that equivalent bit in the address is interpreted as
a network bit; if a bit in the mask is off, the bit belongs to the host part of the address.
For example, if address 195.4.12.0 is interpreted as a class C address, the first 24 bits
are the network numbers and the last 8 bits are the host addresses. The network mask
that represents this is 255.255.255.0, 24 bits on and 8 bits off. The bit mask that is
derived from the traditional class structure is called the default mask or the natural
mask.
However, with bit masks we are no longer limited by the address class structure. A
mask of 255.255.0.0 can be applied to network address 195.4.0.0. This mask includes
all addresses from 195.4.0.0 to 195.4.255.255 in a single network number. In effect, it
creates a network number as large as a class B network in the class C address space.
Using bit masks to create networks larger than the natural mask is called supernetting,
and the use of a mask instead of the address class to determine the destination network
is called Classless Inter-Domain Routing (CIDR).
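A bit mask works exactly as described: a bitwise AND of address and mask selects the network part. A small sketch (the function name is illustrative):

```python
def network_number(addr: str, mask: str) -> str:
    """Apply a bit mask: bits that are on in the mask select the
    network part of the address (CIDR-style interpretation)."""
    a = [int(x) for x in addr.split(".")]
    m = [int(x) for x in mask.split(".")]
    return ".".join(str(x & y) for x, y in zip(a, m))

# The natural (class C) mask vs. a supernet mask on the same address:
print(network_number("195.4.12.7", "255.255.255.0"))  # 195.4.12.0
print(network_number("195.4.12.7", "255.255.0.0"))    # 195.4.0.0
```

The second call shows the point of the paragraph: with the 255.255.0.0 mask, the same address falls into a network number as large as a class B in the class C address space.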
Specifying both the address and the mask is cumbersome when writing out addresses.
A shorthand notation has been developed for writing CIDR addresses. Instead of
writing network 172.16.26.32 with a mask of 255.255.255.224, we can write
172.16.26.32/27. The format of this notation is address/prefix-length, where prefix-
length is the number of bits in the network portion of the address. Without this
notation, the address 172.16.26.32 could easily be interpreted as a host address. RFC
1878 lists all 32 possible prefix values, but little documentation is needed because the
CIDR prefix is much easier to understand and remember than address classes. I know
that 10.104.0.19 is a class A address, but writing it as 10.104.0.19/8 shows me that
this address has 8 bits for the network number and therefore, 24 bits for the host
number. I don’t have to remember anything about the class A address structure.
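The notation can be converted mechanically with the `ipaddress` module, reproducing the 172.16.26.32/27 example:

```python
import ipaddress

iface = ipaddress.ip_interface("172.16.26.32/27")
print(iface.netmask)   # 255.255.255.224
print(iface.network)   # 172.16.26.32/27

# And back from a dotted mask to a prefix length:
print(ipaddress.ip_network("0.0.0.0/255.255.255.224").prefixlen)  # 27
```

The module accepts either the prefix-length or the dotted-mask form, which is convenient when mixing CIDR notation with older mask-based configuration.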
Internet-Legal Versus Private Addressing
Not all firms have the luxury of using Internet-legal addresses on their hosts, for any
number of reasons. For example, there may be legacy applications that use hardcoded
addresses, or there may be too many systems across the organisation for a clean
upgrade to be successful. If you are unable to use Internet-legal addresses, you should
at least be aware that there are groups of “private” Internet addresses that can be used
on internal networks by anyone. These address pools were set aside in RFC 1918 and
therefore cannot be “assigned” to any organisation. The Internet’s backbone routers
are configured explicitly not to route packets with these addresses, so they are
completely useless outside an organisation’s internal network. The address blocks
available are listed in Table 3.

Table 3: Private address blocks set aside by RFC 1918

10.0.0.0    to 10.255.255.255   (10.0.0.0/8)
172.16.0.0  to 172.31.255.255   (172.16.0.0/12)
192.168.0.0 to 192.168.255.255  (192.168.0.0/16)
Since these addresses cannot be routed across the Internet, you must use an address-
translation gateway or a proxy server in conjunction with them. Otherwise, you will
not be able to communicate with any hosts on the Internet.
An important note here is that, since nobody can use these addresses on the Internet,
it is safe to assume that anybody who is using these addresses is also utilising an
address-translation gateway of some sort. Therefore, while you will never see these
addresses used as destinations on the Internet, if your organisation establishes a
private connection to a partner organisation that is using the same block of addresses
that you are using, your firms will not be able to communicate: the packets destined
for your partner’s network will appear to be local to your network, and will never be
forwarded to the remote network.
There are many other problems that arise from using these addresses, making their
general usage difficult for normal operations. For example, many application-layer
protocols embed addressing information directly into the protocol stream, and in order
for these protocols to work properly, the address-translation gateway has to be aware
of their mechanics. In the preceding scenario, the gateway has to rewrite the private
addresses (which are stored as application data inside the application protocol),
rewrite the UDP/TCP and IP checksums, and possibly rewrite TCP sequence numbers
as well. This is difficult to do even with simple and open protocols such as FTP, and
extremely difficult with proprietary, encrypted, or dynamic applications (these are
problems for many database protocols, network games, and voice-over-IP services, in
particular). These gateways almost never work for all the applications in use at a
specific location.
Fragmentation
What happens if the original host sends a packet which is too large to be handled by
the destination network? The routing algorithm can hardly bypass the destination.
Basically, the only solution to the problem is to allow routers to break packets up
into fragments: the data in the IP datagram is divided among two or more smaller IP
datagrams, and these smaller fragments are then sent over the outgoing link.
Table 4: Fragmentation Table
This means that the 4,980 data bytes in the original datagram must be allocated
among four separate fragments (each of which is also an IP datagram). The original
datagram was stamped with identification number 999. It is desirable to keep the
number of fragments to a minimum, because fragmentation and reassembly create
extra overhead for both the network and the hosts. This is usually achieved by
limiting the size of UDP and TCP segments.
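The arithmetic behind this example can be sketched in a few lines of Python. Assuming a 1,500-byte MTU and a 20-byte IP header (so 1,480 data bytes per fragment, a multiple of 8 as IP requires), 4,980 data bytes yield four fragments:

```python
# Sketch: split a datagram's data bytes into IP fragments for a given MTU.
# Offsets are expressed in 8-byte units, as in the real IP header.
# Identification number 999 is taken from the text's example.
def fragment(data_len: int, mtu: int, header_len: int = 20, ident: int = 999):
    max_data = (mtu - header_len) // 8 * 8   # payload must be a multiple of 8
    frags, offset = [], 0
    while data_len > 0:
        size = min(max_data, data_len)
        frags.append({"id": ident,
                      "offset": offset // 8,       # in 8-byte units
                      "length": size,
                      "mf": data_len > size})      # MF flag: more fragments
        offset += size
        data_len -= size
    return frags

for f in fragment(4980, 1500):
    print(f)
# Four fragments: three carrying 1,480 data bytes and a last one carrying 540.
```

All four fragments carry the same identification number, which is what allows the destination host to reassemble them into the original datagram.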
ICMP is often considered part of IP, but architecturally it lies just above IP, since
ICMP messages are carried inside IP packets. Like TCP and UDP segments, ICMP
messages are carried as IP payloads. Note that a datagram carries only the addresses
of the original sender and the final destination; it does not record the addresses of the
intermediate router(s) that forwarded it. For this reason, ICMP can send messages
only to the source and not to an intermediate router. Students are advised to refer to
References [1] and [5] for details of the message types.
Open Shortest Path First (OSPF) has become the standard interior gateway routing
protocol. It supports many advanced features [Ref. 1 and Ref. 5] to meet a long list of
requirements:
• Load distribution: When multiple paths to a destination have the same cost,
OSPF allows all of them to be used.
• Support for hierarchy within a single routing domain: By 1988, the Internet
had grown so large that no single router could be expected to know the entire
topology. OSPF was designed so that no router would have to.
• Security: All exchanges between OSPF routers are authenticated, so that only
trusted routers can participate.
OSPF supports three kinds of connections and networks [Ref. 1].
OSPF allows a large AS to be divided into smaller areas. The topology and details of
one area are not visible to the others. Every AS has a backbone area (called Area 0),
as shown in Figure 3.
Figure 3: AS 1 with its backbone area, showing the AS boundary router, the backbone routers, and the internal routers of each area
All areas are connected to the backbone, possibly by a tunnel, so it is possible to move
from one area to another through the backbone. The primary role of the backbone
area is to route traffic between the other areas in the AS. Inter-area routing within the
AS therefore moves a packet from its source area into the backbone, across the
backbone, and then into the destination area.
After having described all the components of OSPF, let us now conclude the topic by
describing its operation.
At its heart, OSPF is a link state protocol that uses flooding of link state information
and Dijkstra's least-cost path algorithm. Using flooding, each router informs all the
other routers in its area of its neighbours and link costs. This information allows
each router to construct the graph for its area(s) and compute the shortest paths using
Dijkstra's algorithm. The backbone routers do this too. In addition, the backbone
routers accept information from the area border routers in order to compute the best
route from each backbone router to every other router. This information is propagated
back to the area border routers, which advertise it within their areas. In this manner,
the optimal route is selected.
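As a rough sketch of the per-router computation (not of OSPF's message formats), the following Python code runs Dijkstra's algorithm over a small link state graph; the routers and costs are invented for illustration, whereas a real OSPF router builds this graph from flooded link state advertisements:

```python
# Sketch: Dijkstra's least-cost path computation, as run by each OSPF
# router over the link state graph of its area. Graph and costs invented.
import heapq

def dijkstra(graph, source):
    """Return the least cost from source to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip it
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a cheaper path to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1, "D": 4},
         "C": {"A": 5, "B": 1, "D": 1}, "D": {"B": 4, "C": 1}}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Because flooding gives every router in the area an identical copy of the graph, every router computes consistent shortest paths without any router-to-router negotiation.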
The purpose of the Border Gateway Protocol (BGP) is to enable two different ASes to
exchange routing information so that IP traffic can flow across the AS border. A
different protocol is needed between ASes because the objectives of interior and
exterior gateway routing protocols are different: exterior gateway routing protocols
such as BGP must deal with policy matters. BGP is fundamentally a distance vector
protocol, but it is more appropriately characterised as a path vector protocol [Ref. 5].
Instead of maintaining just the cost to each destination, each BGP router keeps track
of the path used [Ref. 1]. Neighbouring BGP routers, known as BGP peers, exchange
the list of ASes on the path to a given destination rather than cost information.
A major advantage of BGP is that it solves the count-to-infinity problem, as
illustrated in Figure 4.
Figure 4: A network of routers A to K, illustrating BGP's solution to the count-to-infinity problem
After receiving all the paths from its neighbours, G will find the best route available.
It will outright reject the paths from C and E, since they pass through G itself. The
choice is therefore between the routes announced by B and H. In this way BGP easily
solves the count-to-infinity problem. Now suppose C crashes, or the line B-C goes
down. If B then receives the two routes ABCDK and FBCDK from its neighbours,
it can reject both, because each visibly passes through the failed router C. Other
distance vector algorithms make the wrong choice here, because they cannot tell
which of their neighbours' routes to the destination are independent of the failure.
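The loop-rejection rule at the heart of this argument can be sketched in a few lines of Python; the router names follow Figure 4, but the advertised paths shown are illustrative:

```python
# Sketch of the path vector rule BGP uses to avoid count-to-infinity:
# a router rejects any advertised path that already contains itself,
# then keeps the shortest of the remaining paths. Paths are illustrative.
def best_path(router, advertised):
    """Pick the shortest loop-free path from the advertised paths."""
    loop_free = [p for p in advertised if router not in p]
    return min(loop_free, key=len) if loop_free else None

# Paths towards some destination D as heard by router G:
heard = [["C", "G", "D"],        # rejected: contains G itself
         ["E", "G", "D"],        # rejected: contains G itself
         ["B", "C", "D"],
         ["H", "I", "J", "D"]]
print(best_path("G", heard))     # ['B', 'C', 'D']
```

Because the full path is visible, a router never adopts a route that loops back through itself, which is precisely what pure cost-based distance vector algorithms cannot detect.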
4.4 SUMMARY
In this unit, we defined a large number of concepts and protocols. Tunneling is used
when the source and destination networks are of the same type but the network that
lies in between is different. A subnet allows a network to be split into several parts
for internal use while still acting like a single network to the outside world; it makes
the management of IP addresses simpler. Another reason for creating subnets is to
establish security boundaries between different work groups. The Internet is made up
of a large number of autonomous regions, each controlled by a different organisation,
which can use its own routing algorithm inside. A routing algorithm within an
autonomous region (such as a LAN or WAN) is called an interior gateway
protocol; an algorithm for routing between different autonomous regions is called
an exterior gateway routing protocol.
4.5 SOLUTIONS/ANSWERS
Check Your Progress 1
1) The following are the important features in which one network differs from
another:
• Protocols
• Addressing mechanism
• Size of a packet
• Quality of service
• Flow control
• Congestion control.
2) There are two such mechanisms:
3) Tunneling is used when the source and destination networks are of the same
type but the network which lies in between is different. It uses a mechanism
called encapsulation, where the data transfer unit of one protocol is enclosed
inside a different protocol.
4) BGP is different from other distance vector protocols because each BGP router
keeps track of the path used instead of maintaining just the cost to each
destination. Similarly, instead of periodically giving each neighbour its
estimated cost to each possible destination, each BGP router tells its
neighbours the exact path it is using.